Journal
COMPUTERS IN HUMAN BEHAVIOR
Volume 29, Issue 6, Pages 2156-2160
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.chb.2013.05.009
Keywords
MTurk; Social media; Methodology; Recruitment; Crowd-sourcing
Recent and emerging technology permits psychologists to recruit and test participants in more ways than ever before. But to what extent can behavioral scientists trust these varied methods to yield reasonably equivalent results? Here, we took a behavioral, face-to-face task and converted it to an online test. We compared the online responses of participants recruited via Amazon's Mechanical Turk (MTurk) with those of participants recruited via social media postings on Twitter, Facebook, and Reddit. We also recruited a standard sample of students on a college campus and tested them in person, not via a computer interface. The demographics of the three samples differed, with MTurk participants being significantly more socio-economically and ethnically diverse, yet the test results across the three samples were almost indistinguishable. We conclude that for some behavioral tests, online recruitment and testing can be a valid, and sometimes even superior, partner to in-person data collection. (c) 2013 Elsevier Ltd. All rights reserved.