Article

Comparing data quality from an online and in-person lab sample on dynamic theory of mind tasks

Journal

BEHAVIOR RESEARCH METHODS
Volume -, Issue -, Pages -

Publisher

SPRINGER
DOI: 10.3758/s13428-023-02152-y

Keywords

Theory of mind; Crowdsourcing; Social cognition

Abstract

Nearly half of the published research in psychology is conducted with online samples, but the preponderance of these studies relies primarily on self-report measures. The current study validated data quality from an online sample on novel, dynamic tasks by comparing performance between an in-lab and an online sample on two dynamic measures of theory of mind, the ability to infer others' mental states. Theory of mind is a cognitively complex construct that has been widely studied across multiple domains of psychology. One task was based on the show The Office® and had previously been validated by the authors with in-lab samples. The second was a novel task based on the show Nathan for You®, selected to account for familiarity effects associated with The Office. Both tasks measured several dimensions of theory of mind (inferring beliefs, understanding motivations, detecting deception, identifying faux pas, and understanding emotions). The in-person lab samples (N = 144 and 177, respectively) completed the tasks between subjects, whereas the online sample (N = 347, recruited from Prolific Academic) completed them within subjects, with order counterbalanced. The online sample's performance across both tasks was reliable (Cronbach's alpha = .66). For The Office, the in-person sample outperformed the online sample on some types of theory of mind, but this difference was driven by greater familiarity with the show. Indeed, for the relatively unfamiliar show Nathan for You, performance did not differ between the two samples. Together, these results suggest that crowdsourcing platforms elicit reliable performance on novel, dynamic, complex tasks.
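For readers unfamiliar with the reliability statistic reported above, the sketch below shows one conventional way to compute Cronbach's alpha from a respondents-by-items score matrix. This is a minimal Python illustration, not code or data from the paper; the function name, variable names, and simulated scores are all hypothetical.

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (here: tasks)
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: one row per online participant, one column per task.
rng = np.random.default_rng(0)
ability = rng.normal(size=(347, 1))              # shared ability component
scores = ability + rng.normal(size=(347, 2))     # two correlated task scores
print(round(cronbach_alpha(scores), 2))

With only two tasks, as in the online sample here, alpha is a simple function of the inter-task correlation, which is why a single coefficient can summarize cross-task consistency.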
