Article

Crowdsourcing research questions in science

Journal

RESEARCH POLICY
Volume 51, Issue 4, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.respol.2022.104491

Keywords

Crowd science; Citizen science; Crowdsourcing; Problem solving; Problem finding; Agenda setting; Organization of science

Funding

  1. Austrian National Foundation for Research, Technology and Development
  2. Open Innovation in Science

Abstract

Scientists are increasingly crossing the boundaries of the professional system by involving the general public (the crowd) directly in their research. However, this crowd involvement tends to be confined to empirical work and it is not clear whether and how crowds can also be involved in conceptual stages such as formulating the questions that research is trying to address. Drawing on five different paradigms of crowdsourcing and related mechanisms, we first discuss potential merits of involving crowds in the formulation of research questions (RQs). We then analyze data from two crowdsourcing projects in the medical sciences to describe key features of RQs generated by crowd members and compare the quality of crowd contributions to that of RQs generated in the conventional scientific process. We find that the majority of crowd contributions are problem restatements that can be useful to assess problem importance but provide little guidance regarding potential causes or solutions. At the same time, crowd-generated research questions frequently cross disciplinary boundaries by combining elements from different fields within and especially outside medicine. Using evaluations by professional scientists, we find that the average crowd contribution has lower novelty and potential scientific impact than professional research questions, but comparable practical impact. Crowd contributions outperform professional RQs once we apply selection mechanisms at the level of individual contributors or across contributors. Our findings advance research on crowd and citizen science, crowdsourcing and distributed knowledge production, as well as the organization of science. We also inform ongoing policy debates around the involvement of citizens in research in general, and agenda setting in particular.
