Article

Perspectives on crowdsourcing annotations for natural language processing

Journal

LANGUAGE RESOURCES AND EVALUATION
Volume 47, Issue 1, Pages 9-31

Publisher

SPRINGER
DOI: 10.1007/s10579-012-9176-1

Keywords

Human computation; Crowdsourcing; NLP; Wikipedia; Mechanical Turk; Games with a purpose; Annotation

Funding

  1. CSIDM from the National Research Foundation (NRF) [CSIDM-200805]

Abstract

Crowdsourcing has emerged as a new method for obtaining annotations to train machine learning models. While many variants of this process exist, they differ largely in how they motivate subjects to contribute and in the scale of their applications. To date, no study has helped the practitioner decide what form an annotation application should take to best reach its objectives within the constraints of a project. To fill this gap, we provide a faceted analysis of crowdsourcing from a practitioner's perspective and show how our facets apply to existing published crowdsourced annotation applications. We then summarize how the major crowdsourcing genres fill different parts of this multi-dimensional space, which leads to our recommendations on the potential opportunities crowdsourcing offers to future annotation efforts.
