Proceedings Paper

Measuring Annotator Agreement Generally across Complex Structured, Multi-object, and Free-text Annotation Tasks

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3485447.3512242

Keywords

annotation; labeling; inter-annotator agreement; quality assurance

Funding

  1. Knight Foundation
  2. Micron Foundation
  3. Good Systems, a UT Austin Grand Challenge to develop responsible AI technologies

Abstract

This study investigates and proposes new measures for assessing inter-annotator agreement (IAA) in complex labeling tasks.
When annotators label data, a key metric for quality assurance is inter-annotator agreement (IAA): the extent to which annotators agree on their labels. Though many IAA measures exist for simple categorical and ordinal labeling tasks, relatively little work has considered more complex labeling tasks, such as structured, multi-object, and free-text annotations. Krippendorff's α, best known for use with simpler labeling tasks, does have a distance-based formulation with broader applicability, but little work has studied its efficacy and consistency across complex annotation tasks. We investigate the design and evaluation of IAA measures for complex annotation tasks, with evaluation spanning seven diverse tasks: image bounding boxes, image keypoints, text sequence tagging, ranked lists, free text translations, numeric vectors, and syntax trees. We identify the difficulty of interpretability and the complexity of choosing a distance function as key obstacles in applying Krippendorff's α generally across these tasks. We propose two novel, more interpretable measures, showing they yield more consistent IAA measures across tasks and annotation distance functions.
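
For context on the distance-based formulation of Krippendorff's α that the abstract refers to (a standard presentation; the notation below is not taken from the paper itself): given a distance function δ(a, b) between two annotations, agreement is

    α = 1 − D_o / D_e

where D_o is the mean of δ(a, b) over pairs of annotations given to the same item and D_e is the mean of δ(a, b) over all pairs of annotations in the dataset. α = 1 indicates perfect agreement, while values near or below 0 indicate agreement no better than chance. Selecting δ for each task type (e.g., an overlap-based distance for bounding boxes or an edit distance for text sequences) is the design decision the abstract identifies as a key obstacle.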
