Proceedings Paper

How to Collect Segmentations for Biomedical Images? A Benchmark Evaluating the Performance of Experts, Crowdsourced Non-Experts, and Algorithms


Analyses of biomedical images often rely on demarcating the boundaries of biological structures (segmentation). While numerous approaches have been adopted to address the segmentation problem, including collecting annotations from domain experts and using automated algorithms, the lack of comparative benchmarking makes it challenging to determine the current state of the art, recognize the limitations of existing approaches, and identify relevant directions for future research. To provide practical guidance, we evaluated and compared the performance of trained experts, crowdsourced non-experts, and algorithms in annotating 305 objects from six datasets that include phase-contrast, fluorescence, and magnetic resonance images. Compared to the gold standard established by expert consensus, the best annotators were experts, followed by non-experts, and then algorithms. This analysis revealed that paid online crowdsourced workers without domain-specific backgrounds are reliable annotators to use as part of a laboratory protocol for segmenting biomedical images. We also found that fusing the segmentations created by crowdsourced internet workers and algorithms yielded better results than individual crowdsourced or algorithmic segmentations alone. We invite extensions of our work by sharing our datasets and the associated segmentation annotations (http://www.cs.bu.edu/~betke/BiomedicalImageSegmentation).
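The abstract reports that fusing crowdsourced and algorithmic segmentations outperforms individual annotations, with quality judged against a gold standard built from expert consensus. As a minimal illustrative sketch only (the paper's actual fusion and evaluation methods are not specified here, and the function names and toy data below are assumptions), one simple fusion strategy is per-pixel majority voting over binary masks, scored against the gold standard with the Dice coefficient:

```python
import numpy as np

def majority_vote_fusion(masks):
    """Fuse binary segmentation masks by per-pixel majority vote (ties -> foreground)."""
    stacked = np.stack(masks, axis=0)           # shape: (n_annotators, H, W)
    votes = stacked.sum(axis=0)                 # per-pixel count of foreground votes
    return (votes * 2 >= len(masks)).astype(np.uint8)

def dice_coefficient(pred, gold):
    """Dice similarity between a predicted mask and the gold-standard mask."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    intersection = np.logical_and(pred, gold).sum()
    total = pred.sum() + gold.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy example: three noisy annotations of the same square object.
rng = np.random.default_rng(0)
gold = np.zeros((64, 64), dtype=np.uint8)
gold[16:48, 16:48] = 1
annotations = [np.clip(gold + (rng.random(gold.shape) < 0.05), 0, 1)
               for _ in range(3)]

fused = majority_vote_fusion(annotations)
print("single annotator Dice:", dice_coefficient(annotations[0], gold))
print("fused Dice:           ", dice_coefficient(fused, gold))
```

In this toy setting, independent annotator errors tend to cancel under the vote, so the fused mask typically scores higher than any single annotation, which is the intuition behind the fusion result the paper reports.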
