Journal
2015 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV)
Volume: -, Issue: -, Pages: 1169-1176
Publisher
IEEE
DOI: 10.1109/WACV.2015.160
Keywords
-
Abstract
Analyses of biomedical images often rely on demarcating the boundaries of biological structures (segmentation). While numerous approaches are adopted to address the segmentation problem, including collecting annotations from domain experts and automated algorithms, the lack of comparative benchmarking makes it challenging to determine the current state of the art, recognize limitations of existing approaches, and identify relevant future research directions. To provide practical guidance, we evaluated and compared the performance of trained experts, crowdsourced non-experts, and algorithms for annotating 305 objects coming from six datasets that include phase contrast, fluorescence, and magnetic resonance images. Compared to the gold standard established by expert consensus, we found the best annotators were experts, followed by non-experts, and then algorithms. This analysis revealed that online paid crowdsourced workers without domain-specific backgrounds are reliable annotators to use as part of the laboratory protocol for segmenting biomedical images. We also found that fusing the segmentations created by crowdsourced internet workers and algorithms yielded improved segmentation results over segmentations created by single crowdsourced or algorithm annotators, respectively. We invite extensions of our work by sharing our datasets and associated segmentation annotations (http://www.cs.bu.edu/~betke/BiomedicalImageSegmentation).
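The abstract does not specify how the segmentations were fused; a minimal sketch of one common approach, pixel-wise majority voting over binary masks scored against a gold standard with the Dice coefficient, might look like the following (function names and the toy masks are illustrative, not from the paper):

```python
import numpy as np

def fuse_majority(masks):
    """Pixel-wise majority vote over a list of binary masks:
    a pixel is foreground iff more than half the annotators marked it."""
    stack = np.stack(masks).astype(int)
    return (2 * stack.sum(axis=0) > len(masks)).astype(int)

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# Toy example: three imperfect annotations of a 3x3 object.
gold = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
m1   = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])  # misses one pixel
m2   = np.array([[1, 1, 1], [1, 1, 0], [0, 0, 0]])  # one false positive
m3   = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 0]])  # misses one pixel

fused = fuse_majority([m1, m2, m3])
# Here the fused mask recovers the gold standard exactly, while each
# individual annotation has Dice < 1.0 against it.
```

In this toy case individual Dice scores are about 0.86-0.89 while the fused mask reaches 1.0, illustrating why combining annotators can outperform any single one; more elaborate fusion schemes (e.g. weighting annotators by estimated reliability) follow the same pattern.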