Article

Nuclei segmentation with point annotations from pathology images via self-supervised learning and co-training

Journal

MEDICAL IMAGE ANALYSIS
Volume 89, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.media.2023.102933

Keywords

Nuclei segmentation; Weakly-supervised; Point annotation; Self-supervised learning; Co-training


This paper proposes a weakly-supervised learning method for nuclei segmentation that is trained using only point annotations. The method improves segmentation performance through the derivation of coarse pixel-level labels, a co-training strategy, and self-supervised visual representation learning. Experimental results demonstrate the superior segmentation performance of the proposed method compared to other weakly-supervised methods, and its competitive performance compared to fully-supervised methods.
Nuclei segmentation is a crucial task for whole slide image analysis in digital pathology. Generally, the segmentation performance of fully-supervised learning heavily depends on the amount and quality of the annotated data. However, it is time-consuming and expensive for professional pathologists to provide accurate pixel-level ground truth, while it is much easier to obtain coarse labels such as point annotations. In this paper, we propose a weakly-supervised learning method for nuclei segmentation that only requires point annotations for training. First, coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram and the k-means clustering method to avoid overfitting. Second, a co-training strategy with an exponential moving average method is designed to refine the incomplete supervision of the coarse labels. Third, a self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images, which transforms the hematoxylin component images into H&E stained images to gain a better understanding of the relationship between the nuclei and cytoplasm. We comprehensively evaluate the proposed method using two public datasets. Both visual and quantitative results demonstrate the superiority of our method over the state-of-the-art methods, and its competitive performance compared to fully-supervised methods. Code is available at https://github.com/hust-linyi/SC-Net.
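The first step described in the abstract, deriving coarse pixel-level labels from point annotations, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names `voronoi_labels` and `kmeans_labels` are hypothetical, and it assumes SciPy and scikit-learn are available. A discrete Voronoi partition assigns each pixel to its nearest annotated nucleus center, while k-means clustering of pixel colors provides a rough nuclei/background separation.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def voronoi_labels(points, shape):
    # Discrete Voronoi diagram: label each pixel with the index of the
    # nearest annotated point; cell borders approximate nucleus boundaries.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1)
    _, nearest = cKDTree(points).query(grid)
    return nearest.reshape(shape)

def kmeans_labels(image, k=3):
    # Cluster pixel colors into k groups (e.g. nuclei / cytoplasm / background)
    # to obtain a coarse pixel-level partition from color statistics alone.
    pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    return km.labels_.reshape(image.shape[:2])
```

In the paper's setting, such coarse labels supervise only confidently assigned pixels, leaving ambiguous regions unlabeled to avoid overfitting to the crude partition.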
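The second step, co-training with an exponential moving average (EMA), typically maintains a teacher network whose weights are a smoothed copy of the student's. The sketch below shows only the EMA weight update on plain arrays; the function name `ema_update` and the parameter-list representation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ema_update(teacher_params, student_params, decay=0.99):
    # Exponential moving average of model weights:
    #   teacher = decay * teacher + (1 - decay) * student
    # The slowly-moving teacher yields more stable pseudo-labels, which in
    # co-training refine the incomplete supervision from the coarse labels.
    return [decay * t + (1 - decay) * s
            for t, s in zip(teacher_params, student_params)]
```

A higher `decay` makes the teacher change more slowly, trading responsiveness for stability of the pseudo-labels.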
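The third step relies on stain separation: the hematoxylin component (which stains nuclei) is isolated and the network learns to restore the full H&E appearance from it. A minimal sketch of constructing such an input, assuming scikit-image's stain deconvolution (`rgb2hed` / `hed2rgb`); the helper name `hematoxylin_only` is hypothetical.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def hematoxylin_only(image):
    # Deconvolve an H&E image into (H, E, DAB) stain channels, zero out
    # everything except hematoxylin, and convert back to RGB. Pairing this
    # H-only image with the original H&E image gives a self-supervised
    # restoration task relating nuclei to their surrounding cytoplasm.
    hed = rgb2hed(image)
    h_only = np.zeros_like(hed)
    h_only[..., 0] = hed[..., 0]
    return hed2rgb(h_only)
```

The pretext task never needs manual labels: the original image itself is the reconstruction target.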

