4.6 Article

U2F-GAN: Weakly Supervised Super-pixel Segmentation in Thyroid Ultrasound Images

Journal

COGNITIVE COMPUTATION
Volume 13, Issue 5, Pages 1099-1113

Publisher

SPRINGER
DOI: 10.1007/s12559-021-09909-7

Keywords

Thyroid nodule segmentation; Ultrasound images; Weakly supervised generative adversarial network; Super-pixel processing mechanism; Similarity comparison module; Distributed loss function with constraints

Funding

  1. National Natural Science Foundation of China [61871135, 81830058, 81627804]
  2. Science and Technology Commission of Shanghai Municipality [18511102904, 20DZ1100104]

A weakly supervised framework called U2F-GAN is proposed for nodule segmentation in thyroid ultrasound images, using only a handful of rough bounding box annotations to generate reliable labels. By alternating between generating masks and learning a segmentation network adversarially, the method effectively removes noise in the localization annotations and enhances the network's generalization capability, resulting in a significant improvement in segmentation performance.
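The listing does not include code; the following is a minimal illustrative sketch of how super-pixels lying inside a rough bounding box could be turned into candidate nodule labels, using scikit-image's SLIC. The function name superpixel_candidates, the bbox format, and all parameter values are assumptions for illustration, not the authors' implementation.

import numpy as np
from skimage.segmentation import slic

def superpixel_candidates(image, bbox, n_segments=300):
    # Hypothetical helper: mark super-pixels whose centroids fall inside
    # the rough bounding box (x0, y0, x1, y1) as nodule label candidates.
    # channel_axis=None tells SLIC the ultrasound image is grayscale
    # (scikit-image >= 0.19).
    segments = slic(image, n_segments=n_segments, compactness=10,
                    channel_axis=None)
    x0, y0, x1, y1 = bbox
    candidates = np.zeros_like(segments, dtype=bool)
    for label in np.unique(segments):
        ys, xs = np.nonzero(segments == label)
        if x0 <= xs.mean() <= x1 and y0 <= ys.mean() <= y1:
            candidates[segments == label] = True
    return segments, candidates

Such a candidate mask is deliberately noisy; in the paper it is the adversarial training, together with the similarity comparison module, that refines these weak labels into reliable ones.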
Precise nodule segmentation in thyroid ultrasound images is important for clinical quantitative analysis and diagnosis. Fully supervised deep learning methods can effectively extract representative features from nodules and background. Despite this success, deep learning-based segmentation methods still face a critical hindrance: the difficulty of acquiring sufficient training data due to high annotation costs. To this end, we propose a weakly supervised framework called uncertainty to fine generative adversarial network (U2F-GAN) for nodule segmentation in thyroid ultrasound images, which exploits only a handful of rough bounding box annotations and generates reliable labels from these weak supervisions. Based on a feature-matching GAN, the proposed method alternates between generating masks and learning a segmentation network in an adversarial manner. A super-pixel processing mechanism is adopted to reflect low-level image structure features for learning and inferring semantic segmentation, which largely improves the efficiency of the training process. In addition, we introduce a similarity comparison module and a distributed loss function with constraints to effectively remove noise in the localization annotations and enhance the generalization capability of the network, thus strengthening the overall segmentation performance. Compared with existing weakly supervised approaches, the proposed U2F-GAN yields a significant performance boost. Its segmentation results are also comparable to those of fully supervised methods, while the annotation burden is much lower. Moreover, the network trains much faster than other weakly supervised methods, allowing the model to be updated promptly, which is beneficial in high-throughput medical imaging settings.
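As a rough sketch of the alternating adversarial scheme described above, the PyTorch-style training step below lets a segmentation network propose masks while a discriminator trained on (image, weak-mask) pairs provides a feature-matching signal. The modules seg_net and disc, the assumption that disc returns (logits, features), and the specific losses are placeholders for illustration only, not the authors' architecture or their constrained distributed loss.

import torch
import torch.nn.functional as F

def train_step(seg_net, disc, images, weak_masks, opt_seg, opt_disc):
    # Discriminator update: weak-label masks act as the "real" distribution,
    # the segmentation network's predictions as the "fake" one.
    with torch.no_grad():
        pred = torch.sigmoid(seg_net(images))
    logit_real, _ = disc(torch.cat([images, weak_masks], dim=1))
    logit_fake, _ = disc(torch.cat([images, pred], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real))
              + F.binary_cross_entropy_with_logits(logit_fake, torch.zeros_like(logit_fake)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # Segmentation (generator) update with a feature-matching objective:
    # match the mean discriminator features of predicted and weak masks
    # rather than directly fooling the classifier head.
    pred = torch.sigmoid(seg_net(images))
    _, feat_fake = disc(torch.cat([images, pred], dim=1))
    with torch.no_grad():
        _, feat_real = disc(torch.cat([images, weak_masks], dim=1))
    g_loss = F.l1_loss(feat_fake.mean(dim=0), feat_real.mean(dim=0))
    opt_seg.zero_grad()
    g_loss.backward()
    opt_seg.step()
    return d_loss.item(), g_loss.item()

Alternating these two updates over mini-batches mirrors the "generate masks, then learn the segmentation network adversarially" loop described in the abstract; the paper's similarity comparison module and distributed loss with constraints would contribute additional terms beyond g_loss.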
