Article

Bin loss for hard exudates segmentation in fundus images

Journal

NEUROCOMPUTING
Volume 392, Pages 314-324

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2018.10.103

Keywords

Hard exudates segmentation; Diabetic retinopathy; Bin loss; Deep learning; Fundus image

Funding

  1. National Natural Science Foundation [61872200]
  2. National Key Research and Development Program of China [2018YFB1003405, 2016YFC0400709]
  3. Natural Science Foundation of Tianjin [18YFYZCG00060]

Abstract

Diabetic retinopathy is one of the leading causes of blindness, and the segmentation of hard exudates in color fundus images is crucial for its early diagnosis. This segmentation is a difficult task due to the uncertainty of hard exudates in size, shape, and contrast. Class-balanced cross entropy (CBCE) loss is the most popular objective function for image segmentation, introduced to address the class-unbalance problem. However, we show that under CBCE background pixels tend to be misclassified as hard exudates, since the loss for a misclassified background pixel is much smaller than that for a misclassified hard exudate pixel; we call this the loss-unbalance problem. This paper proposes a top-k loss, which addresses both class-unbalance and loss-unbalance by focusing more on hard-to-classify pixels. Moreover, a fast version of the top-k loss, named bin loss, is implemented for efficiency; it reduces the time complexity from the O(n log n) of top-k loss to O(n), where n is the number of background pixels. We evaluated the proposed bin loss on two public datasets for hard exudates segmentation, e-ophtha EX and IDRiD. Furthermore, three popular models for image segmentation, HED, DeepLab v2, and FCRN, were used to evaluate the versatility of bin loss. Extensive experiments show that each model performs better with the proposed bin loss than with CBCE loss, demonstrating that bin loss is versatile and can be applied to different models for performance improvement. Specifically, for DeepLab v2 on e-ophtha EX, the F-score increases by 5.2 percentage points and the area under the SE-PPV curve (AUC) by 10.6 percentage points. Moreover, the AUC increases by more than 4 percentage points on the IDRiD dataset for both DeepLab v2 and FCRN. The source code of bin loss is available at: https://github.com/guomugong/bin_loss. (C) 2019 Elsevier B.V. All rights reserved.
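The abstract's complexity claim can be illustrated with a minimal sketch: a top-k selection over background-pixel losses requires sorting (O(n log n)), while a histogram-based variant buckets losses into fixed-width bins and walks the bins from hardest to easiest in O(n). This is only an assumption-laden illustration of the general binning idea described in the abstract, not the authors' implementation (see their GitHub repository for that); the function names and the `num_bins` parameter are hypothetical.

```python
import numpy as np

def topk_background_loss(bg_losses, k):
    """Reference top-k: sort background-pixel losses and keep the k hardest.
    Sorting makes this O(n log n) in the number of background pixels."""
    return np.sort(bg_losses)[::-1][:k].sum()

def bin_background_loss(bg_losses, k, num_bins=100):
    """Histogram sketch of a 'bin loss': bucket losses into fixed-width bins
    (one O(n) pass), then collect ~k hardest pixels by scanning bins from
    highest to lowest. Within a bin, losses are treated as interchangeable,
    so this approximates top-k while avoiding the sort."""
    lo, hi = bg_losses.min(), bg_losses.max()
    if hi == lo:  # all losses equal: any k pixels are the "hardest"
        return bg_losses[:k].sum()
    # O(n): map each loss to a bin index in [0, num_bins - 1]
    idx = np.minimum(((bg_losses - lo) / (hi - lo) * num_bins).astype(int),
                     num_bins - 1)
    total, count = 0.0, 0
    for b in range(num_bins - 1, -1, -1):  # hardest bins first
        sel = bg_losses[idx == b]
        take = min(len(sel), k - count)
        total += sel[:take].sum()
        count += take
        if count >= k:
            break
    return total
```

With enough bins the approximation is close to exact top-k, and when k equals the number of background pixels both reduce to the plain sum of losses.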

