Article

Automatic fluid segmentation in retinal optical coherence tomography images using attention based deep learning

Journal

NEUROCOMPUTING
Volume 452, Pages 576-591

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2020.07.143

Keywords

Optical coherence tomography; Fluid region segmentation; Deep learning; Medical image segmentation; U-Net; Attention mechanism

Funding

  1. National Natural Science Foundation of China [61403287, 61472293, 61572381]
  2. Natural Science Foundation of Hubei Province [2014CFB288]

An improved U-Net segmentation method with attention mechanism and dense skip connections is proposed in this paper, which makes the segmentation results more precise and avoids excessive calculation. The method includes a joint loss function to address the problem of merging multiple fluid regions into one.
Optical coherence tomography (OCT) is one of the most commonly used ophthalmic diagnostic techniques. Macular Edema (ME) is the swelling of the macular region in the eye. Segmentation of the fluid region in the retinal layer is an important step in detecting lesions. However, manual segmentation is often a time-consuming and subjective process. In this paper, an improved U-Net segmentation method is proposed. In this method, the attention mechanism is introduced to automatically locate the fluid region, which avoids the problem of excessive calculation in multi-stage methods. At the same time, the use of dense skip connections, which combine high-level and low-level features, makes the segmentation results more precise. The loss function is a joint loss, including weighted binary cross entropy loss, dice loss, and regression loss, where the regression loss is used to avoid the problem of merging multiple fluid regions into one. The experimental results show that the proposed method can adapt to OCT scans acquired by various imaging devices, and that it is more effective than other state-of-the-art fluid segmentation methods. (c) 2020 Elsevier B.V. All rights reserved.
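The joint loss described in the abstract could be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the positive-class weight, the loss weights `alpha`/`beta`/`gamma`, and the use of a squared region-count error as the regression term are all assumptions made for the example.

```python
import numpy as np

def weighted_bce(pred, target, w_pos=2.0, eps=1e-7):
    # Weighted binary cross entropy: up-weights the (rarer) fluid pixels.
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(w_pos * target * np.log(pred)
                    + (1 - target) * np.log(1 - pred))

def dice_loss(pred, target, eps=1e-7):
    # Dice loss: 1 minus the Dice coefficient, robust to class imbalance.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def joint_loss(pred, target, n_pred_regions, n_true_regions,
               alpha=1.0, beta=1.0, gamma=0.1):
    # Hypothetical regression term: penalising a mismatch in the number of
    # predicted fluid regions discourages merging neighbouring regions.
    reg = float(n_pred_regions - n_true_regions) ** 2
    return (alpha * weighted_bce(pred, target)
            + beta * dice_loss(pred, target)
            + gamma * reg)
```

In practice each term would operate on the network's per-pixel probability map; the region counts could come from connected-component labelling of the thresholded prediction and the ground-truth mask.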
