Article

Multi-Scale and Multi-Branch Convolutional Neural Network for Retinal Image Segmentation

Journal

SYMMETRY-BASEL
Volume 13, Issue 3

Publisher

MDPI
DOI: 10.3390/sym13030365

Keywords

retinal image segmentation; convolutional neural network; deep learning

Funding

  1. National Natural Science Foundation of China [61163036, 61962054]
  2. 2016 Gansu Provincial Science and Technology Plan - Natural Science Foundation of China [1606RJZA047]
  3. 2012 Gansu Provincial University Fundamental Research Fund for Special Research Funds
  4. Northwest Normal University's Third Phase of Knowledge and Innovation Engineering Research Backbone Project [nwnu-kjcxgc-03-67]
  5. Gansu Province Postgraduate Supervisor Program in Colleges and Universities [1201-16]

Abstract

The accurate segmentation of retinal images is a basic step in screening for retinopathy and glaucoma. Most existing retinal image segmentation methods extract insufficient feature information and are susceptible to lesion areas and poor image quality, which leads to poor recovery of contextual information. As a result, their segmentation outputs are noisy and low in accuracy. This paper therefore proposes a multi-scale and multi-branch convolutional neural network model (MSMB-Net) for retinal image segmentation. The model uses atrous convolutions with different dilation rates, together with skip connections, to reduce the loss of feature information; the resulting receptive fields of different sizes capture global context information. The model fully integrates shallow and deep semantic information and retains rich spatial information, and it embeds an improved attention mechanism to obtain finer detail, which improves segmentation accuracy. The method was validated on the DRIVE, STARE and CHASE fundus vessel datasets, with accuracy/F1 scores of 0.9708/0.8320, 0.9753/0.8469 and 0.9767/0.8190, respectively. Its effectiveness was further validated on the DRISHTI-GS1 optic disc and cup dataset, with an accuracy/F1 of 0.9985/0.9770. Experimental results show that, compared with existing retinal image segmentation methods, the proposed method achieves good segmentation performance on all four benchmarks.
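The multi-scale mechanism the abstract describes — parallel atrous (dilated) convolution branches whose different dilation rates give different receptive-field sizes — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names (`dilated_conv2d`, `multi_branch`) and the element-wise-sum fusion are illustrative assumptions; MSMB-Net's actual branch design and fusion are specified in the paper itself.

```python
import numpy as np

def dilated_conv2d(image, kernel, rate=1):
    """2D atrous (dilated) convolution with 'same' zero padding.

    Dilation inserts (rate - 1) zeros between kernel taps, enlarging the
    receptive field without adding parameters -- the mechanism used to
    capture context at multiple scales.
    """
    kh, kw = kernel.shape
    # Effective kernel size after dilation, and padding to keep 'same' output.
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1
    padded = np.pad(image, ((eh // 2, eh // 2), (ew // 2, ew // 2)))
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(kh):
        for j in range(kw):
            # Each kernel tap samples the input at stride `rate`.
            out += kernel[i, j] * padded[i * rate:i * rate + h,
                                         j * rate:j * rate + w]
    return out

def multi_branch(image, kernel, rates=(1, 2, 4)):
    """Fuse parallel atrous branches (here by element-wise sum, one
    possible fusion choice) to combine local and global context."""
    return sum(dilated_conv2d(image, kernel, r) for r in rates)
```

Because every branch shares the same output size, the fused map preserves spatial resolution while mixing context gathered at several scales — the same reason atrous pyramids are popular in segmentation networks generally.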

