Article

Multi-Scale and Multi-Branch Convolutional Neural Network for Retinal Image Segmentation

Journal

SYMMETRY-BASEL
Volume 13, Issue 3, Pages -

Publisher

MDPI
DOI: 10.3390/sym13030365

Keywords

retinal image segmentation; convolutional neural network; deep learning

Funding

  1. National Natural Science Foundation of China [61163036, 61962054]
  2. 2016 Gansu Provincial Science and Technology Plan - Natural Science Foundation of China [1606RJZA047]
  3. 2012 Gansu Provincial University Fundamental Research Fund for Special Research Funds
  4. Northwest Normal University's Third Phase of Knowledge and Innovation Engineering Research Backbone Project [nwnu-kjcxgc-03-67]
  5. Gansu Province Postgraduate Supervisor Program in Colleges and Universities [1201-16]

Abstract

This study proposes a multi-scale and multi-branch convolutional neural network model (MSMB-Net) for retinal image segmentation. The model captures global context information over receptive fields of different sizes, integrates shallow and deep semantic information, and embeds an improved attention mechanism to improve segmentation accuracy. Experimental results demonstrate that, compared with existing retinal image segmentation methods, the proposed method achieves good segmentation performance on all four benchmarks.
The accurate segmentation of retinal images is a basic step in screening for retinopathy and glaucoma. Most existing retinal image segmentation methods extract insufficient feature information and are susceptible to lesion areas and poor image quality, which leads to poor recovery of contextual information and makes the segmentation results noisy and low in accuracy. Therefore, this paper proposes a multi-scale and multi-branch convolutional neural network (MSMB-Net) for retinal image segmentation. The model uses atrous convolutions with different dilation rates together with skip connections to reduce the loss of feature information, and receptive fields of different sizes capture global context information. The model fully integrates shallow and deep semantic information and retains rich spatial information. The network embeds an improved attention mechanism to obtain more detailed information, which improves segmentation accuracy. Finally, the proposed method was validated on the fundus vessel datasets DRIVE, STARE and CHASE, with accuracy/F1 scores of 0.9708/0.8320, 0.9753/0.8469 and 0.9767/0.8190, respectively. Its effectiveness was further validated on the optic disc and optic cup dataset DRISHTI-GS1, with an accuracy/F1 of 0.9985/0.9770. Experimental results show that, compared with existing retinal image segmentation methods, the proposed method achieves good segmentation performance on all four benchmarks.
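To make the architectural idea concrete, below is a minimal PyTorch sketch of a multi-branch block that combines atrous (dilated) convolutions with different dilation rates, a skip connection, and a simple channel-attention gate. This is an illustrative sketch only, not the authors' released MSMB-Net code: the class name MultiBranchAtrousBlock, the dilation rates (1, 2, 4), and the SE-style attention are assumptions, and the paper's "improved attention mechanism" and exact branch configuration may differ.

```python
# Illustrative sketch (not the authors' code): parallel atrous convolutions
# at several dilation rates, fused by a 1x1 convolution, re-weighted by a
# simple channel-attention gate, with a residual skip connection.
import torch
import torch.nn as nn


class MultiBranchAtrousBlock(nn.Module):
    """Multi-scale feature block in the spirit of MSMB-Net's description."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding == dilation keeps the
        # spatial size, so branch outputs can be concatenated directly.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.skip = (nn.Conv2d(in_ch, out_ch, kernel_size=1)
                     if in_ch != out_ch else nn.Identity())
        # Squeeze-and-excitation style channel attention (a common, simple
        # stand-in for the paper's improved attention mechanism).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        fused = fused * self.attn(fused)  # re-weight channels
        return fused + self.skip(x)       # skip connection preserves detail


if __name__ == "__main__":
    # Toy forward pass on a single-channel 64x64 retinal patch.
    block = MultiBranchAtrousBlock(in_ch=1, out_ch=32)
    y = block(torch.randn(1, 1, 64, 64))
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```

In such a block, the smaller dilation rates preserve fine vessel detail while the larger rates widen the receptive field for global context; the 1x1 fusion and skip connection are one plausible way to integrate shallow and deep features as the abstract describes.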
