4.6 Article

Multi-scale GCN-assisted two-stage network for joint segmentation of retinal layers and discs in peripapillary OCT images

Journal

BIOMEDICAL OPTICS EXPRESS
Volume 12, Issue 4, Pages 2204-2220

Publisher

Optica Publishing Group
DOI: 10.1364/BOE.417212

Keywords

-

Funding

  1. National Key Research and Development Program of China [2019YFB2203104]
  2. National Natural Science Foundation of China [61905141, 62035016]
  3. Shanghai Sailing Program [19YF1439700]
  4. Shanghai Shen Kang Hospital Development Center [SHDC2020CR30538]
  5. Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases [19DZ2250100]
  6. Science and Technology Commission of Shanghai Municipality [20DZ1100200]
  7. Shanghai Public Health System Three-Year Plan-Key Subjects [GWV10.1-XK7]
  8. China Scholarship Council [201506230096]


The research team developed a novel two-stage framework assisted by a graph convolutional network that labels the nine retinal layers and the optic disc simultaneously, outperforming other state-of-the-art techniques in Dice score and pixel accuracy.
An accurate and automated tissue segmentation algorithm for retinal optical coherence tomography (OCT) images is crucial for the diagnosis of glaucoma. However, due to the presence of the optic disc, the anatomical structure of the peripapillary region of the retina is complicated and challenging to segment. To address this issue, we develop a novel graph convolutional network (GCN)-assisted two-stage framework to simultaneously label the nine retinal layers and the optic disc. Specifically, a multi-scale global reasoning module is inserted between the encoder and decoder of a U-shape neural network to exploit anatomical prior knowledge and perform spatial reasoning. We conduct experiments on human peripapillary retinal OCT images. We also provide public access to the collected dataset, which might contribute to research in the field of biomedical image processing. The Dice score of the proposed segmentation network is 0.820 ± 0.001 and the pixel accuracy is 0.830 ± 0.002, both of which outperform those from other state-of-the-art techniques.
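To illustrate the general idea of placing a graph-reasoning block between the encoder and decoder of a U-shape network, the following PyTorch sketch shows a minimal, single-scale simplification. All module names, channel sizes, the node/state dimensions, and the class count (nine layers plus disc plus background) are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a GloRe-style graph-reasoning block at the bottleneck
# of a toy U-shape segmentation network. Shapes and hyperparameters are assumed.
import torch
import torch.nn as nn


class GlobalReasoning(nn.Module):
    """Project features to a small graph, reason over it, project back."""

    def __init__(self, channels, nodes=16, state=32):
        super().__init__()
        self.reduce = nn.Conv2d(channels, state, 1)    # pixel -> state space
        self.project = nn.Conv2d(channels, nodes, 1)   # soft assignment to graph nodes
        self.gcn_node = nn.Conv1d(nodes, nodes, 1)     # mix information across nodes
        self.gcn_state = nn.Conv1d(state, state, 1)    # mix information across states
        self.expand = nn.Conv2d(state, channels, 1)    # back to pixel space

    def forward(self, x):
        b, c, h, w = x.shape
        states = self.reduce(x).flatten(2)                 # (b, state, h*w)
        assign = self.project(x).flatten(2)                # (b, nodes, h*w)
        graph = torch.bmm(assign, states.transpose(1, 2))  # (b, nodes, state)
        graph = self.gcn_node(graph)                       # reasoning over nodes
        graph = self.gcn_state(graph.transpose(1, 2))      # reasoning over states
        pixels = torch.bmm(graph, assign)                  # (b, state, h*w)
        return x + self.expand(pixels.view(b, -1, h, w))   # residual connection


class TinyUNetWithGCN(nn.Module):
    """Toy U-shape network with the reasoning block at the bottleneck."""

    def __init__(self, in_ch=1, n_classes=11):  # 9 layers + disc + background (assumed)
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.bottleneck = GlobalReasoning(64)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, x):
        return self.dec(self.bottleneck(self.enc(x)))


if __name__ == "__main__":
    logits = TinyUNetWithGCN()(torch.randn(1, 1, 128, 128))
    print(logits.shape)  # torch.Size([1, 11, 128, 128])
```

The paper describes a multi-scale version of this reasoning module and a two-stage pipeline; the sketch above only conveys how a graph-reasoning block can sit between a U-shape encoder and decoder.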

