Article

RPS-Net: An effective retinal image projection segmentation network for retinal vessels and foveal avascular zone based on OCTA data

Journal

MEDICAL PHYSICS
Volume 49, Issue 6, Pages 3830-3844

Publisher

WILEY
DOI: 10.1002/mp.15608

Keywords

foveal avascular zone; optical coherence tomography angiography; projection learning; retinal vessels; segmentation

Funding

  1. National Key Research and Development Program of China [2019YFE0110800]
  2. National Natural Science Foundation of China [61972060, 62027827]
  3. Natural Science Foundation of Chongqing [cstc2020jcyj-zdxmX0025, cstc2019cxcyljrc-td0270, cstc2019jcyj-cxttX0002]

Abstract

This study proposes an effective retinal image projection segmentation network (RPS-Net) for accurate segmentation of retinal vessels and foveal avascular zone (FAZ). Experimental results on a large retinal dataset demonstrate that our network outperforms other existing methods.
Background
Optical coherence tomography angiography (OCTA) is an advanced imaging technology that can present the three-dimensional (3D) structure of retinal vessels (RVs). Quantitative analysis of retinal vessel density and foveal avascular zone (FAZ) area is of great significance in clinical diagnosis, and automatic semantic segmentation at the pixel level supports such quantitative analysis. Existing segmentation methods cannot effectively use the volume data and the projection-map data of an OCTA image at the same time, and they lack a trade-off between global perception and local detail, which leads to problems such as discontinuity in the segmentation results and deviation in morphological estimates.

Purpose
To better assist physicians in clinical diagnosis and treatment, the segmentation accuracy of RVs and the FAZ needs to be further improved. In this work, we propose an effective retinal image projection segmentation network (RPS-Net) to achieve accurate RVs and FAZ segmentation. Experiments show that the network performs well and outperforms other existing methods.

Methods
Our method considers three aspects. First, we use two parallel projection paths to learn global perceptual features and local supplementary details. Second, we use a dual-way projection learning module to reduce the depth of the 3D data and learn image spatial features. Finally, we merge the two-dimensional features learned from the volume data with the two-dimensional projection data and use a U-shaped network to further learn and generate the final result.

Results
We validated our model on OCTA-500, a large multi-modal, multi-task retinal dataset. The experimental results show that our method achieves state-of-the-art performance: the mean Dice coefficients for RVs are 89.89 ± 2.60% and 91.40 ± 9.18% on the two subsets, while the Dice coefficients for the FAZ are 91.55 ± 2.05% and 97.80 ± 2.75%, respectively.
Conclusions
Our method can make full use of the information in the 3D and 2D data to generate segmented images with higher continuity and accuracy. Code is available at .
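The core idea in Methods — learning how to collapse the depth dimension of the 3D OCTA volume into a 2D feature map, instead of using a fixed maximum- or mean-intensity projection — can be illustrated with a minimal NumPy sketch. This is a hypothetical stand-in, not the paper's actual dual-way projection learning module: the function name, the softmax weighting scheme, and the per-volume weight vector are all assumptions for illustration only (in the real network such weights would be learned end to end).

```python
import numpy as np

def weighted_depth_projection(volume, weights):
    """Collapse a 3D volume of shape (D, H, W) to a 2D map of shape (H, W)
    by a softmax-weighted sum along the depth axis.

    Hypothetical sketch of learnable depth reduction: `weights` (length D)
    plays the role of learned per-depth importance scores; a fixed mean
    projection is the special case where all weights are equal.
    """
    w = np.exp(weights - weights.max())   # numerically stable softmax
    w = w / w.sum()                       # normalize over the depth axis
    # Contract the depth axis of `volume` against the weight vector.
    return np.tensordot(w, volume, axes=([0], [0]))  # -> (H, W)

# Toy example: with uniform (zero) weights, the learnable projection
# degenerates to a plain mean projection along depth.
vol = np.arange(24, dtype=float).reshape(4, 3, 2)
proj = weighted_depth_projection(vol, np.zeros(4))
```

With non-uniform weights the projection can emphasize the retinal layers where vessels are most visible, which is one plausible reading of why a learned projection outperforms fixed max/mean projections.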

