4.6 Article

A deep learning approach to predict visual field using optical coherence tomography

Journal

PLOS ONE
Volume 15, Issue 7, e0234902

Publisher

Public Library of Science
DOI: 10.1371/journal.pone.0234902

Funding

  1. Bio & Medical Technology Development Program of the National Research Foundation (NRF), funded by the South Korean government (MSIT) [NRF-2018M3A9E8066254]
  2. National Research Foundation of Korea [2018M3A9E8066254]; funding source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

We developed a deep learning architecture based on Inception V3 to predict the visual field from optical coherence tomography (OCT) imaging and evaluated its performance. Two OCT-derived thickness maps, the macular ganglion cell-inner plexiform layer (mGCIPL) and the peripapillary retinal nerve fibre layer (pRNFL), were acquired and combined into a single image. A convolutional neural network was constructed to predict the visual field from this combined OCT image. Performance was evaluated with the root mean square error (RMSE) between the actual and predicted visual fields. Globally (over the entire visual field area), the RMSE for all patients was 4.79 +/- 2.56 dB: 3.27 dB for the normal group and 5.27 dB for the glaucoma group. Across all subjects, the RMSE of the macular region (4.40 dB) was higher than that of the peripheral region (4.29 dB). In normal subjects, the macular RMSE (2.45 dB) was significantly lower than the peripheral RMSE (3.11 dB), whereas in glaucoma subjects the macular RMSE was higher (5.62 dB versus 5.03 dB). The deep learning method effectively predicted the 24-2 visual field from the combined OCT image. This method may help clinicians assess visual fields, particularly in patients who are unable to undergo visual field testing.
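
For readers curious how such a model might be wired up, the following is a minimal PyTorch sketch, not the authors' published code: an Inception V3 backbone whose 1000-class classifier is swapped for a regression head that emits one sensitivity value (in dB) per 24-2 test point, together with the RMSE metric described above. The point count (52, assuming the two blind-spot points of the 54-point 24-2 grid are excluded), the packing of the two thickness maps into a 3-channel image, and all names are assumptions made for illustration.

    import torch
    import torch.nn as nn
    from torchvision.models import inception_v3

    # Assumption: 24-2 grid has 54 points; 52 remain after excluding the blind spot.
    NUM_VF_POINTS = 52

    class OCTToVisualField(nn.Module):
        """Inception V3 backbone with its classifier replaced by a regression
        head that outputs one sensitivity value (dB) per 24-2 test point."""
        def __init__(self, num_points: int = NUM_VF_POINTS):
            super().__init__()
            self.backbone = inception_v3(weights=None, aux_logits=True)
            # Swap the 1000-way classifiers (main and auxiliary) for regression heads.
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_points)
            self.backbone.AuxLogits.fc = nn.Linear(
                self.backbone.AuxLogits.fc.in_features, num_points)

        def forward(self, x):
            # Returns (main, aux) predictions in train mode, main only in eval mode.
            return self.backbone(x)

    def rmse_db(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Root mean square error between predicted and measured sensitivities."""
        return torch.sqrt(torch.mean((pred - target) ** 2))

    # Hypothetical batch: mGCIPL and pRNFL thickness maps stacked into a
    # 3-channel 299x299 image (the third channel might be zeros or a repeat).
    oct_images = torch.randn(4, 3, 299, 299)   # Inception V3 expects 299x299 input
    measured_vf = torch.randn(4, NUM_VF_POINTS)  # stand-in for real 24-2 fields

    model = OCTToVisualField().eval()
    with torch.no_grad():
        predicted_vf = model(oct_images)
    print(f"global RMSE: {rmse_db(predicted_vf, measured_vf):.2f} dB")

Training such a regression head against measured fields with a mean-squared-error loss would make the optimisation objective line up directly with the RMSE the abstract reports globally and per region.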
