Article

A Deep Learning Approach for Automated Detection of Geographic Atrophy from Color Fundus Photographs

Journal

OPHTHALMOLOGY
Volume 126, Issue 11, Pages 1533-1540

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ophtha.2019.06.005

Funding

  1. National Center for Biotechnology Information/National Library of Medicine/National Institutes of Health, National Eye Institute/National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland [HHS-N-260-2005-00007-C, NO1-EY-5-0007]
  2. Office of Dietary Supplements, National Center for Complementary and Alternative Medicine
  3. National Institute on Aging
  4. National Heart, Lung, and Blood Institute
  5. National Institute of Neurological Disorders and Stroke
  6. NATIONAL EYE INSTITUTE [ZIAEY000554, ZIAEY000489] Funding Source: NIH RePORTER
  7. NATIONAL LIBRARY OF MEDICINE [ZIALM091813] Funding Source: NIH RePORTER

Abstract

Purpose: To assess the utility of deep learning in the detection of geographic atrophy (GA) from color fundus photographs and to explore its potential utility in detecting central GA (CGA).

Design: A deep learning model was developed to detect the presence of GA in color fundus photographs, and 2 additional models were developed to detect CGA in different scenarios.

Participants: A total of 59 812 color fundus photographs from longitudinal follow-up of 4582 participants in the Age-Related Eye Disease Study (AREDS) dataset. Gold standard labels were provided by human expert reading center graders using a standardized protocol.

Methods: A deep learning model was trained to use color fundus photographs to predict GA presence in a population of eyes ranging from no AMD to advanced AMD. A second model was trained to predict CGA presence in the same population. A third model was trained to predict CGA presence in the subset of eyes with GA. Five-fold cross-validation was used for training and testing. For comparison with human clinician performance, model performance was compared with that of 88 retinal specialists.

Main Outcome Measures: Area under the curve (AUC), accuracy, sensitivity, specificity, and precision.

Results: The deep learning models (GA detection, CGA detection from all eyes, and centrality detection from GA eyes) had AUCs of 0.933-0.976, 0.939-0.976, and 0.827-0.888, respectively. The GA detection model had accuracy, sensitivity, specificity, and precision of 0.965 (95% confidence interval [CI], 0.959-0.971), 0.692 (0.560-0.825), 0.978 (0.970-0.985), and 0.584 (0.491-0.676), respectively, compared with 0.975 (0.971-0.980), 0.588 (0.468-0.707), 0.982 (0.978-0.985), and 0.368 (0.230-0.505) for the retinal specialists. The CGA detection model had values of 0.966 (0.957-0.975), 0.763 (0.641-0.885), 0.971 (0.960-0.982), and 0.394 (0.341-0.448). The centrality detection model had values of 0.762 (0.725-0.799), 0.782 (0.618-0.945), 0.729 (0.543-0.916), and 0.799 (0.710-0.888).

Conclusions: A deep learning model demonstrated high accuracy for the automated detection of GA. The AUC was noninferior to that of human retinal specialists. Deep learning approaches may also be applied to the identification of CGA. The code and pretrained models are publicly available at https://github.com/ncbi-nlp/DeepSeeNet. Published by Elsevier on behalf of the American Academy of Ophthalmology.
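
As a rough, illustrative sketch (not the authors' implementation, which is available in the DeepSeeNet repository linked above), the Python snippet below shows how the reported outcome measures (AUC, accuracy, sensitivity, specificity, and precision) can be computed for a binary GA/CGA classifier under 5-fold cross-validation. The participant-level grouping of folds, the synthetic labels and scores, and all names here are assumptions for illustration only.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import GroupKFold

def binary_metrics(y_true, y_prob, threshold=0.5):
    # Threshold the predicted probabilities and derive the confusion matrix.
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, y_prob),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on GA-present eyes
        "specificity": tn / (tn + fp),   # recall on GA-absent eyes
        "precision": tp / (tp + fp),
    }

# Synthetic stand-ins for grader labels, model scores, and participant IDs.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(0.7 * labels + rng.normal(0.2, 0.25, size=1000), 0.0, 1.0)
participants = rng.integers(0, 200, size=1000)

# 5-fold cross-validation; grouping by participant (so images from one person
# never span train and test folds) is an assumption for this sketch, not a
# detail stated in the abstract.
for fold, (_, test_idx) in enumerate(GroupKFold(n_splits=5).split(labels, labels, participants)):
    print(f"fold {fold}:", binary_metrics(labels[test_idx], scores[test_idx]))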
