Ophthalmology. 2019 Nov;126(11):1533-1540. doi: 10.1016/j.ophtha.2019.06.005. Epub 2019 Jun 11.

A Deep Learning Approach for Automated Detection of Geographic Atrophy from Color Fundus Photographs.

Author information

1. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland.
2. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland.
3. National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland.
4. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland; Unit on Microglia, National Eye Institute, National Institutes of Health, Bethesda, Maryland.
5. National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland. Electronic address: zhiyong.lu@nih.gov.
6. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland. Electronic address: echew@nei.nih.gov.

Abstract

PURPOSE:

To assess the utility of deep learning in the detection of geographic atrophy (GA) from color fundus photographs and to explore its potential utility in detecting central GA (CGA).

DESIGN:

A deep learning model was developed to detect the presence of GA in color fundus photographs, and 2 additional models were developed to detect CGA in different scenarios.

PARTICIPANTS:

A total of 59 812 color fundus photographs from longitudinal follow-up of 4582 participants in the Age-Related Eye Disease Study (AREDS) dataset. Gold standard labels were from human expert reading center graders using a standardized protocol.

METHODS:

A deep learning model was trained to use color fundus photographs to predict GA presence in a population of eyes ranging from no AMD to advanced AMD. A second model was trained to predict CGA presence in the same population. A third model was trained to predict CGA presence in the subset of eyes with GA. Five-fold cross-validation was used for training and testing. Model performance was compared with that of 88 retinal specialists.
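The 5-fold cross-validation scheme described above can be sketched in pure Python (an illustrative sketch only; the function name and fold logic are assumptions, not the authors' code, which is available in the DeepSeeNet repository):

```python
def five_fold_splits(n_items, n_folds=5):
    """Yield (train_indices, test_indices) pairs: each fold serves once
    as the held-out test set while the remaining folds form the training
    set, so every item is tested exactly once."""
    indices = list(range(n_items))
    fold_size = n_items // n_folds
    for k in range(n_folds):
        start = k * fold_size
        # The last fold absorbs any remainder when n_items % n_folds != 0.
        end = (k + 1) * fold_size if k < n_folds - 1 else n_items
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test
```

In practice such splits would be made at the participant level rather than the photograph level, so that images of the same eye never appear in both training and test sets.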

MAIN OUTCOME MEASURES:

Area under the curve (AUC), accuracy, sensitivity, specificity, and precision.
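All of these measures except the AUC follow directly from the four confusion-matrix counts. A minimal sketch (function name and example counts are illustrative, not from the study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity, specificity, and precision from
    confusion-matrix counts for a binary classifier (e.g. GA present
    vs. absent)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives detected
    precision = tp / (tp + fp)     # positive predictive value
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision}
```

Because GA is rare in the study population, accuracy and specificity can be high even when precision is modest, as the reported results below illustrate.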

RESULTS:

The deep learning models (GA detection, CGA detection from all eyes, and centrality detection from GA eyes) had AUCs of 0.933-0.976, 0.939-0.976, and 0.827-0.888, respectively. The GA detection model had accuracy, sensitivity, specificity, and precision of 0.965 (95% confidence interval [CI], 0.959-0.971), 0.692 (0.560-0.825), 0.978 (0.970-0.985), and 0.584 (0.491-0.676), respectively, compared with 0.975 (0.971-0.980), 0.588 (0.468-0.707), 0.982 (0.978-0.985), and 0.368 (0.230-0.505) for the retinal specialists. The CGA detection model had values of 0.966 (0.957-0.975), 0.763 (0.641-0.885), 0.971 (0.960-0.982), and 0.394 (0.341-0.448). The centrality detection model had values of 0.762 (0.725-0.799), 0.782 (0.618-0.945), 0.729 (0.543-0.916), and 0.799 (0.710-0.888).

CONCLUSIONS:

A deep learning model demonstrated high accuracy for the automated detection of GA. The AUC was noninferior to that of human retinal specialists. Deep learning approaches may also be applied to the identification of CGA. The code and pretrained models are publicly available at https://github.com/ncbi-nlp/DeepSeeNet.
