Ophthalmology. 2018 Aug;125(8):1264-1272. doi: 10.1016/j.ophtha.2018.01.034. Epub 2018 Mar 13.

Grader Variability and the Importance of Reference Standards for Evaluating Machine Learning Models for Diabetic Retinopathy.

Author information

1. Google Research, Google Inc., Mountain View, California.
2. Department of Ophthalmology, Palo Alto Medical Foundation, Palo Alto, California.
3. Oregon Eye Consultants, Eugene, Oregon.
4. Google Research, Google Inc., Mountain View, California. Electronic address: lhpeng@google.com.

Abstract

PURPOSE:

To use adjudication to quantify errors in diabetic retinopathy (DR) grading made by individual graders and by majority decision, and to train an improved automated algorithm for DR grading.

DESIGN:

Retrospective analysis.

PARTICIPANTS:

Retinal fundus images from DR screening programs.

METHODS:

Images were each graded by the algorithm, U.S. board-certified ophthalmologists, and retinal specialists. The adjudicated consensus of the retinal specialists served as the reference standard.

MAIN OUTCOME MEASURES:

For agreement between different graders, as well as between the graders and the algorithm, we measured the quadratic-weighted kappa score. To compare the performance of different forms of manual grading and the algorithm at various DR severity cutoffs (e.g., mild or worse DR, moderate or worse DR), we measured the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.
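The quadratic-weighted kappa used here is a standard agreement statistic for ordinal grades: it compares observed disagreement against the disagreement expected by chance, penalizing each disagreement by the squared distance between the two grades. A minimal sketch of that computation (not the study's actual code) in Python:

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Quadratic-weighted Cohen's kappa between two sets of ordinal grades.

    Grades are integer labels in [0, n_classes). A one-step disagreement
    (e.g., mild vs. moderate DR) costs far less than a four-step one.
    """
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)

    # Observed joint distribution of grade pairs (normalized confusion matrix).
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()

    # Expected joint distribution if the two raters graded independently.
    hist_a = np.bincount(rater_a, minlength=n_classes) / len(rater_a)
    hist_b = np.bincount(rater_b, minlength=n_classes) / len(rater_b)
    expected = np.outer(hist_a, hist_b)

    # Quadratic disagreement weights: w[i, j] = (i - j)^2 / (n - 1)^2.
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2

    # Kappa = 1 - (weighted observed disagreement / weighted chance disagreement).
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Perfect agreement yields kappa = 1, chance-level agreement yields kappa = 0; the same quantity is available as `sklearn.metrics.cohen_kappa_score(a, b, weights="quadratic")`.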

RESULTS:

Of the 193 discrepancies between adjudication by retinal specialists and the majority decision of ophthalmologists, the most common were missed microaneurysms (MAs) (36%), artifacts (20%), and misclassified hemorrhages (16%). Relative to the reference standard, the kappa for individual retinal specialists ranged from 0.82 to 0.91, the kappa for individual ophthalmologists ranged from 0.80 to 0.84, and the kappa for the algorithm was 0.84. For moderate or worse DR, the majority decision of ophthalmologists had a sensitivity of 0.838 and specificity of 0.981; the algorithm had a sensitivity of 0.971, specificity of 0.923, and AUC of 0.986. For mild or worse DR, the algorithm had a sensitivity of 0.970, specificity of 0.917, and AUC of 0.986. By using a small number of adjudicated consensus grades as a tuning dataset and higher-resolution images as input, the algorithm improved in AUC from 0.934 to 0.986 for moderate or worse DR.

CONCLUSIONS:

Adjudication reduces errors in DR grading. A small set of adjudicated DR grades allows substantial improvements in algorithm performance. The resulting algorithm's performance was on par with that of individual U.S. board-certified ophthalmologists and retinal specialists.
