J Digit Imaging. 2017 Feb;30(1):95-101. doi: 10.1007/s10278-016-9914-9.

High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

Author information

1
Department of Medicine, Division of Hospital Medicine, University of California, San Francisco, 533 Parnassus Ave., Suite 127a, San Francisco, CA, 94143-0131, USA. Alvin.rajkomar@ucsf.edu.
2
Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA. Alvin.rajkomar@ucsf.edu.
3
Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA.
4
Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA.

Abstract

The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs from 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73-100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective for highly accurate classification of chest radiograph view type, and is a feasible, rapid method for high-throughput annotation.
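The abstract's binarization step (using the Youden Index to set a cutoff for frontal vs. lateral classification) can be illustrated with a short sketch. This is not the authors' code; the scores and labels below are hypothetical stand-ins for the network's predicted probabilities and the manual annotations, and the function simply maximizes J = sensitivity + specificity - 1 over candidate thresholds.

```python
def youden_cutoff(scores, labels):
    """Return the threshold that maximizes the Youden Index.

    scores: predicted probability of the positive class (e.g. 'frontal')
    labels: 1 for positive, 0 for negative
    """
    pos = sum(labels)               # number of positive cases
    neg = len(labels) - pos         # number of negative cases
    best_j, best_t = -1.0, None
    # Evaluate each observed score as a candidate cutoff.
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1  # sensitivity + specificity - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Illustrative only: four images, two frontal (label 1), two lateral (label 0).
threshold, j = youden_cutoff([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
```

With these toy values the cutoff 0.8 separates the classes perfectly (J = 1.0); on real model outputs the index trades off sensitivity against specificity when no threshold is perfect.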

KEYWORDS:

Artificial neural networks; Chest radiographs; Computer vision; Convolutional neural network; Deep learning; Machine learning; Radiography

PMID:
27730417
PMCID:
PMC5267603
DOI:
10.1007/s10278-016-9914-9
[Indexed for MEDLINE]
Free PMC Article

Conflict of interest statement

Compliance with Ethical Standards
Competing Interests: Alvin Rajkomar reports having received fees as a research advisor from Google.
Funding: This research did not receive any specific grant from funding organizations.
