Med Phys. 2019 May;46(5):2169-2180. doi: 10.1002/mp.13466. Epub 2019 Mar 21.

Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images.

Author information

Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, 22903, USA.
Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, 22903, USA.
Department of Radiation Medicine, University of Kentucky, Lexington, KY, 40536, USA.



PURPOSE: Automatic segmentation of organs-at-risk (OARs) is a key step in radiation treatment planning that reduces human effort and bias. Deep convolutional neural networks (DCNNs) have shown great success in many medical image segmentation applications, but challenges remain in handling large 3D images for optimal results. The purpose of this study is to develop a novel DCNN method for thoracic OAR segmentation using cropped 3D images.


METHODS: To segment the five organs (left and right lungs, heart, esophagus, and spinal cord) from thoracic CT scans, preprocessing was first performed to unify voxel spacing and intensity. A 3D U-Net was then trained on the resampled images to localize each organ. The original images were next cropped so that each contained only one organ and served as input to that organ's individual segmentation network, and the resulting segmentation maps were merged to produce the final result. The network structures, as well as the training and testing strategies, were optimized for each step. A novel testing augmentation with multiple iterations of image cropping was used. The networks were trained on 36 thoracic CT scans with expert annotations provided by the organizers of the 2017 AAPM Thoracic Auto-segmentation Challenge and tested on the challenge testing dataset as well as a private dataset.
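The localize-then-crop step described above can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions, not the authors' code: `crop_to_organ` and its `margin` parameter are hypothetical names, and the coarse mask would come from the low-resolution 3D U-Net in the actual pipeline.

```python
import numpy as np

def crop_to_organ(volume, coarse_mask, margin=8):
    """Crop a CT volume to the bounding box of a coarse organ mask,
    expanded by a voxel margin on each side (hypothetical helper
    illustrating the localize-then-crop step, not the paper's code)."""
    coords = np.argwhere(coarse_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    # the cropped sub-volume is fed to the per-organ segmentation network;
    # the slices let the predicted map be pasted back into the full volume
    return volume[slices], slices

# toy example: a 32^3 volume with a small "organ" blob in the coarse mask
vol = np.zeros((32, 32, 32), dtype=np.float32)
mask = np.zeros_like(vol)
mask[10:15, 12:18, 8:20] = 1
crop, slices = crop_to_organ(vol, mask, margin=2)
print(crop.shape)  # (9, 10, 16)
```

Returning the slice tuple alongside the crop makes merging straightforward: each per-organ prediction is written back into a full-size label volume at its recorded location.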


RESULTS: The proposed method earned second place in the live phase of the challenge and first place in the subsequent ongoing phase using the newly developed testing augmentation approach. On average, it showed better-than-human performance in terms of Dice scores (spinal cord: 0.893 ± 0.044, right lung: 0.972 ± 0.021, left lung: 0.979 ± 0.008, heart: 0.925 ± 0.015, esophagus: 0.726 ± 0.094), mean surface distance (spinal cord: 0.662 ± 0.248 mm, right lung: 0.933 ± 0.574 mm, left lung: 0.586 ± 0.285 mm, heart: 2.297 ± 0.492 mm, esophagus: 2.341 ± 2.380 mm), and 95% Hausdorff distance (spinal cord: 1.893 ± 0.627 mm, right lung: 3.958 ± 2.845 mm, left lung: 2.103 ± 0.938 mm, heart: 6.570 ± 1.501 mm, esophagus: 8.714 ± 10.588 mm). It also achieved good performance on the private dataset and reduced the editing time to 7.5 min per patient following automatic segmentation.
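For reference, the Dice scores reported above follow the standard Dice similarity coefficient between a predicted and a ground-truth binary mask, 2|A∩B| / (|A| + |B|). A minimal NumPy version (the standard metric definition, not code from the paper) looks like this:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect agreement)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# toy masks: two 32-voxel slabs overlapping in 16 voxels
a = np.zeros((4, 4, 4), dtype=np.uint8); a[:2] = 1
b = np.zeros((4, 4, 4), dtype=np.uint8); b[1:3] = 1
print(dice(a, b))  # 2*16 / (32 + 32) = 0.5
```

Mean surface distance and 95% Hausdorff distance are surface-based metrics computed on the mask boundaries and require distance transforms, so they are omitted here for brevity.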


CONCLUSIONS: The proposed DCNN method demonstrated good performance in automatic OAR segmentation from thoracic CT scans. With improved accuracy and reduced cost for OAR segmentation, it could facilitate the eventual clinical adoption of deep learning in radiation treatment planning.


KEYWORDS: 3D U-Net; automatic segmentation; convolutional neural network; deep learning; lung cancer

