Med Phys. 2019 Feb 8. doi: 10.1002/mp.13438. [Epub ahead of print]

U-Net-based Deep-Learning Bladder Segmentation in CT Urography.

Author information

Department of Radiology, University of Michigan, Ann Arbor, MI, 48109, USA.
School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, 510275, P.R. China.
Guangdong Province Key Laboratory of Computational Science, Sun Yat-Sen University, Guangzhou, 510275, P.R. China.



PURPOSE:
To develop a U-Net-based deep-learning approach (U-DL) for bladder segmentation in computed tomography urography (CTU) as part of a computer-assisted bladder cancer detection and treatment response assessment pipeline.


METHODS:
A dataset of 173 cases, comprising 81 training/validation cases (42 with masses, 21 with wall thickening, 18 normal bladders) and 92 test cases (43 with masses, 36 with wall thickening, 13 normal bladders), was used with Institutional Review Board (IRB) approval. An experienced radiologist provided 3D hand outlines for all cases as the reference standard. We previously developed a bladder segmentation method that used a deep-learning convolutional neural network and level sets (DCNN-LS) within a user-input bounding box. However, cases with poor image quality or with advanced bladder cancer spreading into neighboring organs caused inaccurate segmentation. We have newly developed an automated U-DL method that estimates a likelihood map of the bladder in CTU; the U-DL requires neither a user-input bounding box nor level sets for post-processing. To identify the best model for this task, we compared (1) 2D and 3D U-DLs using 2D CT slices and 3D CT volumes, respectively, as input; (2) U-DLs using CT images of different resolutions as input; and (3) U-DLs with and without automated cropping of the bladder as an image preprocessing step. Segmentation accuracy relative to the reference standard was quantified by six measures: average volume intersection ratio (AVI), average percent volume error (AVE), average absolute volume error (AAVE), average minimum distance (AMD), average Hausdorff distance (AHD), and average Jaccard index (AJI). The results from our previous DCNN-LS method served as the baseline.
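The abstract does not give formulas for the overlap measures, but the volume-based ones can be sketched from their standard definitions. The snippet below is a minimal illustration, assuming the common conventions: volume intersection ratio as intersection over reference volume, percent volume error as signed volume difference over reference volume, and Jaccard index as intersection over union (the sign convention for the volume error is an assumption, since the abstract only reports signed values).

```python
import numpy as np

def volume_metrics(seg, ref):
    """Volume-overlap metrics between binary 3D masks (segmentation vs. reference).

    Formulas follow common definitions (assumed here, not stated in the abstract):
      - volume intersection ratio: |seg ∩ ref| / |ref| * 100
      - percent volume error:      (|seg| - |ref|) / |ref| * 100  (sign convention assumed)
      - Jaccard index:             |seg ∩ ref| / |seg ∪ ref| * 100
    """
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    ref_vol = ref.sum()
    vi = inter / ref_vol * 100.0
    ve = (seg.sum() - ref_vol) / ref_vol * 100.0
    ji = inter / union * 100.0
    return vi, ve, ji
```

The distance-based measures (AMD, AHD) would additionally require extracting the mask surfaces and computing point-to-surface distances, which is omitted here for brevity.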


RESULTS:
On the test set, the best 2D U-DL model achieved AVI, AVE, AAVE, AMD, AHD, and AJI values of 93.4±9.5%, -4.2±14.2%, 9.2±11.5%, 2.7±2.5 mm, 9.7±7.6 mm, and 85.0±11.3%, respectively, while the corresponding measures for the best 3D U-DL were 90.6±11.9%, -2.3±21.7%, 11.5±18.5%, 3.1±3.2 mm, 11.4±10.0 mm, and 82.6±14.2%. For comparison, the baseline DCNN-LS method obtained 81.9±12.1%, 10.2±16.2%, 14.0±13.0%, 3.6±2.0 mm, 12.8±6.1 mm, and 76.2±11.8% on the same test set. The improvements in all measures between the best U-DL and the DCNN-LS were statistically significant (p<0.001).


CONCLUSION:
Compared with the previous DCNN-LS method, which depended on a user-input bounding box, the U-DL provided more accurate bladder segmentation and a higher degree of automation.


KEYWORDS:
Bladder; CT Urography; Computer-Aided Detection; Deep-Learning; Segmentation

