Med Image Anal. 2020 Feb;60:101621. doi: 10.1016/j.media.2019.101621. Epub 2019 Nov 23.

Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization.

Author information

1. Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA. Electronic address: xdzhangjun@gmail.com.
2. Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA. Electronic address: mingxia_liu@med.unc.edu.
3. Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA. Electronic address: li_wang@med.unc.edu.
4. Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100191, China.
5. Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA.
6. Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA. Electronic address: jxia@houstonmethodist.org.
7. Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea. Electronic address: dinggang_shen@med.unc.edu.

Abstract

Cone-beam computed tomography (CBCT) scans are commonly used in diagnosing and planning surgical or orthodontic treatment to correct craniomaxillofacial (CMF) deformities. Based on CBCT images, it is clinically essential to generate an accurate 3D model of CMF structures (e.g., midface and mandible) and digitize anatomical landmarks. This process involves two tasks, i.e., bone segmentation and anatomical landmark digitization. Because landmarks usually lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization are highly associated. Also, the spatial context information (e.g., displacements from voxels to landmarks) in CBCT images is intuitively important for accurately indicating the spatial association between voxels and landmarks. However, most existing studies simply treat bone segmentation and landmark digitization as two standalone tasks without considering their inherent relationship, and rarely take advantage of the spatial context information contained in CBCT images. To address these issues, we propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in a displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. We validate the proposed JSD method on 107 subjects, and the experimental results demonstrate that our method outperforms state-of-the-art approaches in both bone segmentation and landmark digitization.
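The displacement maps described above can be sketched concretely: for each landmark, one stores at every voxel the offset vector from that voxel to the landmark. The following is a minimal NumPy illustration of this construction only; the function name is hypothetical, and the paper's actual pipeline (e.g., any normalization or truncation of displacements, and the FCN that regresses these maps) is not reproduced here.

```python
import numpy as np

def displacement_maps(shape, landmarks):
    """Build per-voxel displacement maps for a set of landmarks.

    shape: (D, H, W) size of the CBCT volume.
    landmarks: (L, 3) array of landmark voxel coordinates (z, y, x).
    Returns an array of shape (L, 3, D, H, W): for landmark l, channel
    c, and voxel v, the value is landmarks[l, c] - v[c], i.e., the
    displacement from voxel v to landmark l along axis c.
    """
    # Voxel coordinate grid, shape (3, D, H, W)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"))
    landmarks = np.asarray(landmarks, dtype=float)
    # Broadcast landmark coordinates against the grid:
    # (L, 3, 1, 1, 1) - (1, 3, D, H, W) -> (L, 3, D, H, W)
    return landmarks[:, :, None, None, None] - grid[None]
```

At the landmark's own voxel the displacement is zero, and it grows with distance from the landmark, which is what lets such maps encode the spatial association between every voxel and each landmark.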

KEYWORDS:

Bone segmentation; Cone-beam computed tomography; Fully convolutional networks; Landmark digitization
