Neuroimage. 2018 Jul 1;174:550-562. doi: 10.1016/j.neuroimage.2018.03.045. Epub 2018 Mar 20.

3D conditional generative adversarial networks for high-quality PET image estimation at low dose.

Author information

1. School of Computer Science, Sichuan University, China.
2. School of Computing and Information Technology, University of Wollongong, Australia.
3. School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China.
4. Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, NC, USA.
5. Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA.
6. School of Computer Science, Chengdu University of Information Technology, China.
7. School of Computer Science, Sichuan University, China; School of Computer Science, Chengdu University of Information Technology, China.
8. Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, South Korea. Electronic address: dinggang_shen@med.unc.edu.
9. School of Electrical and Information Engineering, University of Sydney, Australia; School of Computing and Information Technology, University of Wollongong, Australia. Electronic address: luping.zhou.jane@googlemail.com.

Abstract

Positron emission tomography (PET) is a widely used imaging modality that provides insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical use, which inevitably raises concerns about potential health hazards. On the other hand, reducing the dose increases noise in the reconstructed PET images, degrading image quality. In this paper, to reduce radiation exposure while maintaining high PET image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) comprise a generator network and a discriminator network that are trained simultaneously, each aiming to beat the other. As in standard GANs, in the proposed 3D c-GANs we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to capture the underlying information shared between low-dose and full-dose PET images, a 3D U-net-like deep architecture that combines hierarchical features via skip connections is designed as the generator network to synthesize the full-dose image. To keep the synthesized PET image close to the real one, we take into account an estimation error loss in addition to the discriminator feedback when training the generator network. Furthermore, a progressive refinement scheme based on concatenated 3D c-GANs is proposed to further improve the quality of the estimated images. Validation was performed on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that the proposed 3D c-GANs method outperforms benchmark and state-of-the-art methods in both qualitative and quantitative measures.
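The abstract's generator objective (discriminator feedback combined with an estimation error term) can be sketched in a minimal form. The snippet below is an illustrative assumption, not the paper's implementation: it assumes a non-saturating adversarial term and an L1 voxel-wise estimation error with a pix2pix-style weight `lam=100.0`; the paper's exact loss form, norm, and weighting may differ.

```python
import numpy as np

def generator_loss(d_scores_fake, gen_voxels, real_voxels, lam=100.0):
    """Combined generator loss for a conditional GAN image-estimation setup.

    d_scores_fake : discriminator probabilities for the synthesized images
    gen_voxels    : generator output (e.g. a 3D PET patch)
    real_voxels   : corresponding full-dose ground-truth patch
    lam           : weight on the estimation-error term (assumed value)
    """
    eps = 1e-12
    # Adversarial term: push the discriminator to score fakes as real.
    adv = -np.mean(np.log(d_scores_fake + eps))
    # Estimation-error term: voxel-wise L1 distance to the full-dose image.
    err = np.mean(np.abs(gen_voxels - real_voxels))
    return adv + lam * err

# Toy 8x8x8 voxel patches standing in for 3D PET data.
rng = np.random.default_rng(0)
real = rng.random((8, 8, 8))
fake = real + 0.01 * rng.standard_normal((8, 8, 8))
loss = generator_loss(np.array([0.7]), fake, real)
```

A perfectly fooling generator with an exact reconstruction drives both terms toward zero, which is the intuition behind adding the error term: the adversarial loss alone does not force voxel-level fidelity to the full-dose target.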

KEYWORDS:

3D conditional GANs (3D c-GANs); Generative adversarial networks (GANs); Image estimation; Low-dose PET; Positron emission tomography (PET)

[Indexed for MEDLINE]
