Int J Comput Assist Radiol Surg. 2019 Mar;14(3):417-425. doi: 10.1007/s11548-018-1875-7. Epub 2018 Oct 31.

Learning deep similarity metric for 3D MR-TRUS image registration.

Author information

1. Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA.
2. Philips Research North America, Cambridge, MA, 02141, USA.
3. Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA.
4. Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA. yanp2@rpi.edu.

Abstract

PURPOSE:

The fusion of transrectal ultrasound (TRUS) and magnetic resonance (MR) images for guiding targeted prostate biopsy has significantly improved the biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image registration. However, it is very challenging to obtain a robust automatic MR-TRUS registration due to the large appearance difference between the two imaging modalities. The work presented in this paper aims to tackle this problem by addressing two challenges: (i) the definition of a suitable similarity metric and (ii) the determination of a suitable optimization strategy.

METHODS:

This work proposes the use of a deep convolutional neural network to learn a similarity metric for MR-TRUS registration. We also use a composite optimization strategy that explores the solution space to find a suitable initialization for the second-order optimization of the learned metric. Further, a multi-pass approach is used to smooth the metric for optimization.
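The two-stage strategy described above can be sketched on a toy problem. This is a minimal, hypothetical illustration only: the paper's metric is a CNN scoring MR/TRUS volume pairs, whereas here a simple negative sum-of-squared-differences between two 1D signals stands in for the learned metric, and the transform is a single translation parameter. Stage 1 samples the solution space to find a good initialization; stage 2 refines it with a second-order (Newton) update using finite differences.

```python
import numpy as np

def make_signal(center, n=200):
    """A Gaussian blob on a 1D grid; a stand-in for an image."""
    x = np.arange(n)
    return np.exp(-0.5 * ((x - center) / 8.0) ** 2)

fixed = make_signal(120)

def similarity(t):
    # Stand-in for the learned deep similarity metric (higher is better).
    # Shifting the moving signal by t = 40 aligns it with the fixed one.
    moving = make_signal(80 + t)
    return -np.sum((fixed - moving) ** 2)

# Stage 1: explore the solution space for a suitable initialization.
candidates = np.linspace(-60, 60, 25)
t0 = max(candidates, key=similarity)

# Stage 2: local second-order (Newton) refinement, with the gradient and
# Hessian approximated by central finite differences.
t, h = float(t0), 1e-3
for _ in range(20):
    g = (similarity(t + h) - similarity(t - h)) / (2 * h)
    H = (similarity(t + h) - 2 * similarity(t) + similarity(t - h)) / h**2
    if abs(H) < 1e-12:
        break
    t -= g / H

print(round(t, 2))  # recovers the 40-unit offset
```

The coarse sampling stage matters because a learned metric, like SSD here, is typically only well-behaved near the optimum; starting the second-order refinement from a distant initialization can land in a flat region or a local maximum, which is why the paper pairs exploration with local optimization.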

RESULTS:

The learned similarity metric outperforms both classical mutual information and the state-of-the-art MIND feature-based methods. The results indicate that the overall registration framework has a large capture range. The proposed deep similarity metric-based approach obtained a mean target registration error (TRE) of 3.86 mm (from an initial TRE of 16 mm) for this challenging problem.

CONCLUSION:

A similarity metric that is learned using a deep neural network can be used to assess the quality of any given image registration and can be used in conjunction with the aforementioned optimization framework to perform automatic registration that is robust to poor initialization.

KEYWORDS:

Convolutional neural networks; Image registration; Image-guided interventions; Multimodal image fusion; Prostate cancer

PMID: 30382457
DOI: 10.1007/s11548-018-1875-7
[Indexed for MEDLINE]
