Machine learning-based augmented reality for improved surgical scene understanding

Comput Med Imaging Graph. 2015 Apr;41:55-60. doi: 10.1016/j.compmedimag.2014.06.007. Epub 2014 Jun 19.

Abstract

In orthopedic and trauma surgery, augmented reality (AR) technology can support surgeons in the challenging task of understanding the spatial relationships between the anatomy, the implants, and the surgical tools. In this context, we propose a novel augmented visualization of the surgical scene that intelligently mixes the different sources of information provided by a mobile C-arm combined with a Kinect RGB-Depth sensor. To this end, we introduce a learning-based paradigm that aims at (1) identifying the relevant objects or anatomy in both Kinect and X-ray data, and (2) creating an object-specific, pixel-wise alpha map that permits relevance-based fusion of the video and the X-ray images within a single view. In 12 simulated surgeries, we show very promising results, suggesting that the approach provides surgeons with a better understanding of the surgical scene as well as improved depth perception.
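To illustrate the fusion step described in the abstract, the following is a minimal sketch of relevance-based alpha blending between a co-registered X-ray image and an RGB video frame. It assumes a per-pixel alpha (relevance) map in [0, 1] is already available; in the paper this map is produced by the learned, object-specific classifier, whereas here a hypothetical mask stands in for it. All names and shapes below are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_views(video_rgb: np.ndarray, xray: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend an RGB video frame with a co-registered X-ray image using a
    per-pixel alpha map in [0, 1]; alpha = 1 keeps the video pixel,
    alpha = 0 shows the X-ray pixel."""
    xray_rgb = np.repeat(xray[..., None], 3, axis=2)   # grayscale X-ray -> 3 channels
    a = alpha[..., None]                               # broadcast alpha over color channels
    fused = a * video_rgb.astype(np.float32) + (1.0 - a) * xray_rgb.astype(np.float32)
    return np.clip(fused, 0.0, 255.0).astype(np.uint8)

# Toy usage with a hypothetical relevance mask (stand-in for the learned alpha map):
h, w = 480, 640
video = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)   # Kinect RGB frame
xray = np.random.randint(0, 256, (h, w), dtype=np.uint8)       # C-arm X-ray image
relevance = np.zeros((h, w), dtype=np.float32)
relevance[100:300, 200:400] = 1.0   # pretend a classifier marked a relevant object here
overlay = fuse_views(video, xray, alpha=relevance)
```

Because the alpha map is defined per pixel and per object class, occluding but irrelevant structures (e.g., the surgeon's hands) can be kept from hiding the X-ray content, which is the intuition behind the improved depth perception reported above.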

MeSH terms

  • Algorithms
  • Equipment Design
  • Equipment Failure Analysis
  • Humans
  • Image Enhancement / instrumentation
  • Image Enhancement / methods
  • Image Interpretation, Computer-Assisted / instrumentation
  • Image Interpretation, Computer-Assisted / methods*
  • Imaging, Three-Dimensional / instrumentation
  • Imaging, Three-Dimensional / methods
  • Machine Learning*
  • Multimodal Imaging / instrumentation
  • Multimodal Imaging / methods
  • Pattern Recognition, Automated / methods*
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Surgery, Computer-Assisted / instrumentation
  • Surgery, Computer-Assisted / methods*
  • Tomography, X-Ray Computed / instrumentation
  • Tomography, X-Ray Computed / methods*
  • User-Computer Interface*
  • Video Recording / instrumentation
  • Video Recording / methods