Int J Comput Assist Radiol Surg. 2019 Feb;14(2):227-235. doi: 10.1007/s11548-018-1886-4. Epub 2018 Nov 27.

Deep-learned placental vessel segmentation for intraoperative video enhancement in fetoscopic surgery.

Author information

Yale University School of Medicine, New Haven, USA.
Department of Obstetrics and Gynecology, Yale University School of Medicine, New Haven, USA.
Yale Fetal Care Center, New Haven, USA.
Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, USA.
Department of Biomedical Engineering, Yale University School of Medicine, New Haven, USA.



Purpose

Twin-to-twin transfusion syndrome (TTTS) is a potentially lethal condition affecting pregnancies in which twins share a single placenta. The definitive treatment for TTTS is fetoscopic laser photocoagulation, a procedure in which placental blood vessels are selectively cauterized. A key challenge in this procedure is quickly identifying placental blood vessels amid the many artifacts in the endoscopic video that the surgeon uses for navigation. We propose using deep-learned segmentations of blood vessels to create masks that can be recombined with the original fetoscopic video frame so that the location of placental blood vessels is discernible at a glance.
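The proposed enhancement, recombining a binary vessel mask with the original video frame, can be sketched as a simple alpha blend. This is an illustrative sketch only: the function name `enhance_frame` and the color and alpha defaults are assumptions, not details from the paper.

```python
import numpy as np

def enhance_frame(frame, mask, color=(0, 255, 0), alpha=0.5):
    """Alpha-blend a solid highlight color into the frame wherever the
    vessel mask is set, leaving all other pixels untouched."""
    out = frame.astype(np.float32).copy()
    m = mask.astype(bool)
    # Blend only the masked pixels toward the highlight color.
    out[m] = (1.0 - alpha) * out[m] + alpha * np.asarray(color, dtype=np.float32)
    return out.astype(np.uint8)
```

With `alpha=0.5` the vessel pixels remain half-visible beneath the highlight, so surgical landmarks under the overlay are not lost.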


Methods

In a process approved by an institutional review board, intraoperative videos were acquired from ten fetoscopic laser photocoagulation surgeries performed at Yale New Haven Hospital. A total of 345 video frames were selected from these videos at regularly spaced time intervals. Each frame was segmented once by an expert human rater (a clinician) and once by a novice but trained human rater (an undergraduate student). The segmentations were used to train a 25-layer fully convolutional neural network.
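The abstract specifies only that the network is fully convolutional with 25 layers. As a toy illustration of the fully convolutional idea (every layer is a convolution, so the output is a per-pixel vessel probability map the same size as the input), here is a minimal two-layer sketch in NumPy; `conv2d`, `tiny_fcn`, and the kernels are hypothetical and not the paper's architecture.

```python
import numpy as np

def conv2d(x, k):
    """2D cross-correlation of a single-channel image with kernel k,
    zero-padded so the output matches the input size (this is how a
    fully convolutional net preserves spatial layout)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=np.float64)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def tiny_fcn(img, k1, k2):
    """Two-layer illustrative 'network': conv -> ReLU -> conv -> sigmoid,
    producing a same-size per-pixel vessel probability map."""
    h = np.maximum(conv2d(img, k1), 0.0)   # hidden feature map
    logits = conv2d(h, k2)
    return 1.0 / (1.0 + np.exp(-logits))   # probabilities in (0, 1)
```

Because no fully connected layers fix the spatial dimensions, the same network can be applied to frames of any resolution.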


Results

The neural network produced segmentations with high similarity to the ground-truth segmentations of the expert human rater (sensitivity = 92.15% ± 10.69%), and its segmentations were significantly more accurate than those of the novice human rater (sensitivity = 56.87% ± 21.64%; p < 0.01).
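Sensitivity here is the per-pixel true-positive rate against the expert segmentation: the fraction of ground-truth vessel pixels that the candidate segmentation recovers. A minimal sketch (the function name and example arrays are illustrative, not the paper's evaluation code):

```python
import numpy as np

def sensitivity(pred, truth):
    """True positives / (true positives + false negatives), computed
    per pixel against the ground-truth vessel mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn)

truth = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
pred  = np.array([[0, 1, 0], [0, 1, 0], [1, 0, 0]])
sensitivity(pred, truth)  # 2 of 3 truth pixels recovered -> ~0.667
```

Note that sensitivity ignores false positives (the stray pixel at the bottom left above), which is why it is typically reported alongside specificity or precision.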


Conclusion

A convolutional neural network can be trained to segment placental blood vessels with near-human accuracy and can exceed the accuracy of novice human raters. Recombining these segmentations with the original fetoscopic video frames can produce enhanced frames in which blood vessels are easily detectable. This has significant implications for aiding fetoscopic surgeons, especially trainees who are not yet at an expert level.


Keywords

Convolutional neural network; Deep learning; Fetoscopy; Segmentation; Twin-to-twin transfusion syndrome; Vessels

[Available on 2019-08-01]
[Indexed for MEDLINE]
