Am J Obstet Gynecol. 2016 Nov;215(5):644.e1-644.e7. doi: 10.1016/j.ajog.2016.06.033. Epub 2016 Jun 27.

Crowdsourcing: a valid alternative to expert evaluation of robotic surgery skills.

Author information

1. Division of Urogynecology, Department of Obstetrics and Gynecology, Duke University, Durham, NC. Electronic address: michael.polin@duke.edu.
2. Division of Urogynecology, Department of Obstetrics and Gynecology, Duke University, Durham, NC.
3. Department of Biostatistics, University of Washington, Seattle, WA.
4. Department of Obstetrics and Gynecology, Lehigh Valley Health Network, Allentown, PA.
5. Division of Pediatric Urology, Department of Urology, University of Washington, Seattle, WA.
6. Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Lehigh Valley Health Network, Allentown, PA.

Abstract

BACKGROUND:

Robotic-assisted gynecologic surgery is common but requires unique training. The Robotic-Objective Structured Assessments of Technical Skills (R-OSATS) is a validated tool for evaluating trainees' robotic surgery skills.

OBJECTIVE:

We sought to assess whether crowdsourcing can be used as an alternative to expert surgical evaluators in scoring R-OSATS.

STUDY DESIGN:

The Robotic Training Network produced R-OSATS, which evaluates trainees across 5 dry lab robotic surgical drills. R-OSATS was previously validated in a study of 105 participants in which dry lab surgical drills were recorded, de-identified, and scored by 3 expert surgeons using the R-OSATS checklist. Our methods-comparison study uses these previously obtained recordings and expert surgeon scores. Mean scores per participant for each drill were divided into quartiles. Crowdworkers were trained and calibrated on R-OSATS scoring using representative recordings of a skilled and a novice surgeon. Then, 3 recordings from each scoring quartile of each drill were randomly selected, and crowdworkers evaluated these recordings using R-OSATS. Linear mixed effects models were used to derive mean crowdsourced ratings for each drill, and Pearson correlation coefficients were calculated to assess the correlation between crowdsourced and expert surgeon ratings, as sketched below.
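For illustration, here is a minimal sketch of this analysis in Python, assuming a long-format table of crowdworker ratings (columns drill, video_id, worker_id, score) and a table of expert mean scores per video. The DataFrames crowd_df and expert_df, their column names, and the function name are hypothetical placeholders, not artifacts from the study.

```python
# Sketch: random-intercept mixed model per drill, then Pearson correlation
# between model-derived crowd means and expert means. Assumes hypothetical
# column names: drill, video_id, score (crowd) and drill, video_id,
# expert_mean (experts).
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

def crowd_vs_expert_correlation(crowd_df: pd.DataFrame,
                                expert_df: pd.DataFrame) -> dict:
    """Return {drill: (r, p)} comparing crowd and expert ratings."""
    results = {}
    for drill, sub in crowd_df.groupby("drill"):
        # Random intercept per video pools the repeated crowdworker
        # ratings of the same recording.
        model = smf.mixedlm("score ~ 1", sub, groups=sub["video_id"]).fit()
        # Mean crowdsourced rating per video = fixed intercept plus that
        # video's estimated random effect.
        crowd_means = {
            vid: model.fe_params["Intercept"] + re["Group"]
            for vid, re in model.random_effects.items()
        }
        merged = expert_df[expert_df["drill"] == drill].copy()
        merged["crowd_mean"] = merged["video_id"].map(crowd_means)
        r, p = pearsonr(merged["crowd_mean"], merged["expert_mean"])
        results[drill] = (r, p)
    return results
```

Using a mixed model rather than raw per-video averages accounts for the unbalanced number of crowdworker ratings per recording; the per-video estimates are then correlated against the expert surgeon means within each drill.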

RESULTS:

In all, 448 crowdworkers reviewed videos from 60 dry lab drills and completed a total of 2517 R-OSATS assessments within 16 hours. Crowdsourced R-OSATS ratings were highly correlated with expert surgeon ratings across each of the 5 dry lab drills (r = 0.75-0.91).

CONCLUSION:

Crowdsourced assessments of recorded dry lab surgical drills using a validated assessment tool are a rapid and suitable alternative to expert surgeon evaluation.

KEYWORDS:

crowdsourcing; robotic surgery; simulation; surgical training

PMID: 27365004
DOI: 10.1016/j.ajog.2016.06.033
[Indexed for MEDLINE]
