Syntactic and Semantic Errors in Radiology Reports Associated With Speech Recognition Software

Stud Health Technol Inform. 2015;216:922.

Abstract

Speech recognition software (SRS) has many benefits, but also increases the frequency of errors in radiology reports, which could impact patient care. As part of a quality control project, 13 trained medical transcriptionists proofread 213,977 SRS-generated signed reports from 147 different radiologists over a 40-month interval. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialties, and time periods using χ² analysis and multiple logistic regression, as appropriate. In total, 20,759 (9.7%) reports contained errors; 3,992 (1.9%) contained material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (P<.001). Error proportion varied significantly among radiologists and between imaging subspecialties (P<.001). Errors were more common in cross-sectional reports (vs. plain radiography) (OR, 3.72), reports reinterpreting results of outside examinations (vs. in-house) (OR, 1.55), and procedural studies (vs. diagnostic) (OR, 1.91) (all P<.001). A dictation microphone upgrade did not affect the error rate (P=.06). The error rate decreased over time (P<.001).
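The abstract describes comparing error proportions with χ² tests and estimating odds ratios with multiple logistic regression. The sketch below illustrates that general style of analysis on hypothetical report-level data; the variable names, counts, and random values are illustrative assumptions, not data from the study.

```python
# A minimal sketch of a chi-square comparison and a multiple logistic regression
# of the kind described in the abstract. All data below are simulated and the
# column names are hypothetical; this is not the authors' analysis code.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000  # hypothetical number of signed reports

# One row per report: binary error flag and binary report characteristics.
reports = pd.DataFrame({
    "has_error": rng.integers(0, 2, n),        # 1 if the proofreader found any error
    "cross_sectional": rng.integers(0, 2, n),  # vs. plain radiography
    "outside_exam": rng.integers(0, 2, n),     # reinterpretation of an outside study
    "procedural": rng.integers(0, 2, n),       # vs. diagnostic study
})

# Chi-square test: compare error proportions between two report categories.
table = pd.crosstab(reports["cross_sectional"], reports["has_error"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")

# Multiple logistic regression: exponentiated coefficients are odds ratios,
# analogous to the ORs reported for cross-sectional, outside, and procedural studies.
model = smf.logit(
    "has_error ~ cross_sectional + outside_exam + procedural", data=reports
).fit(disp=0)
print(np.exp(model.params))  # odds ratios per predictor
```

With simulated random data the odds ratios will hover near 1; the point is only the shape of the comparison, in which each predictor's OR is adjusted for the others.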

MeSH terms

  • Humans
  • Radiography* / methods
  • Radiography* / standards
  • Radiology Information Systems
  • Semantics
  • Speech Recognition Software* / standards