Use of volunteer student abstractors for a retrospective cohort analysis: a study of inter-rater reliability

Am J Surg. 2013 May;205(5):552-6; discussion 556. doi: 10.1016/j.amjsurg.2013.01.021.

Abstract

Background: Little is known about the reliability of data collected by abstractors without professional medical training. This investigation sought to determine the level of agreement achieved by untrained volunteer abstractors as part of a study evaluating venous thromboembolism risk assessment in trauma patients.

Methods: Forty-nine paper charts were chosen at random from a volunteer-reviewed cohort of 2,339, and the volunteers' abstractions were compared with those of a single experienced abstractor. Inter-rater agreement was assessed using percent agreement, Cohen's kappa, and the prevalence-adjusted bias-adjusted kappa (PABAK).
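
For orientation, the two chance-corrected statistics named above have standard closed forms (a minimal sketch; p_o and p_e, the observed and chance-expected proportions of agreement, are notation introduced here rather than in the abstract):

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
\text{PABAK} = 2p_o - 1 \quad \text{(two raters, binary items)}.
\]

PABAK therefore depends only on raw agreement, whereas Cohen's kappa is also sensitive to how the coded categories are distributed across charts.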

Results: Of the 71 data points, 28 had perfect agreement. The average agreement across all charts was 97%. Data points with imperfect agreement had kappa values between .27 and .96 (mean, .75); one additional kappa value was zero despite a percent agreement of 94%. PABAK values ranged from .67 to .98 (mean, .91), an average increase of .17 over the corresponding kappa values.
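
The isolated kappa of zero alongside 94% raw agreement is characteristic of the well-known kappa paradox: when nearly every chart falls into the same category, the chance-expected agreement p_e approaches the observed agreement p_o, and kappa collapses toward zero even though the raters rarely disagree. PABAK removes this prevalence effect; for a binary item with 94% agreement it evaluates to

\[
\text{PABAK} = 2p_o - 1 = 2(0.94) - 1 = 0.88,
\]

illustrating how PABAK can remain high for items whose kappa is deflated by extreme prevalence.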

Conclusions: Volunteer abstractors demonstrated outstanding inter-rater reliability; however, limitations of interpretation can influence reliability.

Publication types

  • Evaluation Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Abstracting and Indexing / statistics & numerical data*
  • Adolescent
  • Adult
  • Aged
  • Aged, 80 and over
  • Cohort Studies
  • Humans
  • Medical Records*
  • Middle Aged
  • Observer Variation
  • Retrospective Studies
  • Risk Assessment
  • Students, Medical*
  • Students, Nursing*
  • Venous Thromboembolism / etiology
  • Volunteers*
  • Wounds and Injuries / complications
  • Young Adult