Interrater reliability of measurements of comorbid illness should be reported

J Clin Epidemiol. 2006 Sep;59(9):926-33. doi: 10.1016/j.jclinepi.2006.02.006. Epub 2006 Jun 23.

Abstract

Objective: Comorbidity indices are commonly used to stratify patients to control for treatment selection bias. The objectives were twofold: to review how interrater reliability is reported in clinical research publications that use comorbidity indices, and to report the interrater reliability of four common indices in a particular research setting.

Study design and setting: Four trained abstractors reviewed the same 40 charts of patients with squamous cell carcinoma of the head and neck from a regional cancer center. Scores for the Charlson Index, the Index of Co-existent Disease, the Cumulative Illness Rating Scale, and the Kaplan-Feinstein Classification were calculated, and the intraclass correlation coefficient was used to assess interrater reliability.
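
The abstract does not specify which form of the intraclass correlation coefficient was used, so as a minimal sketch, the following computes one common choice for a fully crossed design like this one (every abstractor scores every chart): ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form of Shrout and Fleiss. The data here are simulated; the function name and the rater-noise model are illustrative assumptions, not the study's data or method.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_subjects, n_raters) matrix, e.g. 40 charts x 4 abstractors.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()  # between-subject
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between-rater
    ss_err = ss_total - ss_rows - ss_cols           # residual

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss (1979) formula for ICC(2,1)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: 4 abstractors score the same 40 charts.
# True index scores plus small per-rater noise (simulated, not study data).
rng = np.random.default_rng(0)
true_scores = rng.integers(0, 10, size=(40, 1))
ratings = true_scores + rng.integers(-1, 2, size=(40, 4))
print(f"ICC(2,1) = {icc_2_1(ratings.astype(float)):.2f}")
```

Under this two-way model, a high ICC means the variance between patients dominates the variance attributable to raters and residual disagreement, which is what "good interrater reliability" captures.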

Results: Details of abstractor training and the results of interrater reliability testing are not commonly reported. In our study setting, the Charlson Index showed excellent reliability and the other three indices showed acceptable reliability.

Conclusion: If the quality of a study using an index or scale is to be assessed, the reliability of the score-assignment process, including its interrater reliability, should be reported.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Carcinoma, Squamous Cell / complications
  • Comorbidity*
  • Data Interpretation, Statistical
  • Epidemiology / education*
  • Head and Neck Neoplasms / complications
  • Health Status Indicators
  • Humans
  • Observer Variation*
  • Selection Bias