J Clin Epidemiol. 1997 Dec;50(12):1395-404.

Comprehensive reliability assessment and comparison of quality indicators and their components.

Author information

1. Health Care Quality Analysis, Amherst, NH 03031, USA.

Abstract

To test whether conventional data reliability assessment overestimates reliability, an assessment and a comparison of the reliability of complex quality indicators and their simpler components were conducted. Medical records of 1078 Medicare cases with principal diagnoses of initial episodes of acute myocardial infarction (AMI) were independently reabstracted at two national Clinical Data Abstraction Centers (CDACs). The inter-rater agreement beyond chance (kappa) between reabstracted and original quality indicators and key components was computed and compared. Results showed excellent agreement (kappas ranging from 0.88 to 0.95) for simple determinations of whether standard medical therapies were provided. Repeatability of eligibility status and the more complex determinations of whether "ideal" candidates were not treated showed moderate to excellent kappa values ranging from 0.41 to 0.79. A planned comparison of five similar quality indicators and their key components showed that the simpler treatment components, as a group, had significantly higher kappas than the more complexly derived eligibility components and composite indicators (Fisher's exact, p < 0.02). Reliability assessment of quality indicators should be based upon the repeatability of the whole indicator, accounting for both data and logic, and not just one simple element.
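The agreement statistic reported throughout the abstract is Cohen's kappa, the observed agreement between two abstractors corrected for the agreement expected by chance. A minimal sketch of the computation for two raters (an illustration of the statistic only, not the CDACs' actual abstraction software; the example data are hypothetical):

```python
def cohens_kappa(a, b):
    """Inter-rater agreement beyond chance for two raters' categorical labels."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of cases where the two raters match.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal rate for that category.
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no determinations ("therapy provided?") by two
# abstractors on ten records.
r1 = ["y", "y", "n", "y", "n", "y", "y", "n", "y", "n"]
r2 = ["y", "y", "n", "y", "y", "y", "y", "n", "n", "n"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.58
```

On this toy data the raters agree on 8 of 10 records (p_o = 0.80) but would agree on 52% by chance alone (p_e = 0.52), giving kappa ≈ 0.58, in the "moderate" band the abstract reports for the more complex eligibility determinations.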

PMID: 9449943
DOI: 10.1016/s0895-4356(97)00218-7
[Indexed for MEDLINE]
