Alzheimer Dis Assoc Disord. 2010 Jul-Sep;24(3):264-8. doi: 10.1097/WAD.0b013e3181d489c6.

Sample size requirements for training to a kappa agreement criterion on clinical dementia ratings.

Author information

  • Department of Neurology, Georgetown University School of Medicine, Washington, DC 20057, USA. ret7@georgetown.edu

Abstract

The Clinical Dementia Rating (CDR) is a valid and reliable global measure of dementia severity. Diagnosis and transition across stages hinge on its consistent administration. Reports of CDR rating reliability have been based on 1 or 2 test cases at each severity level; agreement (kappa) statistics based on so few rated cases have large error, and their confidence intervals are incorrect. Simulations varied the number of test cases and their distribution across CDR stages to derive the sample size yielding 95% confidence that the estimated kappa is at least 0.60. We found that testing raters on 5 or more patients per CDR level (total N=25) will yield the desired confidence in estimated kappa, and that if the test involves greater representation of CDR stages that are harder to evaluate, at least 42 ratings are needed. Testing newly trained raters with at least 5 patients per CDR stage will provide valid estimation of rater consistency, given that the point estimate for kappa is roughly 0.80; fewer test cases increase the standard error, and an unequal distribution of test cases across CDR stages will lower kappa and increase error.
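
The sketch below illustrates the kind of simulation the abstract describes; it is not the authors' code. It assumes a rater scores n test cases at each of the 5 CDR stages (0, 0.5, 1, 2, 3), with a per-case agreement probability (0.84) chosen so the expected Cohen's kappa is roughly 0.80, and disagreements falling uniformly on the other stages. The 5th percentile of the simulated kappa distribution indicates the value the estimate exceeds with 95% probability, mirroring the 0.60 criterion.

```python
# Illustrative Monte Carlo sketch (assumed parameters, not the published simulation).
import numpy as np

rng = np.random.default_rng(0)
CDR_STAGES = [0.0, 0.5, 1.0, 2.0, 3.0]

def cohen_kappa(gold, rated, labels):
    """Unweighted Cohen's kappa between two label sequences."""
    gold, rated = np.asarray(gold), np.asarray(rated)
    po = np.mean(gold == rated)                                   # observed agreement
    pe = sum(np.mean(gold == c) * np.mean(rated == c) for c in labels)  # chance agreement
    return (po - pe) / (1.0 - pe)

def simulate_kappa(n_per_stage, p_agree=0.84, n_sims=5000):
    """Distribution of estimated kappa over repeated rater tests."""
    gold = np.repeat(CDR_STAGES, n_per_stage)
    kappas = []
    for _ in range(n_sims):
        agree = rng.random(gold.size) < p_agree
        # Disagreements go to a randomly chosen different stage.
        noise = np.array([rng.choice([s for s in CDR_STAGES if s != g])
                          for g in gold])
        rated = np.where(agree, gold, noise)
        kappas.append(cohen_kappa(gold, rated, CDR_STAGES))
    return np.array(kappas)

for n in (1, 2, 5):
    k = simulate_kappa(n)
    print(f"{n} case(s) per stage (N={5 * n}): "
          f"mean kappa = {k.mean():.2f}, 5th percentile = {np.percentile(k, 5):.2f}")
```

With only 1 or 2 cases per stage the 5th percentile of the estimated kappa falls well below 0.60 even though the mean is near 0.80, whereas 5 cases per stage tightens the distribution, consistent with the abstract's conclusion.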

PMID: 20473138 [PubMed - indexed for MEDLINE]
PMCID: PMC2924444 (Free PMC Article)
