Fam Med. 2005 May;37(5):360-3.

Understanding interobserver agreement: the kappa statistic.

Author information

Robert Wood Johnson Clinical Scholars Program, University of North Carolina, USA. anthony_viera@med.unc.edu

Abstract

Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. Methods to overcome this limitation have been described.
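The abstract summarizes the statistic without reproducing its formula. As an illustration (not part of the original article), Cohen's kappa for two raters is commonly computed as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each rater's marginal label frequencies. The Python sketch below shows one way this could be calculated; the function name and the example ratings are illustrative assumptions, not data from the paper.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of
    agreement and p_e is the agreement expected by chance, given each rater's
    marginal label frequencies.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("raters must label the same non-empty set of items")
    n = len(rater_a)

    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: product of the two raters' marginal proportions,
    # summed over all categories either rater used.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in freq_a.keys() | freq_b.keys())

    if p_e == 1:  # degenerate case: both raters always use the same single label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers rating 10 films as "normal" or "abnormal".
a = ["normal", "normal", "abnormal", "normal", "abnormal",
     "normal", "normal", "abnormal", "normal", "normal"]
b = ["normal", "abnormal", "abnormal", "normal", "abnormal",
     "normal", "normal", "normal", "normal", "normal"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.8 observed vs 0.62 chance -> kappa ~ 0.47

A value of 1 corresponds to perfect agreement and 0 to agreement no better than chance, matching the interpretation given in the abstract; the prevalence sensitivity the authors note arises because p_e depends on how skewed the marginal frequencies are.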

PMID: 15883903 [Indexed for MEDLINE]
Free full text