J Dent Res. 1986 Feb;65(2):128-30.

Percent agreement, Pearson's correlation, and kappa as measures of inter-examiner reliability.


Percent agreement and Pearson's correlation coefficient are frequently used to represent inter-examiner reliability, but these measures can be misleading. The use of percent agreement to measure inter-examiner agreement should be discouraged, because it does not take into account the agreement due solely to chance. Caution must be used in the interpretation of Pearson's correlation, because it is unaffected by the presence of any systematic biases. Analyses of data from a reliability study show that even though percent agreement and kappa were consistently high among three examiners, the reliability measured by Pearson's correlation was inconsistent. This study shows that correlation and kappa can be used together to uncover non-random examiner error.
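The pitfall the abstract describes can be illustrated with a small sketch (the data below are invented for illustration, not from the study): when one examiner scores systematically one point higher than another, Pearson's correlation is perfect, while percent agreement and kappa both expose the disagreement.

```python
# Hypothetical example: two examiners score the same 10 teeth on a 0-3
# scale. Examiner B scores exactly one point higher than examiner A, a
# purely systematic bias.

def percent_agreement(a, b):
    """Fraction of items on which the two examiners give identical scores."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pearson_r(a, b):
    """Pearson product-moment correlation between the two score lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_chance = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_obs - p_chance) / (1 - p_chance)

exam_a = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
exam_b = [x + 1 for x in exam_a]  # systematic +1 bias

print(percent_agreement(exam_a, exam_b))  # 0.0 -- no raw agreement at all
print(pearson_r(exam_a, exam_b))          # 1.0 -- r is blind to the bias
print(cohens_kappa(exam_a, exam_b))       # negative -- worse than chance
```

Because kappa is chance-corrected while correlation ignores additive shifts, examining the two side by side (as the paper recommends) flags the examiner's systematic error that either statistic alone could miss.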
