J Clin Epidemiol. 1996 Jul;49(7):775-82.

The meaning of kappa: probabilistic concepts of reliability and validity revisited.

Author information

Institute of Medical Statistics and Information Science, Freie Universität Berlin, Germany.

Abstract

A framework--the "agreement concept"--is developed to study the use of Cohen's kappa as well as alternative measures of chance-corrected agreement in a unified manner. Focusing on intrarater consistency it is demonstrated that for 2 x 2 tables an adequate choice between different measures of chance-corrected agreement can be made only if the characteristics of the observational setting are taken into account. In particular, a naive use of Cohen's kappa may lead to strikingly overoptimistic estimates of chance-corrected agreement. Such bias can be overcome by more elaborate study designs that allow for an unrestricted estimation of the probabilities at issue. When Cohen's kappa is appropriately applied as a measure of chance-corrected agreement, its values prove to be a linear--and not a parabolic--function of true prevalence. It is further shown how the validity of ratings is influenced by lack of consistency. Depending on the design of a validity study, this may lead, on purely formal grounds, to prevalence-dependent estimates of sensitivity and specificity. Proposed formulas for "chance-corrected" validity indexes fail to adjust for this phenomenon.
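For orientation, the standard chance-corrected agreement formula that the abstract scrutinizes is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e the agreement expected by chance from the raters' marginal distributions. The sketch below computes this conventional estimate for a 2 x 2 table; it is a minimal illustration of the textbook formula, not of the paper's "agreement concept" framework or its proposed study designs, and the function name and example counts are hypothetical.

    import numpy as np

    def cohens_kappa_2x2(table):
        """Conventional Cohen's kappa for a 2 x 2 contingency table.

        table[i][j] = number of items assigned to category i in the first
        rating and category j in the second rating.
        """
        t = np.asarray(table, dtype=float)
        n = t.sum()
        p_observed = np.trace(t) / n             # raw agreement p_o (diagonal mass)
        row_marg = t.sum(axis=1) / n             # marginal probabilities, rating 1
        col_marg = t.sum(axis=0) / n             # marginal probabilities, rating 2
        p_chance = np.dot(row_marg, col_marg)    # chance agreement p_e
        return (p_observed - p_chance) / (1.0 - p_chance)

    # Hypothetical example: 100 items, 85 on the agreement diagonal.
    # p_o = 0.85, p_e = 0.5, so kappa = (0.85 - 0.5) / (1 - 0.5) = 0.70.
    print(cohens_kappa_2x2([[40, 10], [5, 45]]))

The abstract's point is that reading such an estimate as chance-corrected agreement is justified only under observational settings that permit unrestricted estimation of the underlying probabilities; applied naively, the same arithmetic can be strikingly overoptimistic.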

PMID: 8691228
DOI: 10.1016/0895-4356(96)00011-x
[Indexed for MEDLINE]
