J Am Med Inform Assoc. 2005 May-Jun;12(3):296-8. Epub 2005 Jan 31.

Agreement, the F-measure, and reliability in information retrieval.

Author information

Department of Medical Informatics, Columbia University, 622 West 168th Street, VC5, New York, NY 10032, USA. hripcsak@columbia.edu

Abstract

Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the kappa statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement (or the equivalent F-measure) may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.
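
As a rough numeric illustration of the abstract's claim (a sketch, not taken from the paper itself), the Python snippet below assumes a hypothetical 2x2 contingency table for two raters, with a joint positive decisions, b and c disagreements, and d joint negatives. Treating one rater as the gold standard, the F-measure equals positive specific agreement, 2a / (2a + b + c), and Cohen's kappa converges to that same value as the number of joint negative cases d grows large.

def p_pos(a, b, c):
    # positive specific agreement: 2a / (2a + b + c); independent of d
    return 2 * a / (2 * a + b + c)

def f_measure(a, b, c):
    # F-measure with one rater treated as the gold standard
    precision = a / (a + b)
    recall = a / (a + c)
    return 2 * precision * recall / (precision + recall)

def kappa(a, b, c, d):
    # Cohen's kappa over the full 2x2 table, including joint negatives d
    n = a + b + c + d
    p_o = (a + d) / n
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

a, b, c = 40, 10, 5                        # hypothetical counts, for illustration only
print(p_pos(a, b, c), f_measure(a, b, c))  # identical: both print 0.8421...
for d in (10, 100, 1000, 100000):          # let the number of joint negatives grow
    print(d, round(kappa(a, b, c, d), 4))  # 0.4179, 0.7726, 0.8347, 0.8420 -> approaches 0.8421

With these (made-up) counts, kappa depends strongly on d when negatives are few, but once d dominates the table it converges to the positive specific agreement, which is exactly the behavior the abstract describes.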

PMID: 15684123
PMCID: PMC1090460
DOI: 10.1197/jamia.M1733
[Indexed for MEDLINE] Free PMC Article
