Breast. 2005 Aug;14(4):269-75.

Categorizing breast mammographic density: intra- and interobserver reproducibility of BI-RADS density categories.

Author information

1. Centro per lo Studio e la Prevenzione Oncologica, Viale A. Volta 171, I-50131 Firenze, Italy; Screening and Test Evaluation Programme, School of Public Health, University of Sydney, Australia. s.ciatto@cspo.it

Abstract

The inter- and intraobserver agreement (kappa statistic) in reporting according to Breast Imaging Reporting and Data System (BI-RADS®) breast density categories was tested among 12 dedicated breast radiologists reading a digitized set of 100 two-view mammograms. Average intraobserver agreement was substantial (kappa=0.71, range 0.32-0.88) on a four-grade scale (D1/D2/D3/D4) and almost perfect (kappa=0.81, range 0.62-1.00) on a two-grade scale (D1-2/D3-4). Average interobserver agreement was moderate (kappa=0.54, range 0.02-0.77) on a four-grade scale and substantial (kappa=0.71, range 0.31-0.88) on a two-grade scale. Major disagreement was found for the intermediate categories (D2=0.25, D3=0.28). Categorization of breast density according to BI-RADS is feasible, and consistency is good within readers and reasonable between readers. Interobserver inconsistency does occur, and checking the adoption of proper criteria through a proficiency test and appropriate training might be useful. As inconsistency is probably due to erroneous perception of the classification criteria, standard sets of reference images should be made available for training.
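The kappa statistic reported above measures agreement beyond chance between raters. A minimal sketch of Cohen's kappa for two raters follows; the BI-RADS density readings shown are hypothetical illustrations, not data from this study:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence of the two raters.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(count_a) | set(count_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical four-grade BI-RADS density readings by two readers.
reader1 = ["D1", "D2", "D2", "D3", "D4", "D3", "D1", "D2"]
reader2 = ["D1", "D2", "D3", "D3", "D4", "D4", "D1", "D2"]
print(round(cohens_kappa(reader1, reader2), 2))  # → 0.67
```

Collapsing to the two-grade scale (D1-2 vs. D3-4) is a simple relabeling of the inputs before calling the same function, which is why agreement typically rises on the coarser scale.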

PMID:
16085233
DOI:
10.1016/j.breast.2004.12.004
[Indexed for MEDLINE]
