J Am Med Inform Assoc. 2018 Apr 1;25(4):401-407. doi: 10.1093/jamia/ocx083.

Online physician ratings fail to predict actual performance on measures of quality, value, and peer review.

Author information

1. Division of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
2. Cedars-Sinai Center for Outcomes Research and Education (CS-CORE), Cedars-Sinai Medical Center, Los Angeles, CA, USA.
3. Department of Medicine, Division of Health Services Research, Cedars-Sinai Health System, Los Angeles, CA, USA.
4. Resource and Outcomes Management Department, Cedars-Sinai Health System, Los Angeles, CA, USA.
5. Department of Health Policy and Management, UCLA Fielding School of Public Health, Los Angeles, CA, USA.

Abstract

Objective:

Patients use online consumer ratings to identify high-performing physicians, but it is unclear whether ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance.

Materials and Methods:

We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores.

Results:

Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04 to 0.04), primary care physician scores (β-coefficient range, -0.01 to 0.3), or administrator scores (β-coefficient range, -0.2 to 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his or her score on another in 5 of 10 comparisons.
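
To illustrate the type of analysis reported above, the following is a minimal sketch of a multivariable linear regression of a performance score on mean consumer rating, adjusting for specialty. This is not the authors' code: the data are synthetic, the column names (performance_score, mean_rating, specialty) are illustrative assumptions, and the abstract does not specify which covariates the study's models actually included.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per physician (n = 78, as in the study).
# All values are randomly generated for illustration only.
rng = np.random.default_rng(0)
n = 78
df = pd.DataFrame({
    "performance_score": rng.normal(0, 1, n),       # specialty-specific composite score
    "mean_rating": rng.uniform(1, 5, n),            # mean online consumer rating (1-5 stars)
    "specialty": rng.choice(list("ABCDEFGH"), n),   # 8 medical/surgical specialties
})

# Multivariable model: performance score regressed on mean rating,
# with specialty as an adjustment covariate. The beta-coefficient for
# mean_rating is the quantity of the kind reported in the Results.
model = smf.ols("performance_score ~ mean_rating + C(specialty)", data=df).fit()
print(model.params["mean_rating"], model.pvalues["mean_rating"])
```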

Discussion:

Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance.

Conclusion:

Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.

PMID: 29025145
DOI: 10.1093/jamia/ocx083
