J Biomed Inform. 2017 Dec;76:9-18. doi: 10.1016/j.jbi.2017.10.008. Epub 2017 Oct 24.

Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

Author information

1. Department of Biomedical Informatics, Vanderbilt University Medical Center, United States; Department of Medicine, Vanderbilt University Medical Center, United States; Department of Psychiatry, Vanderbilt University Medical Center, United States. Electronic address: colin.walsh@vanderbilt.edu.
2. Department of Biomedical Informatics, Vanderbilt University Medical Center, United States.
3. Department of Biomedical Informatics, Columbia University, United States.

Abstract

BACKGROUND:

Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are discussed less frequently. Calibration is the agreement between a model's predicted probabilities and the actual outcome prevalence. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision-analytic approach to calibration and to selecting an optimal intervention threshold may help maximize the impact of readmission-risk and other preventive interventions.

OBJECTIVES:

To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses.

MATERIALS AND METHODS:

Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed.
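The calibration methods compared above share a common recipe: refit the model's predicted risks against observed outcomes in held-out validation data. A minimal sketch of one of them, logistic calibration (fitting a Calibration Slope and Intercept on the logit scale), is shown below; this is an illustrative re-implementation with plain gradient descent, not the paper's code, and it assumes raw probabilities strictly inside (0, 1).

```python
import numpy as np

def logistic_calibration(raw_probs, y, lr=0.5, n_iter=2000):
    """Fit a calibration slope (a) and intercept (b) on the logit scale:
    calibrated logit = a * logit(p) + b.  Sketch via gradient descent on
    the log loss; a well-calibrated model yields a ~ 1 and b ~ 0."""
    z = np.log(raw_probs / (1 - raw_probs))  # logits of raw predictions
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * z + b)))  # current calibrated probs
        a -= lr * np.mean((p - y) * z)          # gradient of log loss wrt a
        b -= lr * np.mean(p - y)                # gradient of log loss wrt b
    return a, b

def apply_calibration(raw_probs, a, b):
    """Map raw probabilities through the fitted slope/intercept."""
    z = np.log(raw_probs / (1 - raw_probs))
    return 1.0 / (1.0 + np.exp(-(a * z + b)))
```

Platt Scaling is structurally similar but is classically fit on the model's raw scores rather than on the logits of already-probabilistic predictions; Prevalence Adjustment instead shifts only the intercept to match the new setting's outcome rate, which is why it needs the least validation data.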

RESULTS:

C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best; detecting this difference required analyzing multiple calibration metrics simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862.
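The relationship between risk threshold and tolerable intervention cost can be illustrated with a simplified expected-cost model (an assumption for illustration; the paper's utility framework is richer): intervening costs C, while not intervening incurs the readmission cost R with probability p, scaled by the intervention's effectiveness e, so intervention pays off when p * R * e > C.

```python
def optimal_threshold(intervention_cost, readmission_cost, effectiveness=1.0):
    """Break-even risk under the simplified model: intervene when
    p * R * e > C, i.e. when p exceeds C / (R * e)."""
    return intervention_cost / (readmission_cost * effectiveness)

def max_tolerable_cost(threshold, readmission_cost, effectiveness=1.0):
    """Inverse view: at risk threshold t, intervention is worthwhile
    only while its cost stays below t * R * e."""
    return threshold * readmission_cost * effectiveness
```

Under this sketch (with effectiveness set to 1), the abstract's figures are mutually consistent: a readmission cost of $11,862 and a maximum tolerable intervention cost of about $1720 correspond to a risk threshold of roughly 0.145.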

CONCLUSIONS:

Choice of calibration method depends on the availability of validation data and on performance. Improperly calibrated models may raise intervention costs, as measured via clinical usefulness. Decision-makers must understand the underlying utilities or costs inherent in the use case at hand to assess usefulness; doing so yields both the optimal risk threshold for triggering intervention and limits on tolerable intervention cost.

KEYWORDS:

Calibration; Clinical usefulness; Predictive analytics; Readmissions; Utility analysis

PMID: 29079501
PMCID: PMC5716927
DOI: 10.1016/j.jbi.2017.10.008
[Indexed for MEDLINE]
Free PMC Article
