J Clin Epidemiol. 2018 Jun;98:133-143. doi: 10.1016/j.jclinepi.2017.11.013. Epub 2017 Nov 24.

Poor performance of clinical prediction models: the harm of commonly applied methods.

Author information

1. Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands; Department of Public Health, Erasmus MC, Rotterdam, The Netherlands. Electronic address: e.w.steyerberg@lumc.nl.
2. Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA 02215, USA.
3. Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA; Department of Health Research and Policy, Stanford University School of Medicine, Stanford, CA, USA; Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA, USA; Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA.
4. Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands; Department of Development and Regeneration, KU Leuven, Leuven, Belgium.
5. Herbert Irving Comprehensive Cancer Center and Division of Digestive and Liver Diseases, Columbia University Medical Center, New York, NY, USA.

Abstract

OBJECTIVE:

To evaluate limitations of common statistical modeling approaches in deriving clinical prediction models and explore alternative strategies.

STUDY DESIGN AND SETTING:

A previously published model predicted the likelihood of having a mutation in germline DNA mismatch repair genes at the time of diagnosis of colorectal cancer. This model was based on a cohort where 38 mutations were found among 870 participants, with validation in an independent cohort with 35 mutations. The modeling strategy included stepwise selection of predictors from a pool of over 37 candidate predictors and dichotomization of continuous predictors. We simulated this strategy in small subsets of a large contemporary cohort (2,051 mutations among 19,866 participants) and made comparisons to other modeling approaches. All models were evaluated according to bias and discriminative ability (concordance index, c) in independent data.
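The discriminative ability measure used throughout (the concordance index, c) and the cost of dichotomizing a continuous predictor can be illustrated in a few lines. The sketch below uses simulated data (not the study's cohort) and a straightforward pairwise definition of c; all variable names and the logistic data-generating model are hypothetical choices for illustration only.

```python
import numpy as np

def concordance_index(y_true, y_score):
    """Concordance (c) index: the fraction of event/non-event pairs in which
    the event case receives the higher predicted score (ties count as 0.5)."""
    events = y_score[y_true == 1]
    nonevents = y_score[y_true == 0]
    diff = events[:, None] - nonevents[None, :]  # all pairwise comparisons
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Illustrative simulated data: one informative continuous predictor.
rng = np.random.default_rng(7)
x = rng.normal(size=500)
y = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x)))  # logistic outcome model

c_continuous = concordance_index(y, x)                        # x used as-is
c_dichotomized = concordance_index(y, (x > 0).astype(float))  # x split at 0
# Dichotomization discards the ordering within each half of the split,
# so c is typically lower than with the continuous predictor.
```

A c of 0.5 corresponds to chance discrimination and 1.0 to perfect separation of events from non-events, which is why the abstract reports values such as c = 0.74 and c = 0.852.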

RESULTS:

We found over 50% bias for five of six originally selected predictors, unstable model specification, and poor performance at validation (median c = 0.74). A small validation sample hampered stable assessment of performance. Model prespecification based on external knowledge and use of continuous predictors led to better performance (c = 0.836 and c = 0.852 with 38 and 2,051 events, respectively).

CONCLUSION:

Prediction models perform poorly if based on small numbers of events and developed with common but suboptimal statistical approaches. Alternative modeling strategies to best exploit available predictive information need wider implementation, with collaborative research to increase sample sizes.

KEYWORDS:

Events per variable; Prediction model; Regression analysis; Sample size; Simulation; Validation

PMID:
29174118
DOI:
10.1016/j.jclinepi.2017.11.013
[Indexed for MEDLINE]
