JAMA Netw Open. 2019 Oct 2;2(10):e1912869. doi: 10.1001/jamanetworkopen.2019.12869.

Feasibility of Using Real-World Data to Replicate Clinical Trial Evidence.

Author information

1. Yale School of Medicine, New Haven, Connecticut.
2. Department of Medicine, University of California, San Francisco School of Medicine, San Francisco.
3. Section of Cardiology, San Francisco Veterans Affairs Health Care System, San Francisco, California.
4. Division of Health Care Policy & Research, Mayo Clinic, Rochester, Minnesota.
5. Epidemiology Analytics, Janssen Research and Development, Titusville, New Jersey.
6. Observational Health Data Sciences and Informatics (OHDSI), Department of Biomedical Informatics, Columbia University Medical Center, New York, New York.
7. Section of General Internal Medicine and the National Clinician Scholars Program, Yale School of Medicine, New Haven, Connecticut.
8. Department of Health Policy and Management, Yale School of Public Health, New Haven, Connecticut.
9. Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, New Haven, Connecticut.

Abstract

Importance:

Although randomized clinical trials are considered to be the criterion standard for generating clinical evidence, the use of real-world evidence to evaluate the efficacy and safety of medical interventions is gaining interest. Whether observational data can be used to address the same clinical questions being answered by traditional clinical trials is still unclear.

Objective:

To identify the number of clinical trials published in high-impact journals in 2017 that could be feasibly replicated using observational data from insurance claims and/or electronic health records (EHRs).

Design, Setting, and Participants:

In this cross-sectional analysis, PubMed was searched to identify all US-based clinical trials, regardless of randomization, published between January 1, 2017, and December 31, 2017, in the top 7 highest-impact general medical journals of 2017. Trials were excluded if they did not involve human participants, did not use end points that represented clinical outcomes among patients, were not characterized as clinical trials, or had no recruitment sites in the United States.

Main Outcomes and Measures:

The primary outcomes were the number and percentage of trials for which the intervention, indication, trial inclusion and exclusion criteria, and primary end points could be ascertained from insurance claims and/or EHR data.

Results:

Of the 220 US-based trials analyzed, 33 (15.0%) could be replicated using observational data because their intervention, indication, inclusion and exclusion criteria, and primary end points could be routinely ascertained from insurance claims and/or EHR data. Of the 220 trials, 86 (39.1%) had an intervention that could be ascertained from insurance claims and/or EHR data. Among these 86 trials, 62 (72.1%) also had an indication that could be ascertained. Of these 62 trials, 45 (72.6%) had at least 80% of inclusion and exclusion criteria that could be ascertained. Of these 45 trials, 33 (73.3%) had at least 1 primary end point that could be ascertained.

Conclusions and Relevance:

This study found that only 15% of the US-based clinical trials published in high-impact journals in 2017 could be feasibly replicated through analysis of administrative claims or EHR data. This finding suggests the potential for real-world evidence to complement clinical trials, both by examining the concordance between randomized experiments and observational studies and by comparing the generalizability of the trial population with the real-world population of interest.
