J Clin Epidemiol. 2017 Nov;91:95-110. doi: 10.1016/j.jclinepi.2017.07.014. Epub 2017 Aug 24.

Cherry-picking by trialists and meta-analysts can drive conclusions about intervention efficacy.

Author information

1. Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, 615 North Wolfe Street, Baltimore, MD 21205. Electronic address: evan.mayo-wilson@jhu.edu.
2. Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, 615 North Wolfe Street, Baltimore, MD 21205.
3. Department of Surgery, Johns Hopkins University School of Medicine, 600 North Wolfe Street, Baltimore, MD.
4. The TMJ Association, Ltd., P.O. Box 26770, Milwaukee, WI 53226.
5. Pharmaceutical Health Services Research, University of Maryland School of Pharmacy, 20 North Pine Street, Baltimore, MD 21201.
6. Department of Neurology-Neuromuscular Medicine, Johns Hopkins University School of Medicine, 733 North Broadway, Baltimore, MD 21205.
7. Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, 5510 Nathan Shock Drive, Suite 100, Baltimore, MD 21224.
8. Department of Mental Health, Johns Hopkins University Bloomberg School of Public Health, 624 N Broadway, Baltimore, MD 21205.
9. Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, 624 North Broadway, Baltimore, MD 21205.
10. Department of Psychiatry and Behavioral Sciences, The Women's Mood Disorders Center, The Johns Hopkins Hospital, 550 North Broadway, Baltimore, MD 21205.
11. Welch Medical Library, Johns Hopkins University School of Medicine, 2024 Bldg. 1-213, Baltimore, MD 21287.
12. Johns Hopkins University Peabody Institute, 1 East Mount Vernon Place, Baltimore, MD 21202.
13. Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Malone Hall, 3400 N. Charles Street, Baltimore, MD 21218.

Abstract

OBJECTIVES:

The objective of this study was to determine whether disagreements among multiple data sources affect systematic reviews of randomized clinical trials (RCTs).

STUDY DESIGN AND SETTING:

Eligible RCTs examined gabapentin for neuropathic pain and quetiapine for bipolar depression, reported in public (e.g., journal articles) and nonpublic sources (clinical study reports [CSRs] and individual participant data [IPD]).

RESULTS:

We found 21 gabapentin RCTs (74 reports, 6 IPD) and 7 quetiapine RCTs (50 reports, 1 IPD); most were reported in journal articles (18/21 [86%] and 6/7 [86%], respectively). When available, CSRs contained the most trial design and risk of bias information. CSRs and IPD contained the most results. For the outcome domains "pain intensity" (gabapentin) and "depression" (quetiapine), we found single trials with 68 and 98 different meta-analyzable results, respectively; by purposefully selecting one meta-analyzable result for each RCT, we could change the overall result for pain intensity from effective (standardized mean difference [SMD] = -0.45; 95% confidence interval [CI]: -0.63 to -0.27) to ineffective (SMD = -0.06; 95% CI: -0.24 to 0.12). We could change the effect for depression from a medium effect (SMD = -0.55; 95% CI: -0.85 to -0.25) to a small effect (SMD = -0.26; 95% CI: -0.41 to -0.10).
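The selection effect described above can be illustrated with a fixed-effect, inverse-variance meta-analysis: each trial's SMD is weighted by the inverse of its squared standard error, so which "meta-analyzable result" is chosen per trial directly shifts the pooled SMD and its 95% CI. The sketch below is illustrative only; the trial-level SMDs and standard errors are hypothetical, not data from this study.

```python
# Minimal sketch of fixed-effect inverse-variance pooling of SMDs.
# All trial-level numbers are hypothetical (assumptions, not study data).
import math

def pool_smd(smds, ses):
    """Pool per-trial SMDs with inverse-variance weights.

    Returns the pooled SMD and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Two selections of results from the same three hypothetical trials:
# picking different per-trial results changes the pooled conclusion.
ses = [0.10, 0.12, 0.11]
selection_a = [-0.50, -0.40, -0.45]  # one choice per trial -> "effective"
selection_b = [-0.10, 0.00, -0.08]   # another choice -> near-null

print(pool_smd(selection_a, ses))  # pooled SMD clearly below zero
print(pool_smd(selection_b, ses))  # pooled SMD near zero
```

Under this standard pooling rule, the same set of trials can yield a clearly negative pooled SMD or a near-null one, depending solely on which eligible result is extracted from each trial.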

CONCLUSIONS:

Disagreements across data sources affect the effect size, statistical significance, and interpretation of trials and meta-analyses.

KEYWORDS:

Clinical trials; Meta-analysis; Reporting bias; Risk of bias assessment; Selective outcome reporting; Systematic reviews

PMID: 28842290
DOI: 10.1016/j.jclinepi.2017.07.014
[Indexed for MEDLINE]