Pharmacoepidemiol Drug Saf. 2016 Sep;25(9):973-81. doi: 10.1002/pds.4065. Epub 2016 Jul 14.

Design and analysis choices for safety surveillance evaluations need to be tuned to the specifics of the hypothesized drug-outcome association.

Author information

1. Reagan-Udall Foundation for the FDA, Innovation in Medical Evidence Development and Surveillance, Washington, DC, USA.
2. Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA.
3. Office of Biostatistics, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, MD, USA.
4. Cardiovascular Health Research Unit, Department of Epidemiology, University of Washington, Seattle, WA, USA.
5. Group Health Research Institute, Group Health Cooperative, Seattle, WA, USA.
6. Office of the Center Director, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, MD, USA.
7. Biostatistics Unit, Group Health Research Institute, Department of Biostatistics, University of Washington, Seattle, WA, USA.
8. Cardiovascular Health Research Unit, Departments of Medicine, Epidemiology, and Health Services, University of Washington, Seattle, WA, USA.
9. Office of Surveillance and Epidemiology, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, MD, USA.
10. RWE Systems, IMS Health, Burlington, MA, USA.
11. WHISCON, Newton, MA, USA.

Abstract

BACKGROUND:

We reviewed the results of the Observational Medical Outcomes Partnership (OMOP) 2010 Experiment in hopes of finding examples where apparently well-designed drug studies repeatedly produce anomalous findings. OMOP had applied thousands of designs and design parameters to 53 drug-outcome pairs across 10 electronic data resources. Our intent was to use this repository to elucidate some sources of error in observational studies.

METHOD:

From the 2010 OMOP Experiment, we sought drug-outcome-method combinations (DOMCs) that met consensus design criteria yet repeatedly produced results contrary to expectation. We set aside DOMCs for which we could not agree on the suitability of the designs, then selected for in-depth scrutiny one drug-outcome pair, analyzed by a seemingly plausible methodological approach, whose results consistently disagreed with the a priori expectation.

RESULTS:

The OMOP "all-by-all" assessment of possible DOMCs yielded many combinations that researchers would not choose as actual study options. Among those that passed a first level of scrutiny, two of the seven drug-outcome pairs for which there were plausible research designs had anomalous results. The use of benzodiazepines was unexpectedly associated with acute renal failure and with upper gastrointestinal bleeding. We chose the latter as an example for in-depth study. The factitious appearance of a bleeding risk may have been partly driven by an excess of procedures on the first day of treatment. A risk window definition that excluded the first day largely removed the spurious association.

CONCLUSION:

One cause of reproducible "error" may be repeated failure to tie design choices closely enough to the research question at hand. Copyright © 2016 John Wiley & Sons, Ltd.

KEYWORDS:

electronic health records; insurance claim data; medical product safety; monitoring; pharmacoepidemiology

PMID: 27418432
DOI: 10.1002/pds.4065
[Indexed for MEDLINE]
