Pharmacoepidemiol Drug Saf. Author manuscript; available in PMC Aug 6, 2010.
PMCID: PMC2917262
NIHMSID: NIHMS202990

A basic study design for expedited safety signal evaluation based on electronic healthcare data

SUMMARY

Active drug safety monitoring based on longitudinal electronic healthcare databases (a Sentinel System), as outlined in recent FDA-commissioned reports, consists of several interlocked processes, including signal generation, signal strengthening, and signal evaluation. Once a signal of a potential drug safety issue is generated, signal strengthening and signal evaluation have to follow in short sequence in order to quickly provide as much information about the triggering drug-event association as possible.

This paper proposes a basic study design based on the incident user cohort design for expedited signal evaluation in longitudinal healthcare databases. It will not resolve all methodological issues nor will it fit all study questions arising within the framework of a Sentinel System. It should rather be seen as a guidance that will fit the majority of situations and serve as a starting point for adaptations to specific studies.

Such an approach will expedite and structure the process of study development and highlight specific assumptions, which is particularly valuable in a Sentinel System where signals are by definition preliminary and evaluation of signals is time critical.

Keywords: healthcare databases, cohort study, incident user design, propensity scores, pharmacoepidemiology, Sentinel System

INTRODUCTION

Active drug safety monitoring based on longitudinal electronic healthcare databases (a Sentinel System), as outlined in recent FDA-commissioned reports,1 consists of several interlocked processes, including signal generation, signal strengthening, and signal evaluation. Once a signal of a potential drug safety issue is generated, signal strengthening and signal refutation/confirmation must follow in short sequence, even in parallel, and provide as much information about the triggering drug-event association as possible. At this stage, speed and high accuracy of analysis are of the essence. Information on true drug safety signals should not be withheld from physicians and patients, but false positive signals may cause substantial harm if they limit access to safe medications.2 This paper will focus on the last step in a Sentinel System: the fast implementation of pharmacoepidemiologic investigations to refute (or fail to refute) a safety signal. The paper focuses on design elements that may shorten the time necessary to design and implement a specific study.

Elaborations are based on the suggestion that the majority of drug safety signals generated by a Sentinel System can be investigated with a default cohort design that may be tailored to the drug-event pair of interest. This paper will not dictate one design, but rather will suggest a robust study design as a starting point for fast adaptation and implementation of an in-depth epidemiologic evaluation. Adaptations of this design to specific study needs are encouraged and will often make transparent the trade-offs between high validity and expeditious decision-making.

This paper proposes such a basic study design by combining well-known design elements and analytic strategies. It also provides a flowchart for implementing and adapting the design and discusses advantages and limitations as compared to alternative designs and analyses.

SIGNAL EVALUATION IN A SENTINEL SYSTEM

To keep the discussion independent of any specific implementation of a drug safety Sentinel System, the following assumptions are made:

  1. A pair of a binary drug exposure and a binary adverse event has triggered a signal for a potential drug safety issue through some Sentinel System process.3
  2. The signal was raised by comparing the study drug to an active substance with similar indications, rather than a non-user or usual care group.
  3. We have a longitudinal health care database available that is similar to the one that gave rise to the signal, but its observations are independent. This assumption often may not be necessary, as outlined by Walker,4 but a discussion of this issue is not the focus of this paper.
  4. The longitudinal healthcare data available for the analysis are largely claims data from commercial or public health insurers with complete recording of healthcare encounters and prescription dispensing. Such claims data might be supplemented by information on functional and cognitive status, over-the-counter drug use,5 or laboratory test results.6

AN INCIDENT USER COHORT DESIGN

Consider a basic cohort design comparing new users of one treatment to new users of a comparison treatment for the same or similar indication. Further, consider that covariate information will be assessed in the longitudinal health care claims stream during the 6 months preceding treatment initiation. Follow-up starts the day after treatment initiation (Figure 1).
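The timeline in Figure 1 can be made concrete with a minimal sketch, assuming a 183-day (roughly 6-month) covariate assessment window; the function name and record layout are illustrative only, not part of any Sentinel System specification:

```python
from datetime import date, timedelta

COVARIATE_WINDOW_DAYS = 183  # ~6 months, as in the design described above

def cohort_record(initiation_date):
    """Key dates for one new user under the basic design: a fixed-length
    covariate assessment window ending at treatment initiation, with
    follow-up starting the day after initiation (Figure 1)."""
    return {
        "covariate_start": initiation_date - timedelta(days=COVARIATE_WINDOW_DAYS),
        "covariate_end": initiation_date,  # covariates assessed before this day
        "follow_up_start": initiation_date + timedelta(days=1),
    }
```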

Figure 1
A basic incident user cohort design in longitudinal health care databases. A fixed-length covariate assessment period precedes the initiation of exposure and serves as a washout period of any earlier exposures. Follow-up starts after exposure status is ...

For several reasons, such an incident user cohort study is a broadly applicable design that is fairly robust against investigator error.

Sources of exposure variation

Consideration of the sources of exposure variation is a fundamental decision point in design choice. In a causal experiment, one would expose a patient to an agent and observe the agent's effect on his or her health, then rewind time, leave the patient unexposed, and keep all other factors constant to establish a counterfactual experience.7 Since this experiment is impossible, the next logical expansion of the experiment is to generate or observe exposure variation within the same patient but over time. If we observe time-varying drug use that has a short washout period, and the adverse event of interest has a rapid onset, then we can use the case-crossover design (Figure 2).8 An advantage of the case-crossover design is that time-invariant patient characteristics are implicitly controlled. In pharmacoepidemiology, however, treatment choice might change with changes in health status over time and thus introduce within-patient confounding. This may explain why we see few applications of the case-crossover design in drug safety research.9 For most safety studies, we will utilize variation in exposure between individual patients, and we will therefore apply a cohort study design.

Figure 2
Study design choice by source of exposure variation. The level of drug exposure variation determines the study design choices available for assessing an exposure-event association

Incident user design

There are several advantages to identifying patients who start a new drug and begin follow-up after initiation—similar to a parallel group randomized controlled trial which establishes an inception cohort.10 As medications have been started in patients of both the study group and the comparison group, they have been equally evaluated by physicians who concluded that they might benefit from the newly prescribed drug. This makes the treatment groups similar in characteristics that might not be observable in the study database.11 The clear temporal sequence of confounder adjustment before treatment initiation in an incident user design avoids mistakenly adjusting for consequences of treatment (intermediates) rather than predictors for treatment, a possible reason for over-adjustment.12 Identifying two active treatment groups further reduces the chances of immortal time bias, a mistake that most frequently emerges when defining a ‘non-user’ comparison group in healthcare databases.13 Because of the well-defined starting point of inception cohorts, it is possible to assess whether and in what form hazards vary over time by stratifying on duration of treatment (Figure 3). This is particularly useful when studying newly marketed medications: the incident user design avoids comparing populations predominantly composed of first-time users of a newly marketed drug with a population predominantly composed of prevalent users of the old drug (Figure 4). Such a comparison may be biased because patients who stay on treatment for a longer time may be less susceptible to the event of interest.14

Figure 3
Time-varying hazards: the risk of an adverse outcome increases or decreases as a function of time since first use. The shape of a time-varying hazard function depends on the type of event and is caused by a combination of the underlying biology and changes ...
Figure 4
Newly marketed medications and the advantage of the new user design. Medications that were marketed a while ago have reached equilibrium with many prevalent users and few new users while newly marketed drugs by definition will have a high proportion of ...

A common criticism of the incident user design is that excluding prevalent users will reduce the study size, in some cases substantially. While this is true, researchers should be aware that if they decide against an incident user design, they may gain precision at the cost of validity. Screening and identifying incident users in secondary databases, however, requires only a bit more computing time.

In some incident user designs, particularly studies of second-line treatments in chronic conditions, we can only study patients who switch from one drug to another, as very few patients will be treatment naive. Such switching is often not random, but rather is determined by progressing disease and treatment failure or by side effects that may be related to the study outcome; thus, users are not really incident users. However, a fair treatment comparison can be achieved by comparing new switchers to the study drug with new switchers to a comparison drug (Figure 5b). In the study of disease-modifying anti-rheumatic drug (DMARD) safety in patients with rheumatoid arthritis (RA), a common first-line DMARD is methotrexate (MTX). Among all MTX users, it is, therefore, appropriate to compare switchers to one biologic agent with switchers to another biologic agent.15 In both cases, physicians decided that treatment should be changed, which makes the comparison groups similar and preserves the main advantages of the incident user design. If the comparison of interest is MTX versus biologics, then another common first-line medication like chloroquine could be used as the baseline medication, from which patients switch either to MTX or to a biologic. Analogously, stepping-up therapy can be studied by comparing the addition of two different agents to a common baseline medication (Figure 5c).

Figure 5
Dealing with medication switchers. In some chronic conditions, it is impractical to study new users of second-line treatment because these patients are switchers from a first-line agent. The extension of the incident users design (a) is to compare new ...

A common question is, why not conduct a case-control study? Unless additional data will be collected at meaningful additional expense, there is no advantage in nesting a case-control study in a cohort if all data are already collected and stored electronically.16 There are no efficiencies to be gained; information on absolute rates and rate differences must be computed indirectly; and case-control studies are prone to errors in confounder adjustment (see Appendix 1 for a detailed discussion).

EXPOSURE RISK WINDOW AND OUTCOME DEFINITIONS

The exposure risk window is the time period during which the medication puts a patient at risk for a measurable outcome. The period often starts shortly after taking the first tablet and ends soon after taking the last, though notable exceptions include disruptions of the body's physiology by medications that put patients at risk long beyond bioavailability (e.g., immunosuppressant agents, methylating agents) and the study of incident cancer outcomes which will not be causally linked to a newly-used medication until after substantial lag time (Figure 3). Such exceptions aside, the exact form of the exposure risk window generally depends on the pharmacokinetics and pharmacodynamics of the drug as well as the outcome under study.15 Because of the clear temporality in cohort studies, it is fairly easy to vary the exposure risk window and to assess empirically the most likely underlying risk window.17 An as-treated (AT) analysis censors patients as soon as their exposure risk window ends.
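As a simplified illustration of how an exposure risk window might be assembled from claims dispensing dates and the days supply field, consider the following sketch; the 30-day grace period and all names are assumptions for illustration, not values prescribed by the design:

```python
from datetime import date, timedelta

def exposure_end(dispensings, grace_days=30):
    """End of the exposure risk window for an as-treated analysis.
    `dispensings` is a list of (dispensing_date, days_supply) tuples.
    Consecutive dispensings are chained into one treatment episode as long
    as the next fill occurs before the previous supply (plus a grace period)
    runs out; the window ends `grace_days` after the last chained supply
    is exhausted."""
    disp = sorted(dispensings)
    end = disp[0][0] + timedelta(days=disp[0][1])
    for d, supply in disp[1:]:
        if d <= end + timedelta(days=grace_days):
            end = max(end, d + timedelta(days=supply))
        else:
            break  # gap too long: treatment considered discontinued
    return end + timedelta(days=grace_days)
```

Varying `grace_days` and the trailing extension is one simple way to implement the empirical risk-window assessment mentioned above.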

This paper is written under the assumption that a Sentinel System has raised a signal based on a specific drug-outcome pair definition. At the point of signal evaluation, investigators might consider broadening or narrowing the outcome definition in a way compatible with the triggering association and current medical knowledge to gain a better understanding of the underlying causality. Such changes in outcomes are easily established in cohort studies.

Depending on the availability and results of prior validation studies, it might become necessary to validate all or a sample of outcomes.18 Case-control studies would be equally affected by the resulting delay of such validation.

SUBGROUP ANALYSES AND TREATMENT EFFECT HETEROGENEITY

Subgroup analyses enable better characterization of a hypothesized drug-event association. In a cohort study, it is simple to predefine multiple patient subgroups based on their baseline characteristics. The resulting analysis is straightforward, although there is no clear guidance on the issue of multiple testing.19,20 Patient subgroups that should be considered include duration of drug use and dose categories of the study drug.

BALANCING PATIENT CHARACTERISTICS

Confounding is a formidable threat to validity in non-randomized studies of treatment effects. A litany of options for reducing confounding is available to epidemiologists.21,22 Propensity score (PS) matching, however, has emerged as an expeditious and effective tool for adjusting for large numbers of confounders, even if outcomes are infrequent.

A PS is the estimated probability of starting medication A versus starting medication B, conditional on pretreatment patient characteristics. Such prediction of treatment choice, based on preexisting patient characteristics, fits the structure of the proposed incident user cohort design. PSs are known to balance large numbers of covariates in an efficient way even if the study outcome is rare, which fits the anticipated situation of most drug safety signals raised by a Sentinel System. Estimating the PS using logistic regression is mechanistically uncomplicated. Strategies for variable selection are well described,23 and potential confounders can be identified empirically in the study data.24 Macros for 1:1 greedy matching of patients who share the same estimated score but who received different treatments (Figure 6a) are available and perform well.25 Such matching will exclude patients in the extreme PS ranges where there is little clinical ambivalence in treatment choice (Figure 6b). These tails of the PS distribution often harbor extreme patient scenarios that are not useful for the majority in clinical practice.9,26,27
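The matching step itself can be sketched as follows, assuming PSs have already been estimated (e.g., by logistic regression). This is a minimal nearest-neighbor variant of greedy matching with a caliper, written for illustration; it is not the published macro cited above:

```python
def greedy_match(treated, comparison, caliper=0.05):
    """1:1 greedy nearest-neighbor matching on the propensity score.
    `treated` and `comparison` are dicts {patient_id: ps}. Treated patients
    are processed from highest to lowest score; each takes the closest
    still-unmatched comparison patient within the caliper, if any. Patients
    in score regions with no counterpart (the non-overlapping tails of the
    PS distributions) simply go unmatched."""
    pairs = []
    available = dict(comparison)
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs
```

When the PS distributions separate almost completely, as in the Vytorin example below, such a routine returns few or no pairs, which is exactly the diagnostic signal discussed in the following paragraphs.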

Figure 6
(a) Achieving balanced cohorts. Matching on the estimated propensity for treatment initiation based on observed patient characteristics will lead to substantially improved balance between treatment groups. Here, propensity score distributions [0,1] are ...

One unjustly negative opinion of PS matching holds that if the treatment decision process can be modeled well with observed patient characteristics, a resulting PS will lead to substantial or even full separation of treated and untreated patients.28 This means that for patients initiated on a study drug, very few patients initiated on a comparison drug could be identified who had the same PS. This would leave few patients for analysis. In other words, treatment choice would be almost deterministic; little randomness would be left in the prescribing decision that could be exploited for inference about the drug effect.

Consider an example of such a situation, a comparison of combination ezetimibe and simvastatin (Vytorin) versus simvastatin alone. Assume that the health plan that provides the study data covers Vytorin only if LDL and HDL levels have crossed certain thresholds: every patient below those thresholds will use simvastatin alone. The LDL and HDL levels therefore become strong determinants of treatment choice, and including them in the PS estimation will lead to substantial if not complete separation of the PS distributions of the two treatment groups.

PS matching, therefore, serves as an important diagnostic. If situations occur where no matches can be found, it means that the specific comparison cannot be made validly in the study population. This is not a limitation of the method, but rather a very insightful description of a limitation inherent in the study population. The corresponding effect estimates from conventional multivariate outcome models will have substantial imprecision, reflecting the fact that few patients contribute to the estimation in such situations, despite a large study size. Investigators may want to reconsider the comparison agent and choose a more comparable drug or use another study population where there is less treatment separation in clinical practice.

In summary, PS matching embedded in an incident user cohort design is an effective covariate balancing tool that is robust against investigator error.

STATISTICAL ANALYSIS

In addition to the validity gained by multivariate PS matching, matched incident user cohort studies can be analyzed as easily as randomized trials. Since covariate adjustment is already achieved by matching, simple 2 × 2 tables can be constructed; risk differences and risk ratios can be computed expeditiously with their 95% confidence intervals, and Kaplan–Meier plots and log-rank tests can be produced. The easy computation of multivariate adjusted additive effect measures (risk differences, numbers needed to treat) yields valuable metrics for examining the balance of benefits and risks. Such metrics consider the baseline risk of each outcome, which may vary considerably between intended and unintended effects.
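Under this design the basic effect measures reduce to textbook 2 × 2 computations, sketched here with Wald-type confidence intervals (a simplification for illustration; matched-pair variance estimators may be preferred in practice):

```python
from math import sqrt, log, exp

def risk_estimates(a, n1, b, n0):
    """Risk difference and risk ratio with Wald-type 95% CIs from a 2x2
    table: `a` events among `n1` exposed patients and `b` events among
    `n0` comparison patients."""
    r1, r0 = a / n1, b / n0
    rd = r1 - r0
    se_rd = sqrt(r1 * (1 - r1) / n1 + r0 * (1 - r0) / n0)
    rr = r1 / r0
    se_log_rr = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    return {
        "rd": rd,
        "rd_ci": (rd - 1.96 * se_rd, rd + 1.96 * se_rd),
        "rr": rr,
        "rr_ci": (exp(log(rr) - 1.96 * se_log_rr),
                  exp(log(rr) + 1.96 * se_log_rr)),
    }
```

The risk difference output directly supports the additive benefit/risk metrics mentioned above (e.g., its reciprocal gives a number needed to harm).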

Standard analyses will include a cross-tabulation of all baseline characteristics by drug exposure, which will make transparent the extent to which covariate balancing by PS matching was achieved. If imbalances persist, further population restrictions should be considered.11 Displaying goodness-of-fit statistics for the PS model and duration of follow-up for each treatment group completes a first set of analyses.
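Although the text above calls only for cross-tabulating baseline characteristics, one common way to quantify residual imbalance in such a table is the standardized difference; a sketch for binary covariates follows (the 0.1 threshold is a widely used convention, not a rule from this paper):

```python
from math import sqrt

def std_diff_binary(p1, p0):
    """Standardized difference for a binary covariate with prevalence `p1`
    among exposed and `p0` among comparison patients; absolute values above
    ~0.1 are often read as meaningful residual imbalance after matching."""
    pooled = (p1 * (1 - p1) + p0 * (1 - p0)) / 2
    return (p1 - p0) / sqrt(pooled) if pooled > 0 else 0.0
```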

SENSITIVITY ANALYSES

The proposed analyses flow from the study design and are sufficient to provide robust effect estimates quickly. The robustness of results should be measured by applying customary sensitivity analyses, as illustrated in Figure 7.

Figure 7
Basic design sensitivity analyses. Several sensitivity analyses are recommended to explore the robustness of results, considering limitations inherent in longitudinal healthcare databases

When using retrospective databases, one cannot contact patients and ask when they began using a drug for the first time. Therefore, incident users are identified empirically by a drug dispensing that was not preceded by a dispensing of the same drug for a defined time period, or washout period. This washout period is identical for all patients. A typical length is 6 months. In sensitivity analyses, this interval can be extended to 9 and 12 months. Increasing the length of the washout increases the certainty that patients are truly incident users, but it also reduces the number of patients eligible for the study, and thus reduces precision.
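The empirical new-user definition and the washout sensitivity analysis described above can be sketched as follows; 183, 274, and 365 days approximate 6, 9, and 12 months, and all names and the data layout are illustrative:

```python
from datetime import date, timedelta

def is_incident_user(enroll_start, dispensing_dates, index_date, washout_days):
    """Empirical new-user check: the index dispensing must be preceded by
    `washout_days` of observed enrollment with no dispensing of the same drug."""
    lookback_start = index_date - timedelta(days=washout_days)
    if lookback_start < enroll_start:
        return False  # not enough enrollment to verify the washout
    return not any(lookback_start <= d < index_date for d in dispensing_dates)

def eligible_counts(patients, washouts=(183, 274, 365)):
    """Sensitivity analysis: count patients who remain incident users as the
    washout is extended from ~6 to ~9 and ~12 months. `patients` is a list of
    (enroll_start, prior_dispensing_dates, index_date) tuples."""
    return {w: sum(is_incident_user(e, disp, idx, w) for e, disp, idx in patients)
            for w in washouts}
```

As the counts shrink with longer washouts, the gain in certainty about truly incident use is traded directly against precision, as noted above.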

As discussed, there is often uncertainty about the right definition of the exposure risk window. This is further complicated in claims data, since the discontinuation date is imputed using the days supply field of the last dispensing. Varying the exposure risk window is therefore insightful as well as easy to accomplish in cohort studies.

Another set of sensitivity analyses concerns the potential for informative censoring. Patients change or discontinue treatment because they lack a treatment effect or experience early signs of a side effect (Figure 8). The stronger such non-adherence is associated with the outcome the more an as-treated (AT) analysis, which censors at the point of discontinuation, will be biased. A cumulative risk (CR) analysis follows all patients for a fixed time period, carrying forward the initial exposure status and disregarding any changes in treatment status over time (Figure 7). Because this analysis disregards informative non-adherence, it will not suffer bias as a consequence of censoring, but it will suffer bias as a consequence of exposure misclassification. Such misclassification increases with a longer follow-up period and a shorter average time to discontinuation. In most cases, though not all, such misclassification will bias effects towards the null, similar to intention-to-treat analyses in randomized trials.29 Viewed separately, the AT and CR analyses trade biases, but together they give a range of plausible effect estimates.
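The difference between the two follow-up conventions can be sketched in a few lines; the 365-day fixed period for the CR analysis is an illustrative assumption:

```python
from datetime import date, timedelta

def follow_up_end(initiation, discontinuation, admin_end, mode, fixed_days=365):
    """Follow-up end under the two analytic strategies discussed above:
    'AT' (as-treated) censors when treatment is discontinued, while
    'CR' (cumulative risk) follows everyone for a fixed period from
    initiation, ignoring treatment changes. `admin_end` is the
    administrative end of data (disenrollment or study end)."""
    if mode == "AT":
        return min(discontinuation, admin_end)
    if mode == "CR":
        return min(initiation + timedelta(days=fixed_days), admin_end)
    raise ValueError(mode)
```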

Figure 8
Time-varying exposure. Patients may discontinue their study medication and either switch to the comparison drug or remain on no treatment. Such exposure changes are often informed by treatment failure or perceived side effects

Adjusting for non-adherence in an analysis of a drug effect requires information about the predictors of treatment discontinuation,30 which is often not available with sufficient accuracy in secondary data.

Independent of the design, the sensitivity of findings toward residual confounding may be assessed by applying a set of predefined analyses, including the rule-out approach and array approach described elsewhere.22 Excel spreadsheets expedite this task and produce graphical illustrations of the effect estimate's sensitivity to possible residual confounding (Figure 9). A flowchart summarizing this basic design for expedited safety signal refutation is included in Appendix 2.
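A minimal sketch of this kind of external adjustment for a single unmeasured binary confounder uses the standard bias-factor formula for relative risks; spreadsheet-style array analyses iterate this calculation over grids of the three sensitivity parameters:

```python
def adjusted_rr(observed_rr, rr_cd, p_c1, p_c0):
    """Rule-out/array-style external adjustment: given an apparent
    (observed) relative risk, the confounder-disease association `rr_cd`,
    and the confounder prevalence among exposed (`p_c1`) and comparison
    (`p_c0`) patients, return the relative risk adjusted for that single
    unmeasured binary confounder."""
    bias = (p_c1 * (rr_cd - 1) + 1) / (p_c0 * (rr_cd - 1) + 1)
    return observed_rr / bias
```

For example, an observed RR of 2.0 is fully robust to a confounder that is either unassociated with the outcome (`rr_cd` = 1) or equally prevalent in both groups (`p_c1` = `p_c0`).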

Figure 9
Sensitivity analysis of residual confounding. Residual confounding is easiest explored in the array approach or the rule out approach as a function of several parameters that may be informed by empirically derived values to varying degrees: ARR = apparent ...

DISCUSSION

This paper proposes a basic study design based on the incident user cohort design for expedited signal evaluation in longitudinal healthcare databases. This proposal will not resolve all methodological issues, nor will it fit all study questions arising within the framework of a Sentinel System. It should rather be seen as a guideline that will fit the majority of study questions and serve as a starting point for adaptations to specific pharmacoepidemiologic study questions. This paper focuses on the evaluation of prescription drugs, but given the availability of adequate data sources, this design proposal is equally applicable to other medical products.

One way to implement rapid safety signal refutation analyses is to start the study design process with the proposed approach in mind. As adaptations become necessary because of data limitations or specific concerns related to confounding control or other anticipated biases, these changes can be made explicitly, and their implications for validity can be discussed. Such an approach will expedite and structure the process of study development and highlight specific assumptions. This is particularly valuable in a Sentinel System, where signals are by definition preliminary and evaluation of signals is time-critical so that consumers can be informed about existing safety issues or, equally important, their likely absence. This proposal is not dependent on any specific implementation of a Sentinel System built on healthcare databases, but for the reasons given above, monitoring will likely focus on incident drug users.

If possible, the expedited primary analysis should be supplemented by other approaches that rest on different assumptions for valid inference, including instrumental variable analyses31 and case-crossover designs.8 Comparing evidence from a variety of data sources and analysis types may substantially strengthen the evidence base for regulatory decision makers.32

Readers may want to think of the last few pharmacoepidemiologic studies they have performed and mentally begin the design process from scratch using the proposed approach. They will likely realize that most studies can be designed following this approach and adapted a bit here or there to best fit their study questions and accommodate external constraints. Thinking explicitly about such adaptations will draw attention to potential trade-offs between validity and precision.

KEY POINTS

  • Signal generation in an active drug safety monitoring program needs to be followed by expedited signal evaluation.
  • The proposed basic study design builds on an incident user design with multivariate propensity score matching.
  • The basic study design can be implemented expeditiously, is operationally robust, reduces investigator error, and will fit the majority of study situations.
  • The design is a guidance and should be adapted as necessary.

ACKNOWLEDGEMENTS

Funded by grants from the National Library of Medicine (RO1-LM010213; RC1-LM010351) and the National Center for Research Resources (RC1-RR028231). Dr Schneeweiss is principal investigator of the Brigham and Women's Hospital DEcIDE Center on Comparative Effectiveness Research funded by AHRQ and of the Harvard–Brigham Drug Safety and Risk Management Research Center contracted by FDA. Dr Schneeweiss is an investigator of the Mini-Sentinel project funded by FDA (PI: Dr Richard Platt); however, the opinions expressed here and any errors are his own. Opinions expressed here are only those of the author and not necessarily those of the agencies. Dr Schneeweiss is a paid member of scientific advisory boards of HealthCore and ii4sm and has received consulting fees from WHISCON, RTI Health Solutions, The Lewin Group, and HealthCore.

APPENDIX 1: ON CASE CONTROL STUDIES IN LONGITUDINAL HEALTHCARE DATABASES

Well-understood strengths and limitations of case-control studies

Case-control sampling nested within a cohort study will in expectation produce the same rate ratio estimates as a full cohort analysis.33,34 So why not perform case-control studies in large healthcare databases? It is well understood that case-control studies are not able to estimate absolute incidence rates and rate differences unless the sampling fractions of cases and controls are known, which means the underlying cohort needs to be enumerated.35 Rate differences are an important metric for benefit/risk assessment and population impact. Once the underlying cohort needs to be enumerated anyway, why not implement a cohort study in the first place?

Another practical limitation of case-control studies is that, unless multiple case-control studies are implemented, it is not possible to study multiple outcomes, something that is often of interest in drug safety research to establish risk profiles. In contrast, cohort studies can study multiple outcomes as well as multiple exposures.36

The main limitation of cohort studies, the large size required to study rare events, is less of an issue in large databases. If it were an issue in a specific study, case-control studies embedded in the same database would suffer from the same limitation.

Unless additional information, such as biomarkers, detailed diagnostic information, or patient survey data, is collected at additional cost to enrich the longitudinal healthcare database, there is no reason to embed a case-control study in an already existing cohort for which all data are already collected. All biases that may occur in the underlying cohort study will also affect the case-control study nested within it.

Chronology of confounder assessment in case-control studies: a cautionary note

In addition to the lack of an advantage of case-control studies in a database setting, they often raise issues concerning the accurate chronological sequence of confounder assessment in longitudinal claims or electronic medical record (EMR) data. Figure A1 depicts three case or control patients; for this example, their case status is immaterial. Drug exposure (blue box) is longitudinal, not a one-point exposure, and may be episodic. Two typical choices of covariate assessment periods are indicated in black shade. A covariate assessment period preceding the first exposure, as shown for Patient 1, is equivalent to the incident user cohort design proposed here. However, the window immediately preceding the case/control index date, a common choice illustrated in Patients 2 and 3, is sometimes exposed to the drug (Patient 2) or post-treatment (Patient 3); covariates assessed in such windows are thus subject to the drug effect and possibly on the causal pathway. Think of a case-control study on the effects of high-dose rofecoxib on myocardial infarction (MI). Covariate adjustment should include hypertension, an independent risk factor for MI. However, if the assessment is based on the time period just before the case/control index date, hypertension could be the consequence of rofecoxib use, and adjusting for it would bias results toward the null. This can be avoided by placing the covariate assessment period before the initial drug use (Patient 1), making the case-control study no different from an incident user cohort study except for the disadvantages inherent to case-control analyses already discussed.

Figure A1
Trouble with covariate assessment in case-control studies embedded in longitudinal claims data

Lack of operational efficiencies of case-control studies in longitudinal healthcare data

In addition to the concern about the chronology of confounder identification, there is little or no operational gain in choosing a case-control approach. As mentioned, the entire cohort that gives rise to the cases will have to be enumerated in order to estimate incidence rates. Once a cohort of incident users that gives rise to the cases is established, it is operationally easy to identify exposure and covariates for all patients. In a case-control setting, computer code has to be written to identify covariates for all cases and the selected controls; there is no difference in programming when extending this algorithm to all patients in the underlying cohort, and the additional computing time is usually negligible compared to the programming time. Time-varying exposures can be assessed in both designs, but doing so complicates the analysis and requires assuming that treatment changes are independent of risk factors for the outcome. If this assumption does not hold, then other methods like g-computation37 or marginal structural models30 are necessary, as discussed in the paper. Again, there is no advantage to case-control studies.

APPENDIX 2: A FLOW CHART OF THE PROPOSED BASIC INCIDENT USER COHORT DESIGN FOR EXPEDITED SIGNAL EVALUATION

[Flow chart, part 1: nihms-202990-f0010.jpg]
[Flow chart, part 2: nihms-202990-f0011.jpg]

Footnotes

This manuscript was presented at a meeting convened by the Engelberg Center at The Brookings Institute, in collaboration with the Centers for Education and Research on Therapeutics (CERTs) on ‘Methods, Tools, and Scientific Operations for the Sentinel System’ chaired by Dr Rich Platt and Dr Mark McClellan, Washington, DC, 7 May 2009.

REFERENCES

1. Platt R, Wilson M, Chan KA, Benner JS, Marchibroda J, McClellan M. The new Sentinel Network–improving the evidence of medical-product safety. N Engl J Med. 2009;361:645–647. [PubMed]
2. Avorn J, Schneeweiss S. Managing drug-risk information–what to do with all those new numbers. N Engl J Med. 2009;361:647–649. [PubMed]
3. US Food and Drug Administration FDA's Sentinel Initiative. http://www.fda.gov/Safety/FDAsSentinelInitiative/default.htm.
4. Walker AM. Orthogonal predictions: follow-up questions for suggestive data. Pharmacoepidemiol Drug Saf. 2010 in press. [PubMed]
5. Schneeweiss S, Glynn RJ, Tsai EH, Avorn J, Solomon DH. Adjusting for unmeasured confounders in pharmacoepidemiologic claims data using external information: the example of COX2 inhibitors and myocardial infarction. Epidemiology. 2005;16:17–24. [PubMed]
6. Seeger JD, Walker AM, Williams PL, Saperia GM, Sacks FM. A propensity score-matched cohort study of the effect of statins, mainly fluvastatin, on the occurrence of acute myocardial infarction. Am J Cardiol. 2003;92:1447–1451. [PubMed]
7. Little RJ, Rubin DB. Causal effects in clinical and epidemiological studies via potential outcomes: concepts and analytical approaches. Annu Rev Public Health. 2000;21:121–145. [PubMed]
8. Maclure M. The case-crossover design: a method for studying transient effects on the risk of acute events. Am J Epidemiol. 1991;133:144–153. [PubMed]
9. Schneeweiss S, Avorn J. A review of uses of health care utilization databases for epidemiologic research on therapeutics. J Clin Epidemiol. 2005;58:323–337. [PubMed]
10. Ray WA. Evaluating medication effects outside of clinical trials: new-user designs. Am J Epidemiol. 2003;158:915–920. [PubMed]
11. Schneeweiss S, Patrick AR, Sturmer T, et al. Increasing levels of restriction in pharmacoepidemiologic database studies of elderly and comparison with randomized trial results. Med care. 2007;45:S131–S142. [PMC free article] [PubMed]
12. Schisterman EF, Cole SR, Platt RW. Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology. 2009;20:488–495. [PMC free article] [PubMed]
13. Suissa S. Immortal time bias in pharmaco-epidemiology. Am J Epidemiol. 2008;167:492–499. [PubMed]
14. Moride Y, Abenhaim L. Evidence of the depletion of susceptibles effect in non-experimental pharmacoepidemiologic research. J Clin Epidemiol. 1994;47:731–737. [PubMed]
15. Solomon DH, Lunt M, Schneeweiss S. The risk of infection associated with tumor necrosis factor α antagonists: making sense of epidemiologic evidence. Arthritis Rheum. 2008;58:919–928. [PubMed]
16. Wacholder S. Practical considerations in choosing between the case-cohort and nested case-control designs. Epidemiology. 1991;2:155–158. [PubMed]
17. McMahon AD, Evans JM, McGilchrist MM, McDevitt DG, MacDonald TM. Drug exposure risk windows and unexposed comparator groups for cohort studies in pharmacoepidemiology. Pharmacoepidemiol Drug Saf. 1998;7:275–280. [PubMed]
18. Kiyota Y, Schneeweiss S, Glynn RJ, Cannuscio CC, Avorn J, Solomon DH. Accuracy of Medicare claims-based diagnosis of acute myocardial infarction: estimating positive predictive value on the basis of review of hospital records. Am Heart J. 2004;148:99–104. [PubMed]
19. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990;1:43–46. [PubMed]
20. Lagakos SW. The challenge of subgroup analyses–reporting without distorting. N Engl J Med. 2006;354:1667–1669. [PubMed]
21. McMahon AD. Approaches to combat with confounding by indication in observational studies of intended drug effects. Pharmacoepidemiol Drug Saf. 2003;12:551–558. [PubMed]
22. Schneeweiss S. Sensitivity analysis and external adjustment for unmeasured confounders in epidemiologic database studies of therapeutics. Pharmacoepidemiol Drug Saf. 2006;15:291–303. [PubMed]
23. Brookhart MA, Schneeweiss S, Rothman KJ, Glynn RJ, Avorn J, Sturmer T. Variable selection for propensity score models. Am J Epidemiol. 2006;163:1149–1156. [PMC free article] [PubMed]
24. Schneeweiss S, Rassen JA, Glynn RJ, Avorn J, Mogun H, Brookhart MA. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology. 2009;20:512–522. [PMC free article] [PubMed]
25. Austin PC. A critical appraisal of propensity-score matching in the medical literature between 1996 and 2003. Stat Med. 2008;27:2037–2049. [PubMed]
26. Kurth T, Walker AM, Glynn RJ, et al. Results of multivariable logistic regression, propensity matching, propensity adjustment, and propensity-based weighting under conditions of nonuniform effect. Am J Epidemiol. 2006;163:262–270. [PubMed]
27. Lunt M, Solomon D, Rothman K, et al. Different methods of balancing covariates leading to different effect estimates in the presence of effect modification. Am J Epidemiol. 2009;169:909–917. [PMC free article] [PubMed]
28. Sekula P, Caputo A, Dunant A, et al. An application of propensity score methods to estimate the treatment effect of corticosteroids in patients with severe cutaneous adverse reactions. Pharmacoepidemiol Drug Saf. 2010;19:10–18. [PubMed]
29. Piantadosi S. Clinical Trials—A Methodologic Perspective. Wiley; New York: 1997.
30. Hernan MA, Alonso A, Logan R, et al. Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease. Epidemiology. 2008;19:766–779. [PMC free article] [PubMed]
31. Brookhart MA, Wang PS, Solomon DH, Schneeweiss S. Evaluating short-term drug effects using a physician-specific prescribing preference as an instrumental variable. Epidemiology. 2006;17:268–275. [PMC free article] [PubMed]
32. Schneeweiss S, Setoguchi S, Brookhart A, Dormuth C, Wang PS. Risk of death associated with the use of conventional versus atypical antipsychotic drugs among elderly patients. CMAJ. 2007;176:627–632. [Erratum in: CMAJ. 2007;176(11):1613] [PubMed]
33. Miettinen O. Estimability and estimation in case-referent studies. Am J Epidemiol. 1976;103:226–235. [PubMed]
34. Wacholder S. The case-control study as data missing by design: estimating risk differences. Epidemiology. 1996;7:144–150. [PubMed]
35. Rothman KJ. Modern Epidemiology. 3rd Edition Lippincott Williams & Wilkins; Philadelphia: 2008.
36. Roumie CL, Choma NN, Kaltenbach L, Mitchel EF, Jr, Arbogast PG, Griffin MR. Non-aspirin NSAIDs, cyclooxygenase-2 inhibitors and risk for cardiovascular events-stroke, acute myocardial infarction, and death from coronary heart disease. Pharmacoepidemiol Drug Saf. 2009;18:1053–1063. [PubMed]
37. Robins JM. Correcting for non-compliance in randomized trials using structural nested mean models. Commun Stat Theory Methods. 1994;23:2379–2412.