Gliklich RE, Dreyer NA, Leavy MB, editors. Registries for Evaluating Patient Outcomes: A User's Guide [Internet]. 3rd edition. Rockville (MD): Agency for Healthcare Research and Quality (US); 2014 Apr.


13. Analysis, Interpretation, and Reporting of Registry Data To Evaluate Outcomes

1. Introduction

Registries have the potential to produce databases that are an important source of information regarding health care patterns, decisionmaking, and delivery, as well as the subsequent association of these factors with patient outcomes. Registries, for example, can provide valuable insight into the safety and/or effectiveness of an intervention or the efficiency, timeliness, quality, and patient centeredness of a health care system. The utility and applicability of registry data rely heavily on the quality of the data analysis plan and its users' ability to interpret the results. Analysis and interpretation of registry data begin with a series of core questions:

  • Study purpose: Were the objectives/hypotheses predefined or post hoc?
  • Patient population: Who was studied?
  • Data quality: How were the data collected, reviewed, and verified?
  • Data completeness: How were missing data handled?
  • Data analysis: How were the analyses chosen and performed?

While registry data present many opportunities for meaningful analysis, there are inherent challenges to making appropriate inferences. A principal concern with registries is that of making inferences without regard to the quality of data, since quality standards have not been previously well established or consistently reported. In some registries, comparison groups may not be robustly defined, and information provided about the external validity of a registry sample is often limited. These factors must be considered when making inferences based on analyses of registry data.1

This chapter explains how analysis plans are constructed for registries, how they differ depending on the registry's purpose, and how registry design and conduct can affect analysis and interpretation. The analytic techniques generally used for registry data are presented, addressing how conclusions may be drawn from the data and what caveats are appropriate. The chapter also describes how timelines for data analysis can be built in at registry inception and how to determine when the registry data are complete enough to begin analysis.

2. Hypotheses and Purposes of the Registry

While it may be relatively straightforward to develop hypotheses for registries intended to evaluate safety and effectiveness, not all registries have specific, testable, or simple hypotheses. Disease registries commonly have aims that are primarily descriptive, such as describing the typical clinical features of individuals with a disease, variations in phenotype, and the clinical progression of the disease over time (i.e., natural history). These registries play a particularly important role in the study of rare diseases.

In the case of registries where the aim is to study the associations between specific exposures and outcomes, prespecification of the study methodology and presence or absence of a priori hypotheses or research questions may affect the acceptance of results of studies derived from registry data. The many possible scenarios are well illustrated by examples at the theoretical extremes.

On one extreme, a study may evolve out of a clear and explicit prespecified research question and hypothesis. In such a study, there may have been preliminary scientific work that laid the conceptual foundation and plausibility for the proposed study. The investigators fully articulate the objectives and analytic plan before embarking on any analysis. The outcome is clearly defined and the statistical approach documented. Secondary analyses are identified and may be highlighted as hypothesis generating. The investigators have no prior knowledge of analyses in this database that would bias them in the formulation of their study objective. The study is conducted and published regardless of the result. The paper states clearly that the objective and hypothesis were prespecified. For registries intended to support national coverage determinations with data collection as a condition of coverage, the specific coverage decision question may be specified a priori as the research question in lieu of a formal hypothesis.

At the other extreme, a study may evolve out of an unexpected observation in a database in the course of doing analyses for another purpose. A study could also evolve from a concerted effort to discover associations—for example, as part of a large effort to understand disease causation. In such a study, the foundation for the study is developed post hoc, or after making the observation. Because of the way in which the observation was found, the rationale for the study is developed retrospectively. The paper publishing this study's results does not clearly state that the objective and hypothesis were not prespecified.

Of course, many examples fall between these extremes. An investigator may suspect an association for many variables but find the relationship for only one of them. The investigator decides to pursue only the positive finding and develop a rationale for a study or grant. The association was sought, but it was sought along with associations for many other variables and outcomes.

Thus, while there is substantial debate about the importance of prespecified hypotheses,2, 3 there is general agreement that it is informative to reveal how the study was developed. Transparency in methods is needed so that readers may know whether these studies are the result of hypotheses developed independently of the study database, or whether the question and analyses evolved from experience with the database and multiple iterations of exploratory analyses. Both types of studies have value.

3. Patient Population

The purpose of a registry is to provide information about a specific patient population to which all study results are meant to apply. To determine how well the study results apply to the target population, five populations, each of which is a subset of the preceding population, need to be considered, along with how well each population represents the preceding population. These five subpopulations are shown in Figure 13–1.

Figure 13–1 depicts the five populations as a series of boxes connected by arrows:

  • Target population: the population to which the study findings are meant to apply.
  • Accessible population: the subset of the target population who are specifically defined and available for study.
  • Intended population: the members of the accessible population who are sampled according to the registry design.
  • Actual population: the people who actually participate in the registry.
  • Analytic population: the patients who meet the criteria for analysis.

Figure 13–1. Patient populations.

The target population is defined by the study's purpose. To assess the appropriateness of the target population, one must ask the question, “Is this really the population that we need to know about?” For example, the target population for a registry of oral contraceptive users would include women of childbearing age who could become pregnant and are seeking to prevent pregnancy. Studies often miss important segments of the population in an effort to make the study population more homogeneous. For example, a study to assess a medical device used to treat patients for cardiac arrhythmias that defines only men as its target population would be less informative than it could be, because the device is designed for use in both men and women.

The accessible population is defined using inclusion criteria and exclusion criteria. The inclusion criteria define the population that will be used for the study and generally include geographic (e.g., hospitals or clinics in the New England region), demographic, disease-specific, and temporal (e.g., specification of the included dates of hospital or clinic admission) criteria, among others. Conversely, the exclusion criteria seek to eliminate specific patients from the study and may be driven by an effort to assure an adequate-sized population of interest for analysis. Similar goals may be served by inclusion criteria, since it is often difficult to separate inclusion from exclusion criteria (e.g., inclusion of adults aged 18 and older vs. exclusion of children younger than 18).

The accessible population may lose representativeness to the extent that convenience plays a part in its determination, because people who are easy to enroll in the registry may differ in some critical respects from the population at large. Similarly, to the extent that homogeneity plays a part in determining the accessible population, it is less likely to be representative of the entire population because certain population subgroups will be excluded.

Factors to be considered in assessing the accessible population's representativeness of the target population include all the inclusion and exclusion criteria mentioned above. One method of evaluating representativeness is to describe the demographics and other key descriptors of the registry study population and to contrast its composition with patients with similar characteristics who are identified from an external database, such as might be obtained from health insurers, health maintenance organizations, or the U.S. Surveillance Epidemiology and End Results (SEER) cancer registries.4

However, simple numerical/statistical representativeness is not the main issue. Representativeness should be evaluated in the context of the purpose of the study—that is, whether the study results can reasonably be generalized or extrapolated to other populations of interest outside of those included in the accessible population. (See Case Example 26.) For example, suppose that the purpose of the study is to assess the effectiveness of a drug in U.S. residents with diabetes. If the accessible population includes no children, then the study results may not apply to children, since children often metabolize drugs very differently from adults.

On the other hand, consider the possibility that the accessible population is generally drawn from a geographically isolated region, whereas the target population may be the entire United States or the world. In that case, the accessible population is not geographically representative of the target population, but that circumstance would have little or no impact on the representativeness of the study findings to the target population if the action of the drug (or its delivery) does not vary geographically (which we would generally expect to be the case, unless pertinent racial/genetic or dietary factors were involved). Therefore, in this example, the lack of geographical representativeness would not affect interpretation of results.

The reason for using an intended population rather than the whole accessible population for the study is simply a matter of convenience and practicality. The issues to consider in assessing how well the intended population represents the accessible population are similar to those for assessing how well the accessible population represents the target population. The main difference is that the intended population may be specified by a sampling scheme, which often tries to strike a balance among representativeness, convenience, and budget. If the intended population is a random sample of the accessible population, it may be reasonably assumed that it will represent the accessible population; however, for many, if not most, registries, a complete roster of the accessible population does not exist. More commonly, the intended population is compared with the accessible population in terms of pertinent variables.

To the extent that convenience or other design (e.g., stratified random sample) is used to choose the intended population, one must consider the extent to which the sampling of the accessible population—by means other than random sampling—has decreased the representativeness of the intended population. For example, suppose that, for the sake of convenience, only patients who attend clinic on Mondays are included in the study. If patients who attend clinic on Mondays are similar in every relevant respect to other patients, that may not constitute a limitation. But if Monday patients are substantially different from patients who attend clinic on other days of the week (e.g., well-baby clinics are held on Mondays) and if those differences affect the outcome that is being studied (e.g., proportion of baby visits for “well babies”), then that sampling strategy would substantially alter the interpretations from the registry and would be considered a meaningful limitation.

The extent to which the actual population is not fully representative of the intended population is generally a matter of real-world issues that prevent the initial inclusion of study subjects or adequate followup. In assessing representativeness, one must consider the likely underlying factors that caused those subjects not to be included in the analysis of study results and how that might affect the interpretations from the registry. For example, consider a study of a newly introduced medication, such as an anti-inflammatory drug that is thought to be as effective as other products and to have fewer side effects but that is more costly. Inclusion in the actual population may be influenced by prescribing practices governed by a health insurer. For example, if a new drug is approved for reimbursement only for patients who have “failed” treatment with other anti-inflammatory products, the resulting actual population will be systematically different from the target population of potential anti-inflammatory drug users. The actual population may be refractory to treatment or may have more comorbidities (e.g., gastrointestinal problems), and may be specifically selected for treatment beyond the intention of the study-specified inclusion criteria. In fact, registries of newly introduced drugs and devices may often include patients who are different from the ultimate target population.

Finally, the analytic population includes all those patients who meet the criteria for analysis. In some cases, it becomes apparent that there are too few cases of a particular type, or too few patients with certain attributes, such that these subgroups do not contribute enough information for meaningful analysis. Patients may also be excluded from the analysis population because their conditions are so rare that to include them could be considered a breach of patient confidentiality. Analytic populations are also created to meet specific needs. For example, an investigator may request a data set that will be used to analyze a subset of the registry population, such as those who had a specific treatment or condition.

A related issue is that of early adopters,5 in which practitioners who are quick to use a novel health care intervention or therapy differ from those who use it only once it is well established. For example, a registry of the use of a new surgical technique may initially enroll largely academic physicians and only much later enroll community-based surgeons. If the outcomes of the technique differ between the academic surgeons (early adopters) and community-based surgeons (later adopters), then the initial results of the registry may not reflect the true effectiveness of the technique in widespread use. Patients selected for treatment with a novel therapy may also differ with regard to factors such as severity or duration of disease and prior treatment history, including treatment failures. For example, patients with more severe or late-stage disease who have failed other treatments might be more likely to use a newly approved product that has shown efficacy in treating their condition. Later on, patients with less severe disease may start using the product.

Finally, patients who are included in the analytic population for a given analysis of registry data may also be subject to selection or inclusion criteria (admissibility criteria), and these may affect interpretation of the resulting analyses. (See Chapter 18.) For example, if only patients who remain enrolled and attend followup visits through 2 years after study initiation are included in analysis of adherence to therapy, it is possible or likely that adherence among those who remain enrolled in the study and have multiple followup visits will be different from adherence among those who do not. Differential loss to followup, whereby patients who are lost may be more likely to experience adverse outcomes, such as mortality, than those who remain under observation, is a related issue that may lead to biased results. (See Chapter 3.)

4. Data Quality Issues

In addition to a full understanding of study design and methodology, analysis of registry events and outcomes requires an assessment of data quality. One must consider whether most or all important covariates were collected, whether the data were complete, and whether the problem of missing data was handled appropriately, as well as whether the data are accurate.

4.1. Collection of All Important Covariates

While registries are generally constructed for a particular purpose or purposes, registry information may be collected for one purpose (e.g., provider performance feedback) and then used for another (e.g., addressing a specific clinical research question). When using an available database for additional purposes, one needs to be sure that all the information necessary to address a specific research question was collected in a manner that is sufficient to answer the question.

For example, suppose the research question addresses the comparative effectiveness of two treatments for a given disease using an existing registry. To be meaningful, the registry should have accurate, well-defined, and complete information, including potential confounding and effect-modifying factors; population characteristics of those with the specified disease; exposures (whether patients received treatment A or B); and patient outcomes of interest. Confounding factors are variables that influence both the exposure (treatment selection) and the outcome in the analyses. These factors can include patient factors (age, gender, race, socioeconomic factors, disease severity, or comorbid illness); provider factors (experience, skills); and system factors (type of care setting, quality of care, or regional effects). While it is not possible to identify all confounding factors in planning a registry, it is desirable to give serious thought to what will be important and how the necessary data can be collected. While effect modification is not a threat to validity, it is important to consider potential effect modifiers for data collection and analysis in order to evaluate whether an association varies within specific subgroups.6 Analysis of registries requires information about such variables so that the confounding covariates can be accounted for, using one of several analytic techniques covered in upcoming sections of this chapter. In addition, as described in Chapter 3, eligibility for entry into the registry may be restricted to individuals within a certain range of values for potential confounding factors in order to reduce the effects of these factors. Such restrictions may also affect the generalizability of the registry.

4.2. Data Completeness

Assuming that a registry has the necessary data elements, the next step is to ensure that the data are complete. Missing data can be a challenge for any registry-based analysis. Missing data include situations in which a variable is directly reported as missing or unavailable, a variable is “nonreported” (i.e., the observation is blank), the reported data are not interpretable, or the value must be set to missing because of data inconsistency or out-of-range results. Before analyzing a registry database, the database should be “cleaned” (discussed in Chapter 11, Section 2.5.), and attempts should be made to obtain as much of the missing data as realistically possible from source documents. Inconsistent data (e.g., a “yes” answer to a question at one point and “no” to the same question at another) and out-of-range data (e.g., a 500-year-old patient) should be corrected when possible. Finally, the degree of data completeness should be summarized for the researcher and eventual consumer of analyses from the registry. Detailed examples of sources of incomplete data are described in Chapter 18.

4.3. Missing Data

The intent of any analysis is to make valid inferences from the data. Missing data can threaten this goal both by reducing the information yield of the study and, in many cases, by introducing bias. A thorough review of types of missing data with examples can be found in Chapter 18. Briefly, the first step is to understand which data are missing. The second step is to understand why the data are missing (e.g., missing item-response or right censoring). Finally, missing data fall into three classic categories of randomness:7

  • Missing completely at random (MCAR): Instances where there are no differences between subjects with missing data and those with complete data. In such random instances, missing data only reduce study power without introducing bias.
  • Missing at random (MAR): Instances where missing data depend on known or observed values but not unmeasured data. In such cases, accounting for these known factors in the analysis will produce unbiased results.
  • Missing not at random (MNAR): Here, missing data depend on events or factors not measured by the researcher and thus potentially introduce bias.

To gain insight into which of the three categories of missing data are in play, one can compare the distribution of observed variables for patients with specific missing data to the distribution of those variables for patients for whom those same data are present.
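
As a minimal sketch of such a comparison, assume a hypothetical pandas extract with illustrative column names (hba1c_followup, age, severity_score); large differences between the two groups argue against MCAR.

```python
# Sketch: compare observed covariates between patients with and without
# a missing value, to get a crude sense of whether MCAR is plausible.
# The file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("registry.csv")  # hypothetical registry extract

# Flag patients whose followup HbA1c is missing.
df["hba1c_missing"] = df["hba1c_followup"].isna()

# Compare the distribution of observed baseline variables across the
# two groups; clear differences suggest the data are not MCAR.
print(df.groupby("hba1c_missing")[["age", "severity_score"]].describe())
```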

While pragmatically it is difficult to determine whether data are MCAR or MAR, there are, nonetheless, several means of managing missing data within an analysis. For example, a complete case strategy limits the analysis to patients with complete information for all variables. This is the default strategy used in many standard analytic packages (e.g., SAS, Cary, NC). A simple deletion of all incomplete observations, however, is not appropriate or efficient in all circumstances, and it may introduce significant bias if the deleted cases are substantively different from the retained, complete cases (i.e., not MCAR). In observational studies with prospective, structured data collection, missing data are not uncommon, and the complete case strategy is inefficient and not generally used. For example, patients with diabetes who were hospitalized because of inadequate glucose control might not return for a scheduled followup visit at which HbA1c was to be measured. Those missing values for HbA1c would probably differ from the measured values because of the reason for which they were missing, and they would be categorized as MNAR. In an example of MAR, the availability of the results of certain tests or measurements may depend on what is covered by patients' health insurance (a known value), since registries do not typically pay for testing. Patients without this particular measurement may still contribute meaningfully to the analysis. In order to include patients with missing data, one of several imputation techniques may be used to estimate the missing data.

Imputation is a common strategy in which values are substituted for missing data using approaches such as unconditional and conditional mean imputation, hot-deck imputation, and expectation maximization, among others.7, 8 For data that are captured at multiple time points or repeated measures, investigators often “carry forward” the last observation. However, such a technique can be problematic if early dropouts occur and a response variable is expected to change over time, or when the effect of the exposure (or treatment) is intermittent. Worst-case imputation is another means of substitution in which investigators test the sensitivity of a finding by substituting a worst-case value for all missing results. While this approach is conservative, it offers a lower bound for an association rather than an accurate assessment. One imputation method that has received significant attention in recent analyses is multiple imputation. Rubin first proposed the idea of imputing more than one value for a missing variable as a means of reflecting the uncertainty around this value.9 The general strategy is to replace each missing value with multiple values drawn from an approximate distribution for the missing values. This produces multiple complete data sets for analysis, from which a single summary finding is estimated.
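
As a rough illustration of that general strategy, the sketch below carries out multiple imputation by hand on simulated data and pools the results with Rubin's rules. It is illustrative only, not a prescribed registry method: it adds only residual noise to the imputed values (proper multiple imputation would also draw the imputation-model parameters from their posterior), and dedicated routines exist in standard statistical packages.

```python
# Sketch of multiple imputation with Rubin's rules, using numpy only.
# Data are simulated; the simple normal imputation model is an assumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x; roughly 20% of y values are missing.
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
miss = rng.random(n) < 0.2
y_obs = np.where(miss, np.nan, y)
obs = ~np.isnan(y_obs)

# Imputation model y ~ x, fitted on complete cases.
beta, intercept = np.polyfit(x[obs], y_obs[obs], 1)
resid_sd = np.std(y_obs[obs] - (intercept + beta * x[obs]), ddof=2)

M = 20  # number of imputations
estimates, variances = [], []
for _ in range(M):
    y_m = y_obs.copy()
    # Draw imputed values from the predictive distribution; the added
    # noise (unlike single mean imputation) propagates uncertainty.
    y_m[~obs] = (intercept + beta * x[~obs]
                 + rng.normal(scale=resid_sd, size=(~obs).sum()))
    # Analysis model on the completed data set: slope of y on x.
    b, a = np.polyfit(x, y_m, 1)
    resid = y_m - (a + b * x)
    se2 = (resid @ resid / (n - 2)) / ((x - x.mean()) @ (x - x.mean()))
    estimates.append(b)
    variances.append(se2)

# Rubin's rules: pool the estimate; total variance combines the
# within-imputation and between-imputation components.
q_bar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / M) * between
print(f"pooled slope = {q_bar:.3f}, pooled SE = {np.sqrt(total_var):.3f}")
```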

There are several issues concerning how prognostic models for decisionmaking can be influenced by data completeness and missing data.10 Burton and Altman reviewed 100 multivariable cancer prognostic models published in seven leading cancer journals in 2002. They found that the proportion of complete cases was reported in only 39 studies, while the percentage missing for important prognostic variables was reported in 52 studies. A comparison of complete cases with incomplete cases was provided in 10 studies, and the methods used to handle missing data were summarized in 32 studies. The most common techniques used for handling missing data in this review were (a) complete case analysis (12 studies), (b) dropping variables with high numbers of missing cases from model consideration (6), and (c) using a simple imputation rule devised by the authors (6). Only one study reported using multiple imputation. The reviewers concluded that there was room for improvement in the reporting and handling of missing data within registry studies.

Readers interested in learning more about methods for handling missing data and the potential for bias are directed to other useful resources by Greenland and Finkle,11 Hernán and colleagues,12 and Lash, Fox, and Fink.13

It is important to keep in mind that the impact of data completeness will differ, depending on the extent of missing data and the intended use of the registry. It may be less problematic with regard to descriptive research than research intended to support decisionmaking. For all registries, it is important to have a strategy for how to identify and handle missing data as well as how to explicitly report on data completeness to facilitate interpretation of study results. For more information on other specific types of missing data issues (e.g., left truncation), please see Chapter 18.

4.4. Data Accuracy and Validation

While observational registry studies are usually not required to meet U.S. Food and Drug Administration and International Conference on Harmonisation standards of Good Clinical Practice developed for clinical trials, sponsors and contract research organizations that conduct registry studies are responsible for ensuring the accuracy of study data to the extent possible. Detailed plans for site monitoring, quality assurance, and data verification should be developed at the beginning of a study and adhered to throughout its lifespan. Chapter 11 discusses in detail approaches to data collection and quality assurance, including data management, site monitoring, and source data verification.

Ensuring the accuracy and validity of data and programming at the analysis stage requires additional consideration. The Office of Surveillance and Epidemiology (OSE) of the Food and Drug Administration's Center for Drug Evaluation and Research uses the manual Standards of Data Management and Analytic Process in the Office of Surveillance and Epidemiology for analyses of databases conducted within OSE; the manual addresses many of these issues and may be consulted for further elaboration on these topics.14 Topics addressed that pertain to ensuring the accuracy of data just before and during analysis include developing a clear understanding of the data at the structural level of the database and variable attributes; creating analytic programs with careful documentation and an approach to variable creation and naming conventions that is straightforward and, when possible, consistent with the Clinical Data Interchange Standards Consortium initiative; and complete or partial verification of programming and analytic data set creation by a second analyst.

For more detail about validation substudies, please see Chapter 18.

5. Data Analysis

This section provides an overview of practical considerations for analysis of data from a registry. As the name suggests, a descriptive study focuses on describing frequency and patterns of various elements of a patient population, whereas an analytical study focuses on examining associations between patients or treatment characteristics and health outcomes of interest (e.g., comparative effectiveness).

Statistical methods commonly used for descriptive purposes include those that summarize information from continuous variables (e.g., mean, median) or from categorical variables (e.g., proportions, rates). Registries may describe a population using incidence (the proportion of the population that develops the condition over a specified time interval) and prevalence (the proportion of the population that has the condition at a specific point in time). Another summary estimate that is often used is an incidence rate. The incidence rate (also known as absolute risk) takes into account both the number of people in a population who develop the outcome of interest and the person-time at risk, or the length of time contributed by all people during the period when they were in the population and the events were counted.
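
A minimal sketch of these three descriptive measures, using illustrative counts rather than data from any actual registry:

```python
# Sketch: incidence proportion, prevalence, and incidence rate.
# All counts below are hypothetical.
n_at_risk_start = 5000    # disease-free patients at start of the interval
new_cases = 150           # cases developing during the interval
n_population = 5200       # total population at a single point in time
existing_cases = 400      # cases present at that point in time
person_years = 9800.0     # total time at risk contributed by all people

incidence_proportion = new_cases / n_at_risk_start   # over the interval
prevalence = existing_cases / n_population           # at a point in time
incidence_rate = new_cases / person_years            # per person-year

print(f"incidence proportion: {incidence_proportion:.3f}")
print(f"prevalence:           {prevalence:.3f}")
print(f"incidence rate:       {incidence_rate * 1000:.1f} per 1,000 person-years")
```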

For studies that include patient followup, an important part of the description of study conduct is to characterize how many patients are “lost,” or drop out, during the course of the registry, at what point they are lost, and if they return. Lasagna plots are one convenient method to visually assess missing data over time when conducting a longitudinal analysis.15 Figure 13–2 illustrates key points of information that provide a useful description of losses to followup and study dropouts.

This flowchart illustrates the process by which potential participants in a study are reduced, from the initial point at which eligibility is assessed, through the point at which they consent to participate in the study or refuse to do so, through successive points at which participants may be lost to followup.

Figure 13–2

The flow of participants into an analysis. Reprinted with permission from Tooth L, Ware R, Bain C. Quality of reporting of observational longitudinal research. Am J Epidemiol 2005;161(3):280–8. Copyright restrictions apply. By permission of Oxford University Press.

For analytical studies, the association between a risk factor and outcome may be expressed as attributable risk, relative risk, odds ratio, or hazard ratio, depending on the nature of the data collected, the duration of the study, and the frequency of the outcome. Attributable risk, a concept developed in the field of public health and preventive medicine, is defined as the proportion of disease incidence that can be attributed to a specific exposure, and it may be used to indicate the impact of a particular exposure at a population level. The standard textbooks cited here have detailed discussions regarding epidemiologic and statistical methods commonly used for the various analyses supported by registries.6, 16, 17,18, 19
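
For concreteness, the following sketch computes several of these measures of association from a hypothetical 2×2 table; the population attributable risk formula shown is one common formulation.

```python
# Sketch: measures of association from a 2x2 table with hypothetical
# counts (rows: exposed/unexposed; columns: event/no event).
a, b = 30, 970    # exposed: events, non-events
c, d = 10, 990    # unexposed: events, non-events

risk_exposed = a / (a + b)        # 0.030
risk_unexposed = c / (c + d)      # 0.010

relative_risk = risk_exposed / risk_unexposed    # 3.0
odds_ratio = (a * d) / (b * c)                   # ~3.06
risk_difference = risk_exposed - risk_unexposed  # excess risk in the exposed

# Population attributable risk: the share of total incidence in this
# population attributable to the exposure, given its prevalence here.
risk_total = (a + c) / (a + b + c + d)
population_attributable_risk = (risk_total - risk_unexposed) / risk_total

print(relative_risk, odds_ratio, risk_difference, population_attributable_risk)
```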

For analytical studies of data derived from observational studies such as registries, it is important to consider the role of confounding. Although those planning a study try to collect as much data as possible to address known confounders, there is always the chance that unknown confounders will affect the interpretation of analyses derived from observational studies. It is important to consider the extent to which bias (systematic error stemming from factors that are related to both the decision to treat and the outcomes of interest [confounders]) could have distorted the results. For example, selective prescribing (confounding by indication) results when people with more severe disease or those who have failed other treatments are more likely to receive newer treatments; these patients are systematically different from other patients who may be treated with the product under study. Misclassification in treatment can result from the patient's incorrect recall of dose, or poor adherence or treatment compliance. Other types of bias include detection bias20 (e.g., when comparison groups are assessed at different points in time or by different methods), selective loss to followup in which patients with the outcomes of most interest (e.g., sickest) may be more likely to drop out of one treatment group than another, and performance bias (e.g., systematic differences in care other than the intervention under study, such as a public health initiative promoting healthy lifestyles directed at patients who receive a particular class of treatment).

Confounding may be evaluated using stratified analysis, multivariable analysis, sensitivity analyses, and simple or quantitative bias analysis.12 Appropriate methods should be used to adjust for confounding. For example, if an exposure or treatment varies over time and the confounding variable also varies over time, traditional adjustment using conventional multivariable modeling will introduce selection bias. Marginal structural models use inverse probability weighting to account for time-dependent confounding without introducing selection bias.21 The extensive information and large sample sizes available in some registries also support use of more advanced modeling techniques for addressing confounding by indication, such as the use of propensity scores to create matched comparison groups, or for stratification or inclusion in multivariable risk modeling.22-25 Newer methods include the high-dimensional propensity score (hd-PS) for adjustment using administrative data.26 The uptake of these approaches in the medical literature in recent years has been extremely rapid, and their application to analyses of registry data has also been broad. Examples are too numerous for a few selections to be fully representative, but registries in nearly every therapeutic area, including cancer,27 cardiac devices,28 organ transplantation,29 and rare diseases,30 have published the results of analyses incorporating approaches based on propensity scores. As noted in Chapter 3, instrumental variable methods present opportunities for assessing and reducing the impact of confounding by indication,31-33 but verification of the assumptions is important to ensure that an instrument is valid.34 Violations of the instrumental variable assumptions or the use of a weak instrument will lead to results more biased than those from conventional methods.35 While a variety of methods have been developed to address confounding, particularly confounding by indication, residual confounding may still be present even after adjustment; therefore, these methods may not fully control for unmeasured confounding.35 For specific examples of the application of these methods, please see Chapter 18. Information bias, such as misclassification, and selection bias are also threats to the validity of findings; examples can be found in Chapter 18. For further information on how to quantify bias, please see Lash, Fox, and Fink.13
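
As one concrete illustration, the sketch below implements simple point-treatment inverse-probability-of-treatment weighting with a propensity score (not a full marginal structural model for time-varying treatment). The data file and column names are hypothetical, and, as noted above, only measured confounders are addressed.

```python
# Sketch: inverse-probability-of-treatment weighting (IPTW) with a
# propensity score. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("registry_analytic.csv")  # hypothetical analytic file

# 1. Propensity model: probability of treatment given measured confounders.
confounders = sm.add_constant(df[["age", "severity", "comorbidity_count"]])
ps_model = sm.Logit(df["treated"], confounders).fit(disp=0)
ps = ps_model.predict(confounders)

# 2. Stabilized inverse-probability weights.
p_treated = df["treated"].mean()
df["iptw"] = np.where(df["treated"] == 1,
                      p_treated / ps,
                      (1 - p_treated) / (1 - ps))

# 3. Weighted outcome model; in the weighted pseudo-population,
#    treatment is independent of the measured confounders.
outcome_model = sm.WLS(df["outcome"],
                       sm.add_constant(df[["treated"]]),
                       weights=df["iptw"]).fit()
print(outcome_model.params["treated"])
```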

Groupings within a study population, such as patients seen by a single clinician or practice, residents of a neighborhood, or other “clusters,” may themselves impact or predict health outcomes of interest. Such groupings may be accounted for in analysis through use of analytic methods including analysis of variance (ANOVA), and hierarchical or multilevel modeling.36-39
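
A minimal sketch of one such approach, a random-intercept multilevel model for site-level clustering, using the statsmodels formula interface with hypothetical field names:

```python
# Sketch: mixed (multilevel) model with a random intercept for the
# clustering unit. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("registry_analytic.csv")

# The random intercept per site accounts for correlation among
# patients treated at the same site.
model = smf.mixedlm("outcome ~ treated + age + severity",
                    data=df, groups="site_id").fit()
print(model.summary())
```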

Heterogeneity of treatment effect is also an important consideration for comparative effectiveness research as the effect of a treatment may vary within subgroups of heterogeneous patients.40 Stratification on the propensity score has been used to identify heterogeneity of treatment effect and may identify clinically meaningful differences between subgroups.

For economic analyses, the analytic approaches often encountered are cost-effectiveness analyses and cost-utility studies. To examine cost-effectiveness, costs are compared with clinical outcomes measured in units such as life expectancy or years of disease avoided.41 Cost-utility analysis, a closely related technique, compares costs with outcomes adjusted for quality of life (utility) using measures known as quality-adjusted life years. Since most new interventions are more effective but also more expensive, another analytic approach examines the incremental cost-effectiveness ratio and contrasts that to the willingness to pay. (Willingness-to-pay analyses are generally conducted on a country-by-country basis, since various factors relating to national health insurance practices and cultural issues affect willingness to pay.) The use of registries for cost-effectiveness evaluations is a fairly recent development, and consequently, the methods are evolving rapidly. More information about economic analyses can be found in standard textbooks.42-47

It is important to emphasize that cost-effectiveness analyses, much like safety and clinical effectiveness analyses, require collection of specific data elements suited to the purpose. Although cost-effectiveness-type analyses are becoming more important and registries can play a key role in such analyses, registries traditionally have not collected much information on quality of life or resource use that can be linked to cost data.48 To be used for cost-effectiveness analysis, registries must be developed with that purpose in mind.
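
For illustration, a minimal incremental cost-effectiveness ratio (ICER) computation, with hypothetical per-patient means and an illustrative willingness-to-pay threshold:

```python
# Sketch: ICER for a new intervention vs. standard of care.
# All values are hypothetical per-patient means from a registry analysis.
cost_new, cost_std = 42_000.0, 30_000.0   # mean cost per patient
qaly_new, qaly_std = 6.1, 5.6             # mean quality-adjusted life years

icer = (cost_new - cost_std) / (qaly_new - qaly_std)  # $ per QALY gained
willingness_to_pay = 50_000.0                          # illustrative threshold

print(f"ICER = ${icer:,.0f} per QALY gained")
print("below willingness-to-pay threshold" if icer <= willingness_to_pay
      else "above willingness-to-pay threshold")
```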

5.1. Developing a Statistical Analysis Plan

5.1.1. Need for a Statistical Analysis Plan

It is important to develop a statistical analysis plan (SAP) that describes the analytical principles and statistical techniques to be employed in order to address the primary and secondary objectives, as specified in the study protocol or plan. Generally, the SAP for a registry study intended to support decisionmaking, such as a safety registry, is likely to be more detailed than the SAP for a descriptive study or health economics study. A registry may require a primary “master SAP” as well as subsequent, supplemental SAPs. Supplemental SAPs might be triggered by new research questions emerging after the initial master SAP was developed or might be needed because the registry has evolved over time (e.g., additional data collected, data elements revised). Although the evolving nature of data collection practices in some registries poses challenges for data analysis and interpretation, it is important to keep in mind that the ability to answer questions emerging during the course of the study is one of the advantages (and challenges) of a registry. In the specific case of long-term rare-disease registries, many of the relevant research questions of interest cannot be defined a priori but arise over time as disease knowledge and treatment experience accrue. Supplemental SAPs can be developed only when enough data become available to analyze a particular research question. At times, the method of statistical analysis may have to be modified to accommodate the amount and quality of data available. To the extent that the research question and SAP are formulated before the data analyses are conducted and results are used to answer specific questions or hypotheses, such supplemental analysis retains much of the intent of prespecification rather than being wide-ranging exploratory analyses (sometimes referred to as “fishing expeditions”). The key to success is to provide sufficient details in the SAP that, together with the study protocol and the case report forms, describe the overall process of the data analysis and reporting.

5.1.2. Preliminary Descriptive Analysis To Assist SAP Development

During SAP development, one particular aspect of a registry that is somewhat different from a randomized controlled study is the necessity to understand the “shape” of the data collected in the study by conducting a simple stratified analysis.15 This may be crucial for a number of reasons.

Given the broad inclusion criteria that most registries tend to propose, there might be a wide distribution of patients, treatment, and/or outcome characteristics. The distribution of age, for example, may help to determine if more detailed analyses should be conducted in the “oldest old” age group (80 years and older) to help understand health outcomes in this subgroup that might be different from those of their younger counterparts.

Unless a registry is designed to limit data collection to a fixed number of regimens, the study population may experience many “regimens,” considering the combination of various dose levels, drug names, frequency and timing of medication use (e.g., acute, chronic, intermittent), and sequencing of therapies. The scope and complexity of these variations constitute one of the most challenging aspects of analyzing a registry, since treatment is given at each individual physician's discretion. Grouping of treatment into regimens for analysis should be done carefully, guided by clinical experts in that therapeutic area. The full picture of treatment patterns may become clear only after a sizable number of patients have been enrolled. Consequently, the treatment definition in an SAP may be refined during the course of a study. Furthermore, there may be occasions where a particular therapeutic regimen is used in a much smaller number of patients than anticipated, so that specific study objectives focusing on this group of patients might become unfeasible. Also, the registry might have enrolled many patients who would normally be excluded from a clinical trial because of significant contraindications related to comorbidity or concomitant medication use. In this case, the SAP may need to define how these patients will be analyzed (either as a separate group or as part of the overall study population) and how these different approaches might affect the interpretation of the study results.

There is a need to evaluate the presence of potential sources of bias and, to the extent feasible, use appropriate statistical measures to address such biases. For example, the bias known as confounding by indication49 results from the fact that physicians do not prescribe medicine at random: the reason a patient is put on a particular regimen is often associated with their underlying disease severity and may, in turn, affect treatment outcome. (See Chapter 18 for more detailed discussion and examples.) To detect such a bias, the distribution of various prognostic factors at baseline is compared for patients who receive a treatment of interest and those who do not. A related concept is channeling bias, in which drugs with similar therapeutic indications are prescribed to groups of patients who may differ with regard to factors influencing prognosis.50 To detect such a bias, registry developers and users must document the characteristics of the treated and untreated participants and either demonstrate their comparability or use statistical techniques to adjust for differences where possible. (Additional information about biases often found in registries is detailed in Chapter 3, Section 10.) In addition to such biases, analyses need to account for factors that are interrelated, also known as effect modifiers.15 The presence of effect modification may also be identified after the data are collected. All of these issues should be taken into account in an SAP, based on understanding of the patient population in the registry.

5.2. Timing of Analyses during the Study

Unlike a typical clinical trial, registries, especially those that take several years to complete, may conduct intermediate analyses before all patients have been enrolled and/or all data collection has been completed. Such midcourse analyses may be undertaken for several reasons. First, many of these registries focus on serious safety outcomes. For such safety studies, it is important for all parties involved to actively monitor the frequency of such events at regular predefined intervals so that further risk assessment or risk management can be considered. The timing of such analyses may be influenced by regulatory requirements. Second, it may be of interest to examine treatment practices or health outcomes during the study to capture any emerging trends. Finally, it may also be important to provide intermediate or periodic analysis to document progress, often as a requirement for continued funding.

While it is useful to conduct such periodic analysis, careful planning should be given to the process and timing. The first questions are whether a sufficient number of patients have been enrolled and whether a sufficient number of events have occurred. Answers to both questions can be estimated based on the speed of enrollment and rate of patient retention, as well as the expected incidence rate of the event of interest. The second issue is whether sufficient time has elapsed after the initial treatment with a product so that it is biologically plausible for events to have occurred. (For example, some events, such as site reactions to injections, can be observed after a relatively short duration, compared with events like cancers, which may have a long induction or latency.) If there are too few patients or insufficient time has elapsed, premature analyses may lead to the inappropriate conclusion that there is no occurrence of a particular event. Similarly, uncommon events, occurring by random chance in a limited sample, may be incorrectly construed as a safety signal. However, it is inappropriate to delay analysis so long that an opportunity might be missed to observe emerging safety outcomes. Investigators should use sound clinical and epidemiological judgment when planning an intermediate analysis and, whenever possible, use data from previous studies to help to determine the feasibility and utility of such an analysis.
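
One way to make such feasibility judgments concrete is to treat event counts as approximately Poisson. The sketch below, with purely illustrative inputs, estimates the expected number of events at the planned analysis time and the probability of observing none even if the assumed rate is correct.

```python
# Sketch: feasibility check before an intermediate safety analysis,
# assuming Poisson event counts. All inputs are illustrative.
import math

enrolled = 1500
mean_followup_years = 0.8
incidence_rate = 0.002            # assumed events per person-year

mu = enrolled * mean_followup_years * incidence_rate  # expected events
p_zero = math.exp(-mu)            # chance of observing zero events even
                                  # if the true rate equals the assumption
print(f"expected events = {mu:.1f}; P(0 events) = {p_zero:.2f}")
```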

When planning the timing of the analysis, it may be helpful to consider substudies if emerging questions require data not initially collected. Substudies often involve data collection based on biological specimens or specific laboratory procedures. They may, for example, take the form of nested case-control studies. In other situations, a research question may be applicable only to a subset of patients, such as those who become pregnant while in the study. It may also be desirable to conduct substudies among patients in a selected site or patient group to confirm the validity of study measurement. In such instances, a supplemental SAP may be a useful tool to describe the statistical principles and methods.

5.3. Factors To Be Considered in the Analysis

Registry results are most interpretable when they are specific to well-defined endpoints or outcomes in a specific patient population with a specific treatment status. Registry analyses may be more meaningful if variations of study results across patient groups, treatment methods, or subgroups of endpoints are reported. In other words, analysis of a registry should explicitly provide the following information:

  • Patient: What are the characteristics of the patient population in terms of demographics, such as age, gender, race/ethnicity, insurance status, and clinical and treatment characteristics (e.g., past history of significant medical conditions, disease status at baseline, and prior treatment history)?
  • Exposure (or treatment): Exposure could be therapeutic treatment such as medication or surgery; a diagnostic or screening tool; behavioral factors such as alcohol use, smoking habits, and diet; or other factors such as genetic predisposition or environmental factors. What are the distributions of the exposure in the population? Is the study objective specific to any one form of treatment? Is a new user design being used?51 Do the exposure definition (index and reference group) and the analysis avoid immortal-time bias52 (see the person-time sketch following this list)? Are there repeated measures, or is the exposure intermittent?
  • Endpoints (or outcomes): Outcomes of interest may encompass effectiveness or comparative effectiveness, the benefits of a health care intervention under real-world circumstances,53 and safety—the risks or harms that may be associated with an intervention. Examples of effectiveness outcomes include survival, disease recurrence, symptom severity, quality of life, and cost-effectiveness. Safety outcomes may include infection, sensitivity reactions, cancer, organ rejection, and mortality. Endpoints must be precisely defined at the data collection and analysis stages. Are the study data on all-cause mortality or cause-specific mortality? Is information available on pathogen-specific infection (e.g., bacterial vs. viral)? (See Case Example 27.) Are there competing risks?54
  • Covariates: As with all observational studies, comparative effectiveness research requires careful consideration, collection, and analysis of important confounding and effect modifying variables. For medication exposures, are dose, duration, and calendar time under consideration? Directed acyclic graphs (DAGs) can be useful tools to illustrate how the exposure (or treatment), outcome and covariates are related.55, 56
  • Time: For valid analysis of risk or benefit that occurs over a period of time following therapy, detailed accounting for time factors is required. For exposures, dates of starting and stopping a treatment or switching therapies should be recorded. For outcomes, the dates when followup visits occur, and whether or not they lead to a diagnosis of an outcome of interest, are required in order to take into account how long and how frequently patients were followed. Dates of diagnosis of outcomes of interest, or dates when patients complete a screening tool or survey, should be recorded. At the analysis stage, results must also be described in a time-appropriate fashion. For example, is an observed risk consistent over time (in relation to initiation of treatment) in a long-term study? If not, what time-related risk measures should be reported in addition to or instead of cumulative risk? When exposure status changes frequently, what is the method of capturing the population at risk? Many observational studies of intermittent exposures (e.g., use of nonsteroidal antiinflammatory drugs or pain medications) use time windows of analysis, looking at events following first use of a drug after a prescribed interval (e.g., 2 weeks) without drug use. Different analytic approaches may be required to address issues of patients enrolling in a registry at different times and/or having different lengths of observation during the study period.
  • Potential for bias: Successful analysis of observational studies also depends to a large extent on the ability to measure and analytically address the potential for bias. Refer to Chapter 3, Section 10 for a description of potential sources of bias. Directed acyclic graphs can also be useful for understanding and identifying the source of bias.55, 56 Details and examples of quantification of bias can be found in Chapter 18. For details on how to quantify potential bias, see the textbook by Lash, Fox, and Fink.13
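
As referenced in the exposure item above, the following sketch shows person-time attribution for a single hypothetical patient record; counting the pre-initiation interval as exposed would credit the treated group with event-free (“immortal”) time.

```python
# Sketch: splitting followup so that time before treatment initiation
# counts as unexposed person-time, avoiding immortal-time bias.
# Dates and the single-patient record are illustrative.
from datetime import date

cohort_entry = date(2012, 1, 1)
treatment_start = date(2012, 7, 1)   # patient initiates therapy mid-year
followup_end = date(2013, 1, 1)      # event or censoring date

# Misclassifying the Jan-Jun interval as "exposed" would add event-free
# ("immortal") time to the treated group. Split the followup instead:
unexposed_days = (treatment_start - cohort_entry).days
exposed_days = (followup_end - treatment_start).days

print(f"unexposed person-days: {unexposed_days}")  # 182
print(f"exposed person-days:   {exposed_days}")    # 184
```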

5.3.1. Choice of Comparator

An example of a troublesome source of bias is the choice of comparator. When participants in a cohort are classified into two or more groups according to certain study characteristics (such as treatment status, with the “standard of care” group as the comparator), the registry is said to have an internal or concurrent comparator. The advantage of an internal comparator design is that patients are likely to be more similar to each other, except for their treatment status, than patients in comparisons between registry subjects and external groups of subjects. When defining the comparator group, it is important not to introduce immortal time bias.52 In addition, consistency in measurement of specific variables and in data collection methods make the comparison more valid. Internal comparators are particularly useful for treatment practices that change over time. Comparative effectiveness studies may often necessitate use of an internal comparator in order to maximize the comparability of patients receiving different treatments within a given study, and to ensure that variables required for multivariable analysis are available and measured in an equivalent manner for all patients to be analyzed.

Unfortunately, it is not always possible to have or sustain a valid internal comparator. For example, there may be significant medical differences between patients who receive a particularly effective therapy and those who do not (e.g., underlying disease severity or contraindications), or it may not be feasible to maintain a long-term cohort of patients who are not treated with such a medication. It is known that external information about treatment practices (such as scientific publications or presentations) can result in physicians changing their practice, such that they no longer prescribe the previously accepted standard of care. There may be a systematic difference between physicians who are early adopters and those who start using the drug or device after its effectiveness has been more widely accepted. Early adopters may also share other practices that differentiate them from their later-adopting colleagues.5

In the absence of a good internal comparator, one may have to leverage external comparators to provide critical context to help interpret data revealed by a registry. An external or historical comparison may involve another study or another database that has disease or treatment characteristics similar to those of registry subjects. Such data may be viewed as a context for anticipating the rate of an event. One widely used comparator is the U.S. SEER cancer registry data, because SEER provides detailed annual incidence rates of cancer stratified by cancer site, age group, gender, and tumor staging at diagnosis. SEER represents 28 percent of the U.S. population.57 A procedure for formalizing comparisons with external data is known as standardized incidence rate or ratio;15 when used appropriately, it can be interpreted as a proxy measure of risk or relative risk.
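
A minimal sketch of the standardized incidence ratio computation, with illustrative age strata, person-years, and external (SEER-like) rates:

```python
# Sketch: standardized incidence ratio (SIR) against external
# age-specific rates. All numbers are illustrative.
person_years = {"40-49": 1200.0, "50-59": 2500.0, "60-69": 1800.0}
external_rate = {"40-49": 0.001, "50-59": 0.003, "60-69": 0.007}  # per person-year
observed_cases = 25

# Expected cases if the registry population experienced the external rates.
expected_cases = sum(person_years[a] * external_rate[a] for a in person_years)
sir = observed_cases / expected_cases   # >1 suggests excess over external rates

print(f"expected = {expected_cases:.1f}, SIR = {sir:.2f}")
```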

Use of an external comparator, however, may present significant challenges. For example, SEER and a given registry population may differ from each other for a number of reasons. The SEER data cover the general population and have no exclusion criteria pertaining to history of smoking or cancer screening, for example. On the other hand, a given registry may consist of patients who have an inherently different risk of cancer than the general population, resulting from the registry's having excluded smokers and others known to be at high risk of developing a particular cancer. Such a registry would be expected to have a lower overall incidence rate of cancer, which, if SEER incidence rates are used as a comparator, may complicate or confound assessments of the impact of treatment on cancer incidence in the registry.

Regardless of the choice of comparator, similarity between the groups under comparison should not be assumed without careful examination of the study patients. Different comparator groups may result in very different inferences for safety and effectiveness evaluations; therefore, analysis of registry findings using different comparator groups may be used in sensitivity analyses or bias analyses to determine the robustness of a registry's findings. Sensitivity analysis refers to a procedure used to determine how robust the study result is to alterations of various parameters. If a small parameter alteration leads to a relatively large change in the results, the results are said to be sensitive to that parameter. Sensitivity and bias analyses may be used to determine how the final study results might change when taking into account those lost to followup. A simple hypothetical example is presented in Table 13–1.

Table 13–1. Hypothetical simple sensitivity analysis.

Table 13–1 illustrates the extent of change in the incidence rate of a hypothetical outcome under varying degrees of loss to followup, and under varying differences in incidence between those with followup information and those lost to followup. In the first example, where 10 percent of the patients are lost to followup, the estimated incidence rate of 111/1,000 people is reasonably stable; it changes relatively little when the (unknown) incidence in those lost to followup varies from 0.5 times to 5 times the observed incidence, with the corresponding incidence rate that would have been observed ranging from 106 to 156 per 1,000. On the other hand, when the loss to followup increases to 30 percent, the corresponding incidence rates that would have been observed range from 94 to 242 per 1,000. This procedure could be extended to a study with more than one cohort of patients, one exposed and the other nonexposed; in that case, the impact of loss to followup on the relative risk could be estimated through sensitivity analysis. More examples are included in Chapter 18.
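The calculation behind this kind of table is simple enough to sketch in a few lines of Python. In the fragment below, the incidence that would have been observed is a weighted average of the observed incidence among followed patients and an assumed incidence among those lost; the rates and multipliers mirror the hypothetical example above, and small discrepancies from Table 13–1 would reflect rounding conventions only.

```python
# Sensitivity analysis for loss to followup (hypothetical values).
# Overall incidence = weighted average of the incidence observed among
# followed patients and an assumed incidence among those lost.

observed_rate = 111 / 1000  # incidence among patients with complete followup

for loss_fraction in (0.10, 0.30):           # proportion lost to followup
    for multiplier in (0.5, 1.0, 2.0, 5.0):  # assumed rate among the lost, relative to observed
        lost_rate = multiplier * observed_rate
        corrected = (1 - loss_fraction) * observed_rate + loss_fraction * lost_rate
        print(f"loss={loss_fraction:.0%}, multiplier={multiplier}: "
              f"{corrected * 1000:.0f} per 1,000")
```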

5.3.2. Patient Censoring

At the time of a registry analysis, events may not yet have occurred for all patients. For these patients, the data are said to be censored, indicating that the observation period ended before the event (e.g., mortality) was observed. In these situations, it is unclear when the event will occur, if at all. In addition, a registry may enroll patients until a set stop date, and patients entered into the registry earlier will have a greater probability of having an event than those entered more recently, simply because of their longer followup. An important assumption, and one that needs to be assessed in a registry, is how patient prognosis varies with the time of entrance into the registry. This issue may be particularly problematic in registries that assess innovative (and changing) therapies. Patients and outcomes initially observed in the registry may differ from patients and outcomes observed later in the registry timeframe, either because of true differences in treatment options available at different points in time, or because of the shorter followup for people who entered later.

Patients with censored data, however, contribute important information to the registry analysis. When possible, analyses should be planned so as to include all subjects, including those censored before the end of the followup period or the occurrence of an event. One method of analyzing censored data is the Kaplan-Meier method.58 In this method, for each time period, the probability is calculated that those who have not experienced an event before the beginning of the period will still not have experienced it by the end of the period. The probability of remaining event free at any given time is then calculated as the product of the conditional probabilities of the time intervals up to that point.
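A bare-bones illustration of this calculation is sketched below in Python; the followup times and event indicators are fabricated, and a production analysis would use an established survival analysis package rather than hand-rolled code. At each distinct event time, the conditional probability of remaining event free is estimated among those still at risk, and the running product of these conditional probabilities gives the Kaplan-Meier curve.

```python
# Minimal Kaplan-Meier estimator (hypothetical data).
# Each tuple is (followup time in months, event indicator: 1 = event, 0 = censored).
data = [(2, 1), (3, 0), (6, 1), (6, 1), (7, 0), (10, 1), (12, 0), (15, 1)]

# Distinct times at which events occurred, in increasing order.
event_times = sorted({t for t, event in data if event == 1})

survival = 1.0
for t in event_times:
    at_risk = sum(1 for time, _ in data if time >= t)                  # still under observation
    events = sum(1 for time, event in data if time == t and event == 1)
    survival *= (at_risk - events) / at_risk                           # conditional survival through t
    print(f"t={t:>2} months: at risk={at_risk}, events={events}, S(t)={survival:.3f}")
```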

For information about right censoring and left truncation, please see Chapter 18.

6. Summary of Analytic Considerations

In summary, a meaningful analysis requires careful consideration of study design features and of the nature of the data collected. Most analytic methods typical of epidemiological studies can be applied; there is no one-size-fits-all approach. Efforts should be made to carefully evaluate the presence of biases and to control for identified potential biases during data analysis. This requires close collaboration among clinicians, epidemiologists, statisticians, study coordinators, and others involved in the design, conduct, and interpretation of the registry.

A number of biostatistics and epidemiology textbooks cover in depth the issues raised in this section and the appropriate analytic approaches for addressing them—for example, “time-to-event” or survival analyses59 and issues of recurrent outcomes and repeated measures, with or without missing data,60 in longitudinal cohort studies. Other texts address a range of regression and nonregression approaches to analysis of case-control and cohort study designs61 that may be applied to registries.

7. Interpretation of Registry Data

Interpretation of registry data is needed so that the lessons from the registry can be applied to the target population and used to change future health care and improve patient outcomes. Proper interpretation allows users to understand the precision of the observed risk or incidence estimates, to evaluate the hypotheses tested in the current registry, and often also to generate new hypotheses to be examined in future registries or in randomized controlled trials. If the purpose of the registry is explicit, the population actually studied is reasonably representative of the target population, data quality is monitored, and the analyses are performed so as to reduce potential biases, then interpretation of the registry data should yield a realistic picture of the quality of medical care, the natural history of the disease studied, or the safety, effectiveness, or value of a clinical intervention. Each of these topics needs to be discussed in the interpretation of the registry data, and potential shortcomings should be explored. Assumptions or biases that could have influenced the outcomes of the analyses should be highlighted and separated from those that do not affect the interpretation of the results. The use of a comparator of the highest reasonably achievable quality is integral to proper interpretation of the analysis.

Interpretation of registry results may also be aided by comparisons with external information, such as rates or prevalence of the outcomes of interest in other studies and data sources (taking into account reasons why they may be similar or different). Such comparisons can place the findings of registry analyses within the context of previous study results and bring to bear pertinent clinical and biological considerations regarding the validity and generalizability of the results.

Once analyzed, registries provide important feedback to several groups. First, analysis and interpretation of the registry will demonstrate strengths and limitations of the original registry design and will allow the registry developers to make needed design changes for future versions of the registry. Another group consists of the study's sponsors and related oversight/governance groups, such as the scientific committee and data monitoring committee. (Refer to Chapter 2, Section 2.6 for more information on registry governance and oversight.) Interpretation of the analyses allows the oversight committees to offer recommendations concerning continued use and/or adaptation of the registry and to evaluate patient safety. The final group consists of the end users of the registry output, such as patients or other health care consumers, health services researchers, health care providers, and policymakers. These are the people for whom the data were collected and who may use the results to choose a treatment or intervention, to determine the need for additional research programs, to change clinical practice, to develop clinical practice guidelines, or to determine policy. Ideally, all three user groups work toward the ultimate goal of each registry: improving patient outcomes.

Case Examples for Chapter 13

Case Example 26. Using registry data to evaluate outcomes by practice

Description: The Epidemiologic Study of Cystic Fibrosis (ESCF) Registry was a multicenter, encounter-based, observational, postmarketing study designed to monitor product safety, define clinical practice patterns, explore risks for pulmonary function decline, and facilitate quality improvement for cystic fibrosis (CF) patients. The registry collected comprehensive data on pulmonary function, microbiology, growth, pulmonary exacerbations, CF-associated medical conditions, and chronic and acute treatments for pediatric and adult CF patients at each visit to the clinical site.
Sponsor: Genentech, Inc.
Year Started: 1993
Year Ended: Patient enrollment completed in 2005; followup complete.
No. of Sites: 215 sites over the life of the registry
No. of Patients: 32,414 patients and 832,705 encounters recorded

Challenge

Although guidelines for managing cystic fibrosis patients have been widely available for many years, little is known about variations in practice patterns among care sites and their associated outcomes. To determine whether differences in lung health existed between groups of patients attending different CF care sites, and to determine whether these differences were associated with differences in monitoring and intervention, data on a large number of CF patients from a wide variety of CF sites were necessary.

As a large, observational, prospective registry, ESCF collected data on a large number of patients from a range of participating sites. At the time of the outcomes study, the registry was estimated to have data on over 80 percent of CF patients in the United States, and it collected data from more than 90 percent of the sites accredited by the U.S. Cystic Fibrosis Foundation. Because the registry contained a representative population of CF patients, the registry database offered strong potential for analyzing the association between practice patterns and outcomes.

Proposed Solution

In designing the study, the team decided to compare CF sites using lung function (i.e., FEV1 [forced expiratory volume in 1 second] values), a common surrogate outcome for respiratory studies. Data from 18,411 patients followed in 194 care sites were reviewed, and 8,125 patients from 132 sites (minimum of 50 patients per site) were included. Only sites with at least 10 patients in a specified age group (ages 6–12, 13–17, and 18 or older) were included for evaluation of that age group. For each age group, sites were ranked in quartiles based on the median FEV1 value at each site. The frequency of patient monitoring and use of therapeutic interventions were compared between upper and lower quartile sites after stratification for disease severity.
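The site-ranking step can be illustrated with a short Python sketch using pandas. The data frame and column names below are hypothetical rather than actual ESCF variables: each row carries one patient's site, age group, and FEV1 percent predicted; site medians are computed within each age group, sites below the per-group patient minimum are dropped, and the remaining sites are ranked into quartiles.

```python
import pandas as pd

# Hypothetical patient-level data; column names are invented for illustration.
df = pd.DataFrame({
    "site_id":   ["A", "A", "B", "B", "C", "C", "D", "D"],
    "age_group": ["6-12"] * 8,
    "fev1_pct":  [92, 88, 75, 70, 101, 97, 60, 66],
})

# Median FEV1 per site within each age group, keeping only sites with enough
# patients in that group (threshold lowered for this toy data set; the study
# required at least 10 patients per age group).
MIN_PATIENTS = 2
grouped = df.groupby(["age_group", "site_id"])["fev1_pct"]
site_medians = grouped.median()[grouped.count() >= MIN_PATIENTS]

# Rank sites into quartiles within each age group (1 = lowest median FEV1,
# 4 = highest).
quartiles = site_medians.groupby("age_group").transform(
    lambda s: pd.qcut(s, 4, labels=False) + 1
)
print(pd.DataFrame({"median_fev1": site_medians, "quartile": quartiles}))
```

Upper- and lower-quartile sites could then be compared on monitoring frequency and use of interventions, after stratification for disease severity, as described above.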

Results

Substantial differences in lung health across different CF care sites were observed. Within-site rankings tended to be consistent across the three age groups. Patients who were cared for at higher-ranking sites had more frequent monitoring of their clinical status, measurements of lung function, and cultures for respiratory pathogens. These patients also received more interventions, particularly intravenous antibiotics for pulmonary exacerbations. The study concluded that frequent monitoring and increased use of appropriate medications in the management of CF are associated with improved outcomes.

Key Point

Stratifying patients by quartile of lung function, age, and disease severity allowed comparison of practices among sites and revealed practice patterns that were associated with better clinical status. The large numbers of patients and sites provided sufficient information to create meaningful strata, and sufficient information within those strata to reveal meaningful differences in site practices.

For More Information

Johnson C, Butler SM, Konstan MW, et al. Factors influencing outcomes in cystic fibrosis: a center-based analysis. Chest. 2003;123:20–7. [PubMed: 12527598].

Padman R, McColley SA, Miller DP, et al. Infant care patterns at Epidemiologic Study of Cystic Fibrosis sites that achieve superior childhood lung function. Pediatrics. 2007;119:E531–7. [PubMed: 17332172].

Case Example 27. Using registry data to study patterns of use and outcomes

Description: The Palivizumab Outcomes Registry was designed to characterize the population of infants receiving prophylaxis for respiratory syncytial virus (RSV) disease, to describe the patterns and scope of the use of palivizumab, and to gather data on hospitalization outcomes.
Sponsor: MedImmune, LLC
Year Started: 2000
Year Ended: 2004
No. of Sites: 256
No. of Patients: 19,548 infants

Challenge

RSV is the leading cause of serious lower respiratory tract disease in infants and children and the leading cause of hospitalizations nationwide for infants under 1 year of age. Palivizumab was approved by the U.S. Food and Drug Administration (FDA) in 1998 and is indicated for the prevention of serious lower respiratory tract disease caused by RSV in pediatric patients at high risk of RSV disease. Two large retrospective surveys conducted after FDA approval studied the effectiveness of palivizumab in infants and confirmed that it reduces the rate of RSV hospitalizations. To capture postlicensure demographic and outcome information, the manufacturer wanted to create a prospective study that identified infants receiving palivizumab. The objectives of the study were to better understand the population receiving prophylaxis for RSV disease and to study the patterns of use and hospitalization outcomes.

Proposed Solution

A multicenter registry study was created to collect data on infants receiving palivizumab injections. No control group was included. The registry was initiated during the 2000–2001 RSV season. Over 4 consecutive years, 256 sites across the United States enrolled infants under their care who had received palivizumab for RSV prophylaxis, provided that the infant's parent or legally authorized representative gave informed consent for participation in the registry. Data were collected by the primary health care provider in the office or clinic setting. The registry was limited to data collection related to subjects' usual medical care. Infants were enrolled at the time of their first injection, and data were obtained on palivizumab injections, demographics, and risk factors, as well as on medical and family history.

Followup forms were used to collect data on subsequent palivizumab injections, including dates and doses, during the RSV season. Compliance with the prescribed injection schedule was determined by comparing the number of injections actually received with the number of doses expected based on the month in which the first injection was administered. Infants who received their first injection in November were expected to receive five injections, whereas infants receiving their first injection in February would be expected to receive only two doses through March. Data were also collected for all enrolled infants hospitalized for RSV and were reported directly to an onsite registry coordinator. Testing for RSV was performed locally, at the discretion of the health care provider.

Adverse events were not collected and analyzed separately for purposes of this registry. Palivizumab is contraindicated in children who have had a previous significant hypersensitivity reaction to palivizumab. Cases of anaphylaxis and anaphylactic shock, including fatal cases, have been reported following initial exposure or re-exposure to palivizumab, and other acute hypersensitivity reactions, some severe, have also been reported. Adverse reactions occurring in at least 10 percent of patients and at least 1 percent more frequently than with placebo are fever and rash. In postmarketing reports, cases of severe thrombocytopenia (platelet count <50,000/microliter) and injection site reactions have been reported.
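The compliance calculation lends itself to a simple sketch in Python. Assuming monthly dosing through the end of March, as implied by the November and February examples above, the fragment below derives expected doses from the month of the first injection; the entries for months other than November and February are assumptions that follow the same logic, not figures reported by the registry.

```python
# Expected palivizumab doses by month of first injection, assuming monthly
# dosing through March. Only the November (5) and February (2) values are
# stated in the text; the other entries are assumed by extension.
EXPECTED_DOSES = {"November": 5, "December": 4, "January": 3, "February": 2, "March": 1}

def compliance(first_month: str, doses_received: int) -> float:
    """Doses actually received as a fraction of doses expected."""
    return doses_received / EXPECTED_DOSES[first_month]

print(compliance("November", 4))  # 0.8 -> received 4 of 5 expected doses
print(compliance("February", 2))  # 1.0 -> fully compliant
```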

Results

From September 2000 through May 2004, the registry collected data on 19,548 infants. The analysis presented injection rates and hospitalization rates for all infants by month of injection and by site of first dose (pediatrician's office or hospital). The observed number of injections per infant was compared with the expected number of doses based on the month the first injection was given. Over 4 years of data collection, less than 2 percent (1.3%) of enrolled infants were hospitalized for RSV. This analysis confirmed a low hospitalization rate for infants receiving palivizumab prophylaxis for RSV in a large nationwide cohort drawn from a geographically diverse group of practices and clinics. The registry data also showed that the use of palivizumab was largely consistent with the 2003 guidelines of the American Academy of Pediatrics for use of palivizumab to prevent RSV infections. Because the registry was conducted prospectively, nearly complete demographic information and approximately 99 percent of followup information were captured for all enrolled infants, an improvement over previously completed retrospective studies.

Key Point

A simple stratified analysis was used to describe the characteristics of infants receiving injections to help prevent severe RSV disease. Infants in the registry had a low hospitalization rate, and these data support the effectiveness of this treatment outside of a controlled clinical study. Risk factors for RSV hospitalization were described and quantified by presenting the number of infants hospitalized for RSV as a percentage of all enrolled infants. These data supported an analysis of postlicensure effectiveness of RSV prophylaxis, in addition to describing the patient population and usage patterns.

For More Information

Leader S, Kohlhase K. Respiratory syncytial virus-coded pediatric hospitalizations, 1997-1999. Pediatr Infect Dis J. 2002;21(7):629–32. [PubMed: 12237593].

Frogel M, Nerwen C, Cohen A, et al. Prevention of hospitalization due to respiratory syncytial virus: Results from the Palivizumab Outcomes Registry. J Perinatol. 2008;28:511–7. [PubMed: 18368063].

American Academy of Pediatrics—Committee on Infectious Disease. Red Book 2003: Policy Statement: Revised indications for the use of palivizumab and respiratory syncytial virus immune globulin intravenous for the prevention of respiratory syncytial virus infections. Pediatrics. 2003;112:1442–6. [PubMed: 14654627].

References for Chapter 13

1.
Sedrakyan A, Marinac-Dabic D, Normand SL, et al. A framework for evidence evaluation and methodological issues in implantable device studies. Med Care. 2010 Jun;48(6 Suppl):S121–8. [PubMed: 20421824]
2.
Cole P. The hypothesis generating machine. Epidemiology. 1993 May;4(3):271–3. [PubMed: 8512992]
3.
Yusuf S, Wittes J, Probstfield J, et al. Analysis and interpretation of treatment effects in subgroups of patients in randomized clinical trials. JAMA. 1991 Jul 3;266(1):93–8. [PubMed: 2046134]
4.
National Cancer Institute. Surveillance Epidemiology and End Results. [August 27, 2012]. http://seer.cancer.gov.
5.
Schneeweiss S, Gagne JJ, Glynn RJ, et al. Assessing the comparative effectiveness of newly marketed medications: methodological challenges and implications for drug development. Clin Pharmacol Ther. 2011 Dec;90(6):777–90. [PubMed: 22048230]
6.
Rothman K, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. New York: Lippincott Williams & Wilkins; 2008.
7.
Little RJA, Rubin DB. Statistical analysis with missing data. New York: John Wiley & Sons; 1987.
8.
Barzi F, Woodward M. Imputations of missing values in practice: results from imputations of serum cholesterol in 28 cohort studies. Am J Epidemiol. 2004 Jul 1;160(1):34–45. [PubMed: 15229115]
9.
Rubin DB. Multiple imputations in sample surveys: a phenomenological Bayesian approach to nonresponse. In: Proceedings of the Section on Survey Research Methods. American Statistical Association; 1978. pp. 20–34.
10.
Burton A, Altman DG. Missing covariate data within cancer prognostic studies: a review of current reporting and proposed guidelines. Br J Cancer. 2004 Jul 5;91(1):4–8. [PMC free article: PMC2364743] [PubMed: 15188004]
11.
Greenland S, Finkle WD. A critical look at methods for handling missing covariates in epidemiologic regression analyses. Am J Epidemiol. 1995 Dec 15;142(12):1255–64. [PubMed: 7503045]
12.
Hernan MA, Hernandez-Diaz S, Werler MM, et al. Causal knowledge as a prerequisite for confounding evaluation: an application to birth defects epidemiology. Am J Epidemiol. 2002 Jan 15;155(2):176–84. [PubMed: 11790682]
13.
Lash TL, Fox MP, Fink AK. Applying quantitative bias analysis to epidemiologic data. New York, NY: Springer; 2009.
14.
U.S. Food and Drug Administration; Office of Surveillance and Epidemiology; Center for Drug Evaluation and Research. Standards for Data Management and Analytic Processes in the Office of Surveillance and Epidemiology (OSE). Mar 3, 2008. [August 15, 2012]. http://www.fda.gov/downloads/AboutFDA/ReportsManualsForms/StaffPoliciesandProcedures/ucm082060.pdf.
15.
Swihart BJ, Caffo B, James BD, et al. Lasagna plots: a saucy alternative to spaghetti plots. Epidemiology. 2010 Sep;21(5):621–5. [PMC free article: PMC2937254] [PubMed: 20699681]
16.
Hennekens CH, Buring JE, Mayrent SL. Epidemiology in medicine. 1st ed. Boston: Little, Brown and Company; 1987.
17.
Kleinbaum DG, Kupper LL, Miller KE, et al. Applied regression analysis and other multivariable methods. Belmont, CA: Duxbury Press; 1998.
18.
Aschengrau A, Seage G. Essentials of epidemiology in public health. Sudbury, MA: Jones & Bartlett; 2003.
19.
Rosner B. Fundamentals of biostatistics. 5th ed. Boston: Duxbury Press; 2000.
20.
Higgins J, Green S; The Cochrane Collaboration. The Cochrane Handbook for Systematic Reviews of Interventions. 2006. [August 15, 2012]. http://www.cochrane.org/sites/default/files/uploads/Handbook4.2.6Sep2006.pdf.
21.
Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000 Sep;11(5):550–60. [PubMed: 10955408]
22.
Mangano DT, Tudor IC, Dietzel C, et al. The risk associated with aprotinin in cardiac surgery. N Engl J Med. 2006 Jan 26;354(4):353–65. [PubMed: 16436767]
23.
Cepeda MS, Boston R, Farrar JT, et al. Comparison of logistic regression versus propensity score when the number of events is low and there are multiple confounders. Am J Epidemiol. 2003 Aug 1;158(3):280–7. [PubMed: 12882951]
24.
Sturmer T, Joshi M, Glynn RJ, et al. A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods. J Clin Epidemiol. 2006 May;59(5):437–47. [PMC free article: PMC1448214] [PubMed: 16632131]
25.
Glynn RJ, Schneeweiss S, Sturmer T. Indications for propensity scores and review of their use in pharmacoepidemiology. Basic Clin Pharmacol Toxicol. 2006 Mar;98(3):253–9. [PMC free article: PMC1790968] [PubMed: 16611199]
26.
Schneeweiss S, Rassen JA, Glynn RJ, et al. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology. 2009 Jul;20(4):512–22. [PMC free article: PMC3077219] [PubMed: 19487948]
27.
Reeve BB, Potosky AL, Smith AW, et al. Impact of cancer on health-related quality of life of older Americans. J Natl Cancer Inst. 2009 Jun 16;101(12):860–8. [PMC free article: PMC2720781] [PubMed: 19509357]
28.
Brodie BR, Stuckey T, Downey W, et al. Outcomes with drug-eluting stents versus bare metal stents in acute ST-elevation myocardial infarction: results from the Strategic Transcatheter Evaluation of New Therapies (STENT) Group. Catheter Cardiovasc Interv. 2008 Dec 1;72(7):893–900. [PubMed: 19016465]
29.
Shuhaiber JH, Kim JB, Hur K, et al. Survival of primary and repeat lung transplantation in the United States. Ann Thorac Surg. 2009 Jan;87(1):261–6. [PubMed: 19101309]
30.
Grabowski GA, Kacena K, Cole JA, et al. Dose-response relationships for enzyme replacement therapy with imiglucerase/alglucerase in patients with Gaucher disease type 1. Genet Med. 2009 Feb;11(2):92–100. [PMC free article: PMC3793250] [PubMed: 19265748]
31.
Brookhart MA. Instrumental Variables for Comparative Effectiveness Research: A Review of Applications. Slide Presentation from the AHRQ 2008 Annual Conference (Text Version). Rockville, MD: Agency for Healthcare Research and Quality; Jan 2009. [August 14, 2012]. http://www.ahrq.gov/about/annualmtg08/090908slides/Brookhart.htm.
32.
Angrist JD, Imbens GW, Rubin DB. Identification of causal effects using instrumental variables. Journal of the American Statistical Association. 1996;91(434):444–55.
33.
Brookhart MA, Rassen JA, Schneeweiss S. Instrumental variable methods in comparative safety and effectiveness research. Pharmacoepidemiol Drug Saf. 2010 Jun;19(6):537–54. [PMC free article: PMC2886161] [PubMed: 20354968]
34.
Brookhart MA, Schneeweiss S. Preference-based instrumental variable methods for the estimation of treatment effects: assessing validity and interpreting results. Int J Biostat. 2007;3(1) Article 14. [PMC free article: PMC2719903] [PubMed: 19655038]
35.
Hernan MA, Robins JM. Instruments for causal inference: an epidemiologist's dream? Epidemiology. 2006 Jul;17(4):360–72. [PubMed: 16755261]
36.
Merlo J, Chaix B, Yang M, et al. A brief conceptual tutorial of multilevel analysis in social epidemiology: linking the statistical concept of clustering to the idea of contextual phenomenon. J Epidemiol Community Health. 2005 Jun;59(6):443–9. [PMC free article: PMC1757045] [PubMed: 15911637]
37.
Holden JE, Kelley K, Agarwal R. Analyzing change: a primer on multilevel models with applications to nephrology. Am J Nephrol. 2008;28(5):792–801. [PMC free article: PMC2613435] [PubMed: 18477842]
38.
Diez-Roux AV. Multilevel analysis in public health research. Annu Rev Public Health. 2000;21:171–92. [PubMed: 10884951]
39.
Leyland AH, Goldstein H. Multilevel modeling of health statistics. Chichester, UK: John Wiley & Sons, LTD; 2001.
40.
Varadhan R, Segal JB, Boyd CM, et al. A framework for the analysis of heterogeneity of treatment effect in patient-centered outcomes research. J Clin Epidemiol. 2013;66(8):818–25. [PMC free article: PMC4450361] [PubMed: 23651763]
41.
Palmer AJ. Health economics--what the nephrologist should know. Nephrol Dial Transplant. 2005 Jun;20(6):1038–41. [PubMed: 15840678]
42.
Neumann PJ. Using cost-effectiveness analysis to improve health care: opportunities and barriers. New York, NY: Oxford University Press; 2004.
43.
Edejer TTT, Baltussen R, Adam T, et al. Making choices in health: WHO guide to cost-effectiveness analysis. Geneva: World Health Organization; 2004.
44.
Drummond M, Stoddart G, Torrance G. Methods for the economic evaluation of health care programmes. 3rd ed. New York: Oxford University Press; 2005.
45.
Muennig P. Designing and conducting cost-effectiveness analyses in medicine and health care. New York: John Wiley & Sons, LTD; 2002.
46.
Haddix AC, Teutsch SM, Corso PS. Prevention effectiveness: a guide to decision analysis and economic evaluation. New York: Oxford University Press; 2003.
47.
Gold MR, Siegel JE, Russell LB, et al. Cost-effectiveness in health and medicine: the Report of the Panel on Cost-Effectiveness in Health and Medicine. New York: Oxford University Press; 1996.
48.
Raftery J, Roderick P, Stevens A. Potential use of routine databases in health technology assessment. Health Technol Assess. 2005 May;9(20):1–92. iii–iv. [PubMed: 15899148]
49.
Salas M, Hofman A, Stricker BH. Confounding by indication: an example of variation in the use of epidemiologic terminology. Am J Epidemiol. 1999 Jun 1;149(11):981–3. [PubMed: 10355372]
50.
Petri H, Urquhart J. Channeling bias in the interpretation of drug effects. Stat Med. 1991 Apr;10(4):577–81. [PubMed: 2057656]
51.
Ray WA. Evaluating medication effects outside of clinical trials: new-user designs. Am J Epidemiol. 2003 Nov 1;158(9):915–20. [PubMed: 14585769]
52.
Suissa S. Immortal time bias in observational studies of drug effects. Pharmacoepidemiol Drug Saf. 2007 Mar;16(3):241–9. [PubMed: 17252614]
53.
Haynes RB, Sackett DL, Guyatt GH, et al. Clinical epidemiology. 3rd ed. New York: Lippincott Williams & Wilkins; 2005.
54.
Andersen PK, Geskus RB, de Witte T, et al. Competing risks in epidemiology: possibilities and pitfalls. Int J Epidemiol. 2012 Jun;41(3):861–70. [PMC free article: PMC3396320] [PubMed: 22253319]
55.
Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology. 1999 Jan;10(1):37–48. [PubMed: 9888278]
56.
Hernan MA, Hernandez-Diaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004 Sep;15(5):615–25. [PubMed: 15308962]
57.
National Cancer Institute. SEER: Surveillance Epidemiology and End Results. [August 15, 2012]. http://seer.cancer.gov/about/factsheets/SEER_brochure.pdf.
58.
Bland JM, Altman DG. Survival probabilities (the Kaplan-Meier method). BMJ. 1998 Dec 5;317(7172):1572. [PMC free article: PMC1114388] [PubMed: 9836663]
59.
Kleinbaum DG, Klein M. Survival analysis: a self-learning text. 2nd ed. New York: Springer; 2005.
60.
Twisk JWR. Applied longitudinal data analysis for epidemiology – a practical guide. Cambridge, UK: Cambridge University Press; 2003.
61.
Newman SC. Biostatistical methods in epidemiology. New York: John Wiley & Sons, LTD; 2001.
