Norris SL, Holmer HK, Ogden LA, et al. Selective Outcome Reporting as a Source of Bias in Reviews of Comparative Effectiveness [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Aug.
Reporting biases arise when the dissemination of research findings is influenced by the nature and direction of the results,1 and can stem from processes acting within a study or at the level of the whole study. Within studies, researchers may report their findings selectively, choosing which outcomes and analyses to report based on the results. Reporting bias can thus result from selective outcome reporting (SOR), wherein only a subset of the outcomes originally measured and analyzed in a study are fully reported, based on the magnitude of the treatment effect or the statistical significance of selected outcomes.2
Kirkham and colleagues describe three main types of SOR:3 selective reporting of an entire study outcome (i.e., analyzed outcomes are not reported); selective reporting of a specific outcome (e.g., data for only selected followup intervals); and incomplete reporting of a specific outcome (e.g., reporting a nonsignificant result only as p>0.05). SOR can result in outcome reporting bias (ORB), which is the bias produced by choosing which outcomes to publish based on the results.2
Reporting bias arising from within-study processes can also result from the selection of analyses for reporting (SAR), which can lead to analysis reporting bias (ARB). Examples of SAR include selective reporting of data on subgroups, presentation of adjusted rather than unadjusted analyses, selection of as-treated rather than intention-to-treat analyses, selective approaches to the handling of missing data, analysis of continuously measured variables (outcomes or predictors in adjusted models) as categorical variables, and choice of the cut-point values used to define categories.4
The high prevalence of SOR and SAR among primary studies is well documented. This research has been conducted almost exclusively in randomized controlled trials (RCTs), usually by comparing study protocols submitted to regulatory or funding agencies with published outcomes.5–7 In a systematic review of five such cohorts (four of which contained only RCTs), Dwan and colleagues6 reported that in 40 to 62 percent of studies at least one primary outcome was changed, introduced, or omitted between the protocol and the publication, without documentation in a protocol amendment. In addition, statistically significant outcomes had higher odds of being fully reported than nonsignificant outcomes (range of odds ratios [OR], 2.2 to 4.7), suggesting ORB as well as SOR.
There are few studies on the prevalence of SOR and SAR among trials included in systematic reviews, and little is known about the effects of selective reporting on effect estimates and conclusions in such reviews. Kirkham and colleagues3 compared effect estimates reported in meta-analyses with estimates obtained from sensitivity analyses estimating the same effects in the absence of SOR (using the maximum bias bound approach8) for a sample of new systematic reviews published in the Cochrane Library. Of 81 reviews with a single meta-analysis of the review's primary outcome, 52 (64 percent) included one or more RCTs with a high suspicion of ORB. Of 25 reviews that could be assessed, the median percentage change in treatment effect between the reported estimate and the estimate adjusted for SOR was 39 percent (interquartile range, 18 to 67 percent). Of 42 meta-analyses with statistically significant results, 19 percent became nonsignificant after adjustment for ORB and 26 percent overestimated the treatment effect by 20 percent or more. Hart and coauthors9 reanalyzed meta-analyses of drug efficacy and harms after adding unpublished data from the U.S. Food and Drug Administration (FDA) and reported a change in the assessment of the drug's efficacy in 92 percent of the meta-analyses.
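As a purely illustrative aid to interpreting the percentage changes described above, the short sketch below computes the relative change between a reported pooled odds ratio and a reanalyzed estimate. The log-scale comparison and the example numbers are assumptions made for illustration; this is not an implementation of the maximum bias bound approach.8

```python
# Illustrative only: one way a "percentage change in treatment effect" between
# a reported pooled estimate and a SOR-adjusted reanalysis might be quantified.
# The log-odds-ratio scale and the example values are assumptions; this is not
# the maximum bias bound method used by Kirkham and colleagues.
import math

def pct_change_in_effect(reported_or: float, adjusted_or: float) -> float:
    """Relative change (percent) between two pooled odds ratios, compared on
    the log scale so that an OR and its reciprocal are treated symmetrically."""
    reported = math.log(reported_or)
    adjusted = math.log(adjusted_or)
    return abs(adjusted - reported) / abs(reported) * 100.0

# Hypothetical example: a reported pooled OR of 0.60 that shifts to 0.75
# after unreported outcome data are added.
print(f"{pct_change_in_effect(0.60, 0.75):.0f} percent change")  # about 44 percent
```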
Kirkham and colleagues3 developed a classification system for SOR, called Outcome Reporting Bias in Trials (ORBIT) (Table 1). This nine-category assessment tool is based on information in the trial publication(s) only and not on other information such as that contained in trial registries. This system focuses on outcomes that are missing or incompletely reported in reports of RCTs, and differentiates types of SOR based on the assessor’s certainty about whether the outcome was measured and analyzed, and the potential reasons for missing data.
Table 1
The Outcome Reporting Bias In Trials (ORBIT) study classification system for missing or incomplete outcome reporting in reports of randomized trials.
In the design stage of a study, outcomes can be selected based on anticipated results, and these selected outcomes can then be specified in the study protocol. By definition this is not SOR, as the selection of outcomes is not based on actual results; however, this approach to the design of primary studies can ultimately lead to biased results and conclusions in systematic reviews.
Publication bias, whereby an entire study is not published because of the nature or direction of the results,10, 11 is also an important issue for systematic reviewers. Studies with statistically significant results are more likely to be published than studies with "negative" or "null" findings,12 and positive findings are more likely to be published rapidly,13, 14 in English, in high-impact journals, and with multiple companion papers, and are more likely to be cited by others.5, 15 In this report we focus exclusively on the less well studied and recognized issues of within-study selective reporting, specifically SOR and SAR, and do not examine publication bias.
Systematic reviewers should assess the risk of all potential biases in included primary studies. Given that there are emerging data suggesting the presence of SOR and SAR, systematic reviewers need to consider the potential bias due to missing outcomes or analyses among the primary studies included in a review. In addition, review authors need to consider how SOR and SAR might affect the direction, magnitude, and precision of pooled effect estimates, as well as the conclusions about both benefits and harms in systematic reviews.
We are aware of no data on the effects of SOR and SAR in reviews of comparative effectiveness, and it is possible that selective reporting (SOR and/or SAR) has different frequencies and implications across various types of systematic reviews, interventions, and outcomes. For example, the availability of protocols may vary among types of interventions (e.g., drug vs. behavioral therapy) and studies (e.g., effectiveness vs. efficacy). In addition, some characteristics of comparative effectiveness research may affect the frequency and impact of SOR and SAR: comparative effectiveness reviews are more likely to include subjective measures of patient-important outcomes (e.g., symptoms, quality of life), head-to-head rather than placebo-controlled studies, and heterogeneous populations and interventions. Evidence on selective reporting across various study designs and outcomes may assist in the interpretation of summary effect measures and conclusions in reviews of comparative effectiveness.
The registration of studies, particularly RCTs, is an important tool for identifying all studies related to a key question in a comparative effectiveness review (CER). Registries are also a potential tool for assessing SOR and SAR. In the United States, the U.S. Food and Drug Administration Modernization Act of 1997 called for the creation of ClinicalTrials.gov and mandated registration of all efficacy drug trials for serious or life-threatening diseases and conditions conducted under FDA Investigational New Drug Application regulations.16 Each record in ClinicalTrials.gov includes summary information on the study protocol, patient recruitment status, and the location of the study site. Since September 2008, the FDA has also required that results be reported in ClinicalTrials.gov, although some exceptions are permitted.17
The World Health Organization (WHO) initiated a policy in 2006 requiring trial registration of all medical studies that test treatments on patients or healthy volunteers.18 WHO developed the International Clinical Trials Registry Platform (ICTRP), a global initiative that aims to make information about all clinical trials involving humans publicly available (www.who.int/ictrp/network/primary/en/index.html).18 The ICTRP operates a Search Portal, which provides access to information about ongoing and completed clinical trials from a number of different trial registries (See Appendix A).
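The sketch below illustrates, under simplifying assumptions, how outcomes prespecified in a registry record might be screened against the outcomes reported in a publication to flag candidates for SOR. The record structure, the outcome lists, and the crude string-matching rule are hypothetical; in practice, matching registered to published outcomes requires judgment because wording, time points, and measurement scales often differ.

```python
# A minimal, hypothetical screen for candidate selective outcome reporting:
# compare outcomes prespecified in a trial registry record with outcomes
# reported in the corresponding publication. Flagged items are candidates
# only and require manual review before being classified (e.g., with ORBIT).

def flag_candidate_sor(registered_outcomes, published_outcomes):
    """Return registered outcomes with no obvious counterpart among the
    outcomes reported in the publication (crude substring matching)."""
    published = [p.lower() for p in published_outcomes]
    missing = []
    for outcome in registered_outcomes:
        o = outcome.lower()
        if not any(o in p or p in o for p in published):
            missing.append(outcome)
    return missing

# Hypothetical registry record and publication outcome list.
registered = ["Change in HbA1c at 24 weeks",
              "Health-related quality of life (SF-36) at 24 weeks"]
published = ["Change in HbA1c at 24 weeks"]
print(flag_candidate_sor(registered, published))
# ['Health-related quality of life (SF-36) at 24 weeks']
```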
Objectives
This exploratory study set out to examine the frequency and effects of reporting biases, specifically SOR and SAR, in reviews of comparative effectiveness. The work focused specifically on the use of trial registries as a potential tool for assessing SOR and SAR. The goal of this study was to inform ongoing work in AHRQ's Evidence-based Practice Center (EPC) program to develop valid and efficient approaches and procedures for identifying SOR and SAR in studies included in systematic reviews and to assess the risk of bias due to missing data in CERs.
We defined outcomes rather broadly in order to encompass changes in outcome specification (e.g., a change in followup interval or from a continuous to a categorical variable) in our examination of SOR. We also wanted to examine the prevalence of the addition to a publication of outcomes that were not prespecified. We did not focus on the type of SOR in which outcomes are missing entirely from a publication that likely could or should have reported them, as exploration of that type of SOR would have markedly increased the scope of our project.
The specific objectives of this task order were to:
- describe the frequency of SOR and SAR within primary studies included in reviews of comparative effectiveness for outcomes of benefit;
- explore potential predictors of SOR and SAR in RCTs; and
- assess the reliability and validity of the ORBIT3 classification system for missing or incomplete outcome reporting and the ORBIT assessment of the risk of bias associated with different types of SOR.