NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.


Updating Systematic Reviews

Technical Reviews, No. 16

Investigators: Kaveh G. Shojania, MD, Margaret Sampson, MLIS, Mohammed T. Ansari, MBBS, MMedSc, MPhil, Jun Ji, MD, MHA, Chantelle Garritty, BA, DCS, MSc(c), Tamara Rader, MLIS, and David Moher, PhD.

University of Ottawa Evidence-based Practice Center, Ottawa, Canada
Rockville (MD): Agency for Healthcare Research and Quality (US); 2007 Sep.
Report No.: 07-0087

Structured Abstract


Context:

Systematic reviews are often advocated as the best source of evidence to guide both clinical decisions and healthcare policy, yet we know very little about the extent to which they require updating.


Objectives:

To estimate the average time to changes in evidence sufficiently important to warrant updating systematic reviews (referred to as the survival time), and to identify any characteristics that increase or decrease these survival times.

To determine the performance characteristics of various surveillance protocols to identify important new evidence.

To assess the utility of rates and patterns of growth for evidence within clinical areas as predictors of updating needs.

To establish typical timeframes for the production and publication of systematic reviews in order to assess the extent to which they impact survival time (e.g., whether or not delays in the peer review and publication processes substantially shorten the time in the public domain before new evidence requires updating of a given systematic review).

To characterize current updating practices and policies of agencies that sponsor systematic reviews.


Design:

Survival analysis for a cohort of 100 quantitative systematic reviews that were indexed in ACP Journal Club with an accompanying commentary; supplementary sample of Cochrane reviews meeting the same criteria and AHRQ evidence reports; internet-based survey of agencies that sponsor or undertake systematic reviews.
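The survival analysis above treats each review's time from publication to its first updating signal as the event time, with reviews that never showed a signal censored at the end of follow-up. A minimal Kaplan-Meier sketch on toy data illustrates the idea; the function names and data are illustrative, not taken from the report:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival steps.
    times: years from publication to first updating signal (or to censoring);
    events: 1 if a signal was observed, 0 if the review was censored."""
    # Sort by time; at tied times, process events before censorings.
    ordered = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk = len(ordered)
    surv = 1.0
    steps = []
    for t, e in ordered:
        if e:  # a signal occurred: the survival curve drops
            surv *= (at_risk - 1) / at_risk
            steps.append((t, surv))
        at_risk -= 1  # censored reviews simply leave the risk set
    return steps

def median_survival(steps):
    """First time at which survival free of a signal reaches 0.5 or below."""
    for t, s in steps:
        if s <= 0.5:
            return t
    return None  # median not reached within follow-up

# Toy example: five reviews, one censored at 4 years.
steps = kaplan_meier([1.0, 2.5, 4.0, 5.5, 7.0], [1, 1, 0, 1, 1])
# median_survival(steps) -> 5.5
```

The censoring step matters: simply averaging observed signal times would bias the estimate downward, because reviews still signal-free at the search date carry information about longer survival.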


Sample:

Eligible reviews evaluated the clinical benefit or harm of a specific drug (or class of drugs), device, or procedure, were originally published between 1995 and 2005, and included at least one quantitative synthesis result in the form of an odds ratio, relative risk, risk difference, or mean difference. For the survey of updating policies and practices, we contacted 22 organizations that are well known to produce or fund systematic reviews (including 12 AHRQ Evidence-based Practice Centers).

Data sources:

Systematic reviews indexed in ACP J Club and eligible new trials identified through five search protocols.


Measurements:

Quantitative signals for updating consisted of changes in statistical significance or a relative change in effect magnitude of at least 50 percent involving one of the primary outcomes of the original systematic review or any mortality outcome. These signals were assessed by comparing the original meta-analytic results with updated results that included eligible new trials. Qualitative signals included substantial differences in characterizations of effectiveness, new information about harm, emergence of superior alternative treatments, and important caveats about the previously reported findings that would affect clinical decisionmaking.
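The two quantitative criteria (a flip in statistical significance, or a relative change in effect magnitude of at least 50 percent) can be expressed as a simple check. This is a sketch under stated assumptions, not the report's actual procedure: the function names are invented, and the relative change is computed on the raw effect scale (for ratio measures such as odds ratios, analysts may instead work on the log scale):

```python
def significance_changed(old_ci, new_ci, null_value=1.0):
    """True if the confidence interval's relation to the null value flips.
    old_ci / new_ci are (lower, upper) tuples; null_value=1.0 suits ratio
    measures like odds ratios, 0.0 would suit risk or mean differences."""
    old_sig = old_ci[0] > null_value or old_ci[1] < null_value
    new_sig = new_ci[0] > null_value or new_ci[1] < null_value
    return old_sig != new_sig

def relative_change_at_least(old_effect, new_effect, threshold=0.50):
    """True if the pooled effect changed by at least 50% relative to the
    original estimate (computed on the raw scale; an assumption here)."""
    return abs(new_effect - old_effect) / abs(old_effect) >= threshold

def quantitative_signal(old_effect, old_ci, new_effect, new_ci):
    # A signal occurs if either criterion is met for a primary outcome
    # of the original review or for any mortality outcome.
    return (significance_changed(old_ci, new_ci)
            or relative_change_at_least(old_effect, new_effect))
```

For example, an odds ratio moving from 0.90 (95% CI 0.75 to 1.08) to 0.70 (95% CI 0.58 to 0.85) signals because the updated interval excludes the null, even though the point estimate shifted by well under 50 percent.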

The primary outcome of interest was the occurrence of either a qualitative or quantitative signal for updating the original systematic review. We also assessed the occurrence of a signal for updating within 2 years of publication, as some sources (e.g., The Cochrane Library) currently recommend updating systematic reviews every two years.

The survey measured existing updating policies, current strategies in use, and additional perceptions related to the updating process from the 19 organizations that responded.


Results:

The cohort of 100 systematic reviews included a median of 13 studies (inter-quartile range: 8 to 21) and 2663 participants (inter-quartile range: 1281 to 8371) per review. A qualitative or quantitative signal for updating occurred for 57 systematic reviews. Median survival free of a signal for updating was 5.5 years (95% confidence interval [CI]: 4.6 to 7.6), but in 23 cases (95% CI: 15% to 33%) a signal for updating occurred in less than 2 years, and in 15 cases (95% CI: 9% to 24%) the signal occurred in less than 1 year. In 7 cases (95% CI: 3% to 14%), a signal had already occurred at the time of publication of the original review. Shorter survival was associated with cardiovascular medicine (hazard ratio 3.26, 95% CI: 1.71 to 6.21; p = 0.0003), heterogeneity in the original review (hazard ratio 2.23, 95% CI: 1.22 to 4.09; p = 0.01), and having a new trial larger than the previous largest trial (hazard ratio 1.08, 95% CI: 1.02 to 1.15; p = 0.01). Systematic reviews with more than the median of 13 included studies had increased survival (hazard ratio 0.55; 95% CI: 0.31 to 0.98; p = 0.04). No feature of the original review significantly predicted a signal for updating occurring within 2 years of publication.

The median time from the final search date to indexing was 1.4 years (inter-quartile range: 0.96 to 2.0 years). Lags from search to publication were shortest for Cochrane reviews (median 0.6 years; inter-quartile range: 0.42 to 1.25) and longest for journal reviews (median 1.3 years; inter-quartile range: 0.84 to 1.77), with technical reports falling in between (median 1.1 years; inter-quartile range: 0.87 to 1.42) (Kruskal-Wallis χ2 = 11.24, p = 0.004).

Of the five search protocols tested for their effectiveness in identifying eligible new trials, the combination with the highest recall and lowest screening burden paired the strategy that used the PubMed Related Articles feature (applied to the three newest and three largest trials included in the original review) with the strategy that submitted a subject search (based on population and intervention) to the Clinical Queries filter for therapy. This combination identified most new signaling evidence, with a median screening burden of 71 new records per review.
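For readers automating this kind of surveillance, both winning strategies map onto NCBI's public E-utilities: Related Articles is exposed through ELink's pubmed_pubmed link, and the broad therapy Clinical Queries filter can be attached to a subject search with PubMed's Therapy/Broad[filter] tag. The sketch below only constructs the request URLs; the function names are illustrative assumptions, and this is not necessarily how the report's searches were actually run:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def related_articles_url(pmid):
    """ELink URL returning PubMed's Related Articles for one trial; the
    protocol above applies this to the three newest and three largest
    trials included in the original review."""
    params = {"dbfrom": "pubmed", "db": "pubmed",
              "id": str(pmid), "linkname": "pubmed_pubmed"}
    return f"{EUTILS}/elink.fcgi?{urlencode(params)}"

def therapy_query_url(population_and_intervention):
    """ESearch URL combining a subject search (population AND intervention)
    with PubMed's broad Clinical Queries filter for therapy."""
    term = f"({population_and_intervention}) AND (Therapy/Broad[filter])"
    return f"{EUTILS}/esearch.fcgi?{urlencode({'db': 'pubmed', 'term': term})}"
```

For example, `therapy_query_url("atrial fibrillation AND warfarin")` yields an ESearch request restricted to the sensitive therapy filter; the union of the two result sets is what would then be screened against the review's eligibility criteria.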

For the survey of organizations involved in producing or funding systematic reviews, we received responses from 19 (86%) of the 22 organizations contacted. Approximately two thirds (68%) of respondents identified themselves as producers of systematic reviews and an additional 21% identified themselves as both funders and producers of systematic reviews. Only two respondents (11%) characterized themselves solely as funders of systematic reviews.

Approximately 80% of respondents characterized the importance of updating as ‘high’ or ‘very high’, although 68% acknowledged not having any formal policies for updating in place. Approximately two thirds (13/19; 68%) of respondents reported that over 20% of the reviews they commission or produce are out of date, and 32% of respondents (6/19) reported that at least 50% of their reviews were out of date. Barriers to updating identified by respondents included lack of appropriate methodologies, resource constraints, lack of academic credit, and limited publishing formats. The majority of the sample (16/19; 84%) indicated they ‘somewhat’ to ‘strongly’ favor the development of a central registry, analogous to efforts within the clinical trials community, to coordinate updating activities across agencies and review groups.


Conclusions:

In a cohort of high-quality systematic reviews directly relevant to clinical practice, signals for updating occurred frequently and within relatively short timelines. A number of features significantly affected survival, but none significantly predicted the need for updating within 2 years.

Currently, definitive recommendations about the frequency of updating cannot be made. A blanket schedule, such as updating every 2 years, will miss a substantial number of important signals for updating that occur within shorter timelines, while more frequent updating will expend substantial resources. Methods for identifying reviews in need of updating based on surveillance for new evidence hold more promise than relying on features of the original review to identify reviews likely to need updating within a short time, but such approaches will require further investigation. Several of the methods tested were feasible, yielding good recall of relevant new evidence with modest screening burdens.

The majority of organizations engaged in the funding or production of systematic reviews view the importance of updating systematic reviews as high to very high. Despite this recognition, most organizations report having no formal policy in place for updating previous systematic reviews. Slightly less than half of organizations performed periodic literature searches to identify new evidence, but searching frequencies varied widely, from monthly to every two years.

If systematic reviews are to achieve their stated goal of providing the best evidence to inform clinical decision making and healthcare policy, issues related to identifying reviews in need of updating will require much greater attention. In the meantime, publishers of systematic reviews should consider a policy of requiring authors to update searches that were performed more than 12 months before submission. Users of systematic reviews also need to recognize that important new evidence can appear within short timelines. When considering the results of a particular systematic review, users should search for more recent reviews or trials and determine whether the results are consistent with the previous review.


Investigator: Steve Doucette, MSc.

Prepared for: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services. Contract No. 290-02-0021. Prepared by: University of Ottawa Evidence-based Practice Center, Ottawa, Canada.

Suggested citation:

Shojania KG, Sampson M, Ansari MT, Ji J, Garritty C, Rader T, Moher D. Updating Systematic Reviews. Technical Review No. 16. (Prepared by the University of Ottawa Evidence-based Practice Center under Contract No. 290-02-0017.) AHRQ Publication No. 07-0087. Rockville, MD: Agency for Healthcare Research and Quality. September 2007.

No investigators have any affiliations or financial involvement (e.g., employment, consultancies, honoraria, stock options, expert testimony, grants or patents received or pending, or royalties) that conflict with material presented in this report.

This report is based on research conducted by the University of Ottawa Evidence-based Practice Center (EPC) under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract No. 290-02-0021). The findings and conclusions in this document are those of the author(s), who are responsible for its content, and do not necessarily represent the views of AHRQ. No statement in this report should be construed as an official position of AHRQ or of the U.S. Department of Health and Human Services.

The information in this report is intended to help clinicians, employers, policymakers, and others make informed decisions about the provision of health care services. This report is intended as a reference and not as a substitute for clinical judgment.

This report may be used, in whole or in part, as the basis for the development of clinical practice guidelines and other quality enhancement tools, or as a basis for reimbursement and coverage policies. AHRQ or U.S. Department of Health and Human Services endorsement of such derivative products may not be stated or implied.


540 Gaither Road, Rockville, MD 20850.

Bookshelf ID: NBK44099. PMID: 20734512.
