
Shojania KG, Sampson M, Ansari MT, et al. Updating Systematic Reviews. Rockville (MD): Agency for Healthcare Research and Quality (US); 2007 Sep. (Technical Reviews, No. 16.)


3. Results

Results of Literature Search and Cohort Screening

Records for 651 potentially eligible systematic reviews were identified through searching. Achieving our target sample size of 100 reviews for the analysis of updating signals required assessing a total of 325 reviews for eligibility. We screened additional reviews to add a further 50 reviews to the set used in the analysis of publication time lags. (The analysis of time lags was less labor intensive, permitting a larger cohort for this part of the project.) A total of 165 records were excluded on the basis of the ACP Journal Club record, and 60 articles were excluded after assessment of the full article. Exclusion reasons are shown in Figure 2.

Figure 2. Flow of information through eligibility assessment.


Assessment of the New Evidence

Seventy-seven of the systematic reviews were assessed against new evidence found (if any) through searching and 23 were assessed against an updated systematic review. The updated review could be either an update performed by the authors of the original review, or a newer review on the same topic identified through the search of ACP Journal Club that would itself be eligible for inclusion in the cohort.

Characteristics of Included Studies

Composition of the Cohort

Each review in the cohort of 100 systematic reviews included a median of 13 studies (inter-quartile range: 8 to 21) and 2663 participants (inter-quartile range: 1281 to 8371). We were able to identify at least one new eligible trial for 85 systematic reviews, with a median of 4 new trials (inter-quartile range: 1 to 7) and 1160 new participants (inter-quartile range: 170 to 3689) per review. The five most common clinical content areas for the original systematic reviews were cardiovascular medicine (20), gastroenterology (13), neurology (11), infectious diseases (9), and respiratory system (9). Only 15 of the reviews evaluated the effects of medical devices or procedures; drug therapies provided the focus for the rest of the cohort (Table 1).

Table 1. Characteristics of the Cohort of 100 Systematic Reviews.


Signals for Updating

Of the 100 systematic reviews, a quantitative signal for updating occurred in 30 cases. Qualitative signals for the need to update occurred in 54 cases, including 8 that met criteria for a potentially invalidating change in evidence and 46 that met criteria for a major change. Qualitative signals had their basis in new systematic reviews in 23 cases (including explicit updates in 5), pivotal trials in 25 cases, and other sources in 6 cases (trials discussed in ACP Journal Club or UpToDate and advisory statements issued by the Centers for Disease Control and Prevention or the Food and Drug Administration). The primary event of interest, a quantitative signal involving the primary outcome of the original systematic review or qualitative signal for potentially invalidating or major changes in evidence, occurred for 57 reviews (57%; 95% CI: 47% to 67%) in the cohort (Table 2).

Table 2. Frequency of the Different Types of Signals for Updating.


Survival Analysis

Using publication date as ‘birth’, median event-free survival (i.e., time without a signal for updating) was 5.5 years (95% CI: 4.6–7.6). However, in 23 cases, signals for updating occurred in less than 2 years, and in 15 cases the signal occurred in less than 1 year. In 7 cases, a signal had already occurred at the time of publication of the original systematic review (in one case, 295 days prior to publication).

In univariate analyses, shorter survival was associated with a clinical content area of cardiovascular medicine (hazard ratio [HR] 2.58; 95% CI: 1.39 to 4.78; p = 0.003), an increase in the total number of patients by a factor of 2 or more (HR 1.79; 95% CI: 1.03 to 3.10; p = 0.04), and heterogeneity for at least one outcome in the original systematic review (HR 1.64; 95% CI: 0.94 to 2.86; p = 0.08) (Figures 4–7). Other potential predictors evaluated but not found to significantly affect survival included: the number of included patients in the original review, identification of publication bias in the original review, the inclusion of at least one trial published in the final 12 months of the search period, the identification of ongoing trials, and the publication type (Cochrane reviews versus reviews published as peer-reviewed journal articles) (Table 3).
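These comparisons are standard Kaplan-Meier and Cox proportional hazards analyses. The sketch below shows how such an analysis could be run, assuming a hypothetical per-review table with a time-to-signal (or censoring) column, an event indicator, and one candidate predictor; the toy data, the column names, and the use of the Python lifelines package are illustrative assumptions, not the report's actual methods or code.

```python
# Illustrative sketch (not the report's code): univariate survival analysis
# of time to a signal for updating, using a hypothetical per-review table.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical columns: years from publication to signal or censoring ('time'),
# whether a signal occurred ('event'), and one candidate predictor.
df = pd.DataFrame({
    "time":           [0.8, 2.1, 3.5, 5.5, 6.0, 7.2, 4.4, 1.2],
    "event":          [1,   1,   0,   1,   0,   1,   1,   0],
    "cardiovascular": [1,   1,   0,   0,   0,   1,   0,   0],
})

# Kaplan-Meier estimate of event-free survival (time without an updating signal).
km = KaplanMeierFitter()
km.fit(durations=df["time"], event_observed=df["event"])
print("median event-free survival:", km.median_survival_time_)

# Univariate Cox proportional hazards model for a single predictor,
# yielding a hazard ratio comparable to those reported in Table 3.
cox = CoxPHFitter()
cox.fit(df[["time", "event", "cardiovascular"]], duration_col="time", event_col="event")
cox.print_summary()
```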

Figure 3. Kaplan Meier plot showing the overall event free survival (time without a signal for updating) using publication date as ‘birth’; the immediate drop in survival at time zero reflects the 7 systematic reviews for which signals for updating had already occurred at the time of publication. Symbols represent censored cases.


Figure 4. Kaplan Meier plot showing the effect on survival of a doubling of the total number of patients (i.e., ratio of new total sample size to old total > 2), which occurred for 25% of systematic reviews in the cohort. Symbols represent censored cases.


Figure 6. Growth of controlled trials, RCTs, systematic reviews and clinical practice guidelines, 1988-2006.


Figure 7. Median number of trials and median number of trial participants included in systematic reviews by clinical area.


Table 3. Univariate Survival Analysis.


Table 4 shows the results of the multivariate analysis with adjustment for all variables shown. We also performed a stepwise multivariate analysis using a threshold of p ≤ 0.1 for variable selection and retention, which resulted in a model in which the following variables predicted decreased survival: a clinical content area of cardiovascular medicine (HR 3.26; 95% CI: 1.71 to 6.21; p = 0.0003), heterogeneity in the original systematic review (HR 2.23; 95% CI: 1.22 to 4.09; p = 0.01), and the ratio of the largest new trial to the largest trial from the original review (HR 1.08; 95% CI: 1.02 to 1.15; p = 0.01). Systematic reviews with more than the median of 13 included studies had increased survival (HR 0.55; 95% CI: 0.31 to 0.98; p = 0.04).

Table 4. Multivariate Analysis of Hazards.


In the logistic regression analysis, no variable significantly affected the risk of a signal for updating occurring within 2 years of publication, though trends towards increased risk were observed for cardiovascular topics (odds ratio 2.67; 95% CI: 0.88 to 8.1; p = 0.08) and for an increase in the total number of patients by a factor of at least 2 (odds ratio 2.29; 95% CI: 0.84 to 6.25; p = 0.11). A trend towards decreased risk of a signal for updating occurring within 2 years was seen for systematic reviews with more than the median of 13 included studies (odds ratio 0.38; 95% CI: 0.14 to 1.04; p = 0.06). Varying the time period of interest (e.g., predicting a signal for updating within 1 year or 3 years of publication) did not substantially alter the results.
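As a rough illustration of this kind of analysis, the sketch below fits a logistic regression for the binary outcome "signal within 2 years", assuming a hypothetical per-review table; the toy data, the column names, and the use of statsmodels are assumptions for illustration only, not the report's analysis code.

```python
# Illustrative sketch (not the report's code): logistic regression for the risk
# of an updating signal within 2 years, using hypothetical per-review data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    # 1 = signal for updating occurred within 2 years of publication
    "signal_2y":        [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
    # candidate predictors named after those discussed in the text
    "cardiovascular":   [1, 1, 0, 0, 1, 0, 1, 1, 0, 0],
    "patients_doubled": [1, 1, 1, 1, 0, 0, 0, 0, 1, 0],
})

X = sm.add_constant(df[["cardiovascular", "patients_doubled"]])
fit = sm.Logit(df["signal_2y"], X).fit(disp=False)

# Odds ratios and 95% confidence intervals, analogous to those reported above.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```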

Survival contrasting the significant predictors with the rest of the cohort is illustrated in Figures 5 through 7.

Figure 5. Kaplan Meier plot showing survival by clinical topic area of the original systematic review, stratified by cardiovascular (n=20 reviews) versus all other topics (n=80). Symbols represent censored cases.


When survival analyses were repeated using the end of the search period as ‘birth’, rather than the publication date, the median survival was 6.9 years (95% CI: 6.1 to 9.0), with a median time to a signal for updating of 4.3 years (inter-quartile range: 2.1–6.4 years). The signal for updating occurred within 1 year of the search in 4 cases, within 2 years of the search in 11 cases, and within 3 years of the search in 20 cases. Predictors of increased or decreased survival did not differ from those identified in the analysis that used publication date as ‘birth.’

Directions of Changes in Evidence and Expected Impact on Practice

Of the 18 reviews with changes in statistical significance, 13 involved a gain of statistical significance (i.e., a previously non-significant result became statistically significant) and 5 involved a loss of significance. For the 12 reviews with a relative change in effect size of at least 50%, 3 involved an increase in effect magnitude and 9 involved a decrease. However, because these outcomes could involve harms or benefits, we also characterized the expected impact on practice of the changes that gave rise to the signal for updating. Increases in the magnitude of benefit, increases in certainty about benefit, or identification of new patient populations that benefit from the treatment were classified as expected increases in therapeutic application. Decreases in the magnitude of benefit, decreased certainty about benefit, findings of increased harm, or other limitations on benefit were all classified as leading to decreased therapeutic application. Using these explicit criteria, use of the therapies evaluated would be expected to increase in 19 cases and decrease in 28 (Table 5).

Table 5. Changes in Certainty and Expected Impacts on Practice Associated with Signals for Updating.


We also assessed the impacts on certainty of results due to the changes in evidence that gave rise to signals for updating. We characterized changes in certainty using the 5-point scale that formed the basis for judging characterizations of effectiveness (Appendix A *). This scale included the following categories: definitely effective, probably or possibly effective, uncertain effectiveness, probably or possibly ineffective, and definitely ineffective. When the updated result lay further from the middle position (complete uncertainty) than the original result, we regarded certainty as having increased. Conversely, when the updated result lay closer to the middle position than the original result, we regarded certainty as having decreased. When the updated and original results were equally distant from the middle position (e.g., definitely effective and definitely ineffective), we did not regard certainty as having changed. Such cases would, however, count as impacting therapeutic use. As shown in Table 5, the majority of signals for updating involved increases in certainty (30 reviews) or no changes in certainty (50 reviews).
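The certainty rule described above can be expressed compactly. The sketch below is a minimal illustration, assuming a hypothetical numeric coding of the 5-point scale by distance from the middle ("uncertain") position; it is not code used in the report.

```python
# Illustrative sketch of the certainty rule described above, assuming a
# hypothetical numeric coding of the 5-point effectiveness scale.
SCALE = {
    "definitely effective": 2,
    "probably or possibly effective": 1,
    "uncertain effectiveness": 0,
    "probably or possibly ineffective": -1,
    "definitely ineffective": -2,
}

def certainty_change(original: str, updated: str) -> str:
    """Classify the change in certainty between the original and updated result."""
    before = abs(SCALE[original])   # distance from complete uncertainty, before
    after = abs(SCALE[updated])     # distance from complete uncertainty, after
    if after > before:
        return "increased certainty"
    if after < before:
        return "decreased certainty"
    # Equal distances (e.g., definitely effective -> definitely ineffective):
    # no change in certainty, though therapeutic use would still be affected.
    return "no change in certainty"

print(certainty_change("uncertain effectiveness", "probably or possibly effective"))
print(certainty_change("definitely effective", "definitely ineffective"))
```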

Search Performance

Across all reviews, 477 new reports were identified as eligible for inclusion in the systematic reviews. Of these, searching identified 430, and 47 were identified by the reviewers from among the studies included in meta-analyses retrieved by the subject search or related-item searches. Forty of these 47 nominations (85%) were indexed in MEDLINE; thus the searches retrieved 92% of the eligible new studies that were indexed in MEDLINE. Forty-three of the 47 missed studies came from systematic reviews for which we had searched CENTRAL; two of these 43 nominations were retrieved by the CENTRAL search.
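The recall figure quoted above can be reproduced from the counts given. The short calculation below is a worked check under the assumption that the 92% denominator is restricted to MEDLINE-indexed eligible studies; it is not code from the report.

```python
# Worked check of the recall figures quoted above (not code from the report).
eligible_total = 477          # new reports eligible across all reviews
found_by_search = 430         # retrieved by the surveillance searches
nominated = 47                # identified by reviewers, not by the searches
nominated_in_medline = 40     # nominations that were MEDLINE-indexed

# Recall restricted to MEDLINE-indexed eligible studies (the denominator the
# text appears to use): 430 retrieved out of 430 + 40 indexed studies.
indexed_eligible = found_by_search + nominated_in_medline
recall_indexed = found_by_search / indexed_eligible
print(f"recall among MEDLINE-indexed eligible studies: {recall_indexed:.1%}")  # ~91.5%, reported as 92%

# Recall against all eligible studies, indexed or not, for comparison.
print(f"recall among all eligible studies: {found_by_search / eligible_total:.1%}")  # ~90%
```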

In 59 cases, a single search strategy would have been sufficient to retrieve all eligible new studies found by any method. The Related Article RCT strategy was sufficient in 45 cases, the Clinical Query in 34, Core Clinical Journal RCTs in 9, CENTRAL in 14, and Citing RCTs in 3. Systematic reviews with multiple sufficient strategies tended to be those with few new studies. In 68 cases, searching Related Article RCT and Clinical Query together would have retrieved all studies, either because one strategy or the other was sufficient on its own or because the two in combination were sufficient.

The median number of records retrieved by the combination of Related Article RCT and Clinical Query by the date at which the signal for updating was detected was 71 (1st and 3rd quartiles: 25, 106). The median number of records retrieved by this combination and assessed as on topic was 7 (1st and 3rd quartiles: 4, 24).

To identify newer quantitative systematic reviews, the Related Article search and the subject search were limited to publication type of meta-analysis. The Related Article search recalled 45% of the meta-analyses found to be on topic and the subject search limited to the meta-analysis publication type identified 66% of the on topic meta-analyses. Of the records assessed by the review team, precision (proportion of assessed records found to be relevant) of the subject search was 0.38 and precision of the Related Article MA search was 0.36.

Performance of the surveillance searches in detecting signaling evidence. There were 27 final RCTs in cohort systematic reviews that were updated by searching and that had a qualitative signal of major or notable. Sixteen of these also had a quantitative signal and so formed the basis of the analysis of success in detecting RCTs added to the cumulative meta-analysis.

Six of the 27 final RCTs were identified by nomination and the remaining 21 were found by the searches. Three of the nominations were recent, high-profile trials that were used directly rather than by reviewing the candidate list. Because they appeared after the search date, for the purpose of evaluating search performance these three were tested manually to determine whether the searches would have retrieved them and whether they cited the cohort systematic review. The remaining 3 nominated final RCTs were identified through meta-analyses. There were 34 target studies added to cumulative meta-analyses: 27 were candidates found through searching, 5 were nominations, and 2 were meta-analyses found through searching from which the individual trial data could not be extracted. One of the nominations was a trial published after the search date and was manually tested to see which searches would have retrieved it. There were 9 signaling meta-analyses; one was nominated, and all others were identified through searching.

Other signaling evidence was used in only 5 reviews that were updated by search. Evidence included FDA advisories and expert opinion from UpToDate, and clinical trials that did not meet the criteria for pivotal trial. Three of these 5 sources were indexed in MEDLINE. Two were identified through searching.

Recall by each search of each type of evidence is shown in Table 6. Retrieval was best for RCT and MA evidence, but the search methods did retrieve some of the other evidence. Overall recall of final evidence stood at 0.65 for the subject search methods, 0.76 for related article methods, 0.55 for CENTRAL, and 0.17 for citing references. Across all applications in which the citing reference technique was tested, its strongest performance was in detecting other final evidence, with a recall of 0.33. One of the highest recall scores seen in this study was 0.89 for final RCTs found through related article searching. In general, the search methods showed somewhat higher recall for final evidence than for all evidence found relevant to the reviews, and the relative performance of the various methods was similar to that seen in the more general context.

Table 6. Recall of Signaling Evidence by the Surveillance Searches.


Most signaling information was found through searches. Of the 62 pieces of evidence contributing to signals, 57 (92%) were identified through the searches of MEDLINE. The additional material was either an included study in a systematic review identified through searching, a study already known to the team, or content from UpToDate.

Adequacy of MEDLINE coverage for surveillance. While there is general agreement that searching a single database is inadequate for developing the evidence base for systematic reviews,21 the adequacy of MEDLINE for detecting the need to update (surveillance searching) has not been previously examined. We consider the proportion of studies in the original reviews that were indexed in MEDLINE, the retention of indexed versus non-indexed studies in known updates from this sample, and the proportion of new relevant studies identified from any source that were indexed in MEDLINE.

Original systematic reviews: Of 2065 reports included in the original systematic reviews, 407 (25%) were not indexed in MEDLINE. MEDLINE-indexed publications accounted for 89% of the total number of participants (N) included in the original systematic reviews, although we could not identify values for N in all cases, and 40% of the cases with missing N were non-indexed studies. For reports where we could identify N, the median size was larger for MEDLINE-indexed studies than for non-indexed studies (116 participants [inter-quartile range: 43–365] vs. 80 participants [40–224]).

Updated Cochrane reviews. Of the original Cochrane reviews assessed through an update, 95 included studies were indexed in MEDLINE. Of these 95 studies, 4 (4%) were excluded by the authors in the update. Among the 56 studies not indexed in MEDLINE, 13 (23%) were excluded in the update (odds ratio 0.145, CI 0.045–0.472), suggesting that material from sources not indexed in MEDLINE may become less important over time.
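The odds ratio reported above follows from the 2×2 counts given (4 of 95 indexed studies excluded versus 13 of 56 non-indexed studies excluded). The calculation below is a worked check, assuming a standard Woolf (log odds ratio) 95% confidence interval; it is not the report's analysis code.

```python
# Worked check (not code from the report) of the odds ratio for exclusion of
# MEDLINE-indexed vs. non-indexed studies in updated Cochrane reviews.
import math

excluded_indexed, retained_indexed = 4, 95 - 4            # 4 of 95 indexed studies excluded
excluded_nonindexed, retained_nonindexed = 13, 56 - 13    # 13 of 56 non-indexed excluded

odds_ratio = (excluded_indexed / retained_indexed) / (excluded_nonindexed / retained_nonindexed)

# Approximate 95% CI from the standard error of the log odds ratio.
se_log_or = math.sqrt(1 / excluded_indexed + 1 / retained_indexed
                      + 1 / excluded_nonindexed + 1 / retained_nonindexed)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.3f} (95% CI {lower:.3f} to {upper:.3f})")  # ~0.145 (0.045 to 0.472)
```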

New studies. The indexing status and number of new studies assessed as eligible for inclusion in the reviews were considered. New studies included candidates identified through searching, nominations found through newer meta-analyses or known to our team, and studies included in explicit updates. Of 590 studies assessed as eligible, 33 (6%) were not indexed in MEDLINE. These reports accounted for 5503 of the 648531 new participants (N) identified (1%). All pivotal trials, i.e., those RCTs that by themselves provided a signal for updating, were indexed in MEDLINE (n=19).

Time Lags in the Production and Publication of Systematic Reviews

One hundred and forty-eight reports were included in this analysis, of which 91 (62%) were journal-published reviews, 36 (24%) were Cochrane reviews, and 21 (14%) were HTA reports. Of the HTA reports, 19 (90%) were AHRQ evidence reports. For Cochrane reviews, we used the most recently published version of the review.

The median time from the last reported search date to indexing was 75 weeks, with an inter-quartile range of 52 to 111 weeks. The lag from last search date to publication was shortest for Cochrane reviews (median 31 weeks; inter-quartile range: 22–65), longest for journal reviews (median 69 weeks; inter-quartile range: 44–92), and intermediate for technical reports (median 58 weeks; inter-quartile range: 45–74) (Kruskal-Wallis χ² = 11.24, p = 0.004). For reviews assessed for the need to update, 7 were found to have gone out of date by the time of publication.

Intermediate milestones of submission and acceptance dates were reported only for journal-published reviews, but they reveal what proportion of the total preparation time is under the control of investigators. For journal-published reviews where submission and publication dates were known (n=17), the median processing time was 41 weeks (inter-quartile range: 29–55 weeks); where acceptance and publication dates were known (n=55), the median processing time was 18 weeks (inter-quartile range: 13–27 weeks). The difference gives some indication of the time taken in peer review.

The 3 journal-published and 6 Cochrane reviews that reported more than one search date showed shorter lags from last search date to publication than those that did not appear to have updated the search. Eight HTA reviews reported updating their search and 11 did not, but the lags from most recent search to publication were essentially the same. Still, there was a significant overall effect by level of search updating (log-rank [Mantel-Cox] χ² = 7.253, df = 1, p = 0.007).

Publication lags were assessed in the main cohort to examine trends over time. There was an apparent trend towards decreased publication lags over time, with more recent publication dates having shorter publication lags (p = 0.12). However, this reflected sampling bias, in the sense that the only way for a recent article to be sampled for inclusion in the cohort was to have a short publication lag. In other words, systematic reviews initiated in, say, 2004 could only end up in the cohort if they had relatively short delays before publication. To avoid this bias, we analyzed the relationship between publication date and publication lag using only systematic reviews published before January 1, 2003. In this analysis, the trend towards shorter publication lags for more recent reviews disappeared completely, with a much smaller regression coefficient and a p-value > 0.8.
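As a rough illustration of this bias check, the sketch below regresses publication lag on publication date after restricting the sample to reviews published before January 1, 2003; the toy data, the column names, and the use of scipy are assumptions for illustration only, not the report's analysis.

```python
# Illustrative sketch (not the report's code) of the bias check described above:
# regress publication lag on publication date, restricted to reviews published
# before 2003 so recent reviews with long lags are not excluded by design.
import pandas as pd
from scipy.stats import linregress

# Hypothetical per-review data: publication date and lag (weeks) from last search.
df = pd.DataFrame({
    "pub_date": pd.to_datetime(["1998-03-01", "1999-07-15", "2000-02-01",
                                "2001-10-01", "2002-05-20", "2004-01-10"]),
    "lag_weeks": [88, 70, 95, 60, 77, 35],
})

cutoff = pd.Timestamp("2003-01-01")
subset = df[df["pub_date"] < cutoff]

# Express publication date as a decimal year for the regression.
years = subset["pub_date"].dt.year + subset["pub_date"].dt.dayofyear / 365.25
result = linregress(years, subset["lag_weeks"])
print(f"slope = {result.slope:.2f} weeks/year, p = {result.pvalue:.2f}")
```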

Publication Velocity

The patterns of evidence accumulation at the macro level (by clinical area) or at the micro level (within a particular systematic review) could help to identify or predict optimal update intervals. Velocity at the macro level is considered here.

The three clinical areas with the greatest representation in the cohort are cardiac and cardiovascular disease, neurology, and gastroenterology (Table 1). The growth of randomized controlled trials and other controlled trials appears linear over this time frame (Figure 8, Table 6).

Figure 8. Kaplan Meier plot showing survival stratified by the presence or absence of heterogeneity in the systematic review; statistical heterogeneity was identified as definitely or likely present for at least one outcome in 61 of the 100 reviews. Symbols represent censored cases.


Publication doubling times, calculated under the assumption of linearity, were consistent across clinical content areas and increased markedly as the time from the start of the series increased (Table 7). All series shown here begin in 1988. For example, approximately 981 new oncology trials were published in 1988. This number doubled in a little over two years (2.2 years, by 1990), whereas doubling the number of studies published in 2005 would take almost 20 years (18.4).
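The doubling times in Table 7 follow from the linear-growth assumption: if the annual number of new trials increases by a constant amount each year, the time to double the count observed in a given year is that count divided by the annual increment. The sketch below illustrates this with the oncology figures quoted above; the annual increment is back-calculated from the reported 2.2-year doubling time rather than taken from the report's fitted slope, so the 2005 value comes out close to, but not exactly at, the reported 18.4 years.

```python
# Worked illustration (not the report's calculation) of doubling times under
# the linear-growth assumption: if the annual number of new trials grows by a
# constant amount b per year, the time to double the count observed in year t
# is n(t) / b.
n_1988 = 981.0               # new oncology trials published in 1988 (from the text)
doubling_time_1988 = 2.2     # reported doubling time, in years, from 1988

# Back-calculate the annual increment implied by the 1988 doubling time;
# the report's own fitted slope may differ slightly.
b = n_1988 / doubling_time_1988            # ~446 additional trials per year

def doubling_time(year: int) -> float:
    """Years needed to double the annual count observed in `year`, assuming linear growth."""
    n_t = n_1988 + b * (year - 1988)
    return n_t / b

print(f"doubling time from 1988: {doubling_time(1988):.1f} years")   # 2.2 by construction
print(f"doubling time from 2005: {doubling_time(2005):.1f} years")   # ~19, vs. 18.4 reported
```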

Table 7. Linear Fit And Rate Of Growth By Clinical Area, Doubling Time In Years From Various Starting Years Assuming Linear Growth.


Policies and Practices of Agencies or Organizations that Fund or Conduct Systematic Reviews

Respondents. Of the 22 Internet surveys sent by email request, 19 organizations responded, yielding an overall response rate of 86%, with 17 groups having completed all mandatory questions. Responding organizations were from the U.S., Canada, U.K., and Australia. The majority of responding organizations identified themselves as producers of systematic reviews (13/19; 68%), with the remainder presenting as both funder and producer (4/19; 21%) or exclusively as funders (2/19; 11%). All groups surveyed indicated they were not-for-profit, and they were predominantly academic institutions (9/18; 50%) or national government agencies (5/18; 28%). Government research or infrastructure grants accounted for the majority of funding reported by groups (16/18; 89%), followed by non-profit academic or non-governmental organization funding (8/18; 44%), internal funding (6/18; 33%), and industry or private sector funding (6/18; 33%).

Main findings. The majority of organizations indicated they produced systematic reviews for the collective goal of both knowledge and decision support (14/19; 74%), while 21% (4/19) reported producing reviews for decision support alone. A large portion of respondents (15/19; 79%) viewed the importance of updating systematic reviews as high to very high. In spite of this, however, most organizations do not have a policy in place for updating (13/19; 68%). Nevertheless, of the groups with no formal update process, 54% (7/13) indicated that establishing a policy was important. Of those organizations that reportedly update, 68% (13/19) indicated they do so irregularly. Approximately two thirds (13/19; 68%) of respondents reported that at least 20% of the reviews they commission or produce are out of date, and 32% of respondents (6/19) reported that at least 50% of their reviews were out of date. With regard to accountability, respondents specified that the funder(s) of the original review (5/19; 26%), the authors of the original review (5/19; 26%), and the policymakers utilizing the evidence (3/19; 16%) were most responsible for ensuring systematic reviews are updated.

The use of formal methods to determine the need to update a systematic review was reported by 32% (6/19) of groups surveyed; 32% (6/19) reported the use of informal methods, while an additional 37% (7/19) reportedly use no methods. Looking more closely at updating strategies and practices, approximately half of the organizations do not engage in regular literature searches to identify new evidence (10/19; 53%). However, among the groups that search periodically, searching frequencies were quite variable: one group reported monthly searching, two groups reported searching every 12 months, one group indicated every two years, and one group stipulated that searching depended on the stability of the evidence base and the relevance of the topic to their audience. The two most frequently reported strategies used (sometimes, often, or always) to monitor the emergence of new evidence were contacting experts in the field (14/18; 78%) and conducting general literature searches, including electronic and hand searches (11/19; 58%). Additional surveillance strategies are listed below in Table 8.

Table 8. Monitoring Strategies.


When examining updating influences, individuals or groups that reportedly impact most (sometimes, often or always) upon an organization's decisionmaking process of whether to fund or conduct an update are as follows: external policymakers (16/19; 84%); the organization itself as the funder of the systematic review (15/19; 79%); experts in the field (13/19; 68%); and authors of the original review (13/19; 68%). Statisticians (1/19; 5%) and information specialists (3/19; 16%) were least likely to impact this decision. We also note that 26% of groups surveyed indicated that patients or consumer groups ‘sometimes’ influence this decisionmaking process. When assessing specific issues that may factor into determining ‘when’ to update, a formal request from a policy or healthcare decisionmaker is the most frequently cited factor by the majority of respondents (16/19; 84%) followed by the totality of all new evidence under consideration (13/17; 76%). See Table 9 for additional impact factors.

Table 9. Factors that Impact on Determining “When” to Update.


Additional updating influences include the notion that updating will have an effect on clinical practice (15/18; 83%), policy (13/18; 72%), the organization's credibility of being current (13/18; 72%), current public controversy or interest (12/18; 67%), or the cost utility of updating (12/18; 67%).

The data collected indicate that 56% of respondents (9/16) spend over 3 months of effort per review on activities related to updating systematic reviews, and 38% (6/16) reported expending over 6 months on updating. Looking more closely at the type of updating involvement, 72% of groups (13/18) report having ‘sometimes’ or ‘often’ been involved in doing full updates of all sections of a review. Two-thirds of respondents report ‘often’ or ‘sometimes’ having been involved in partial updates involving only certain sections of original reviews, while 61% of groups (11/18) report having been involved in conducting an entirely new review upon updating. Only 1 of 18 respondents (6%) reported ever having discussed the need for a future update in the text of a systematic review. Twenty-nine percent of groups (5/17) have withdrawn at least one systematic review from circulation after assessing the review as out of date. Approximately 78% of organizations (14/18) reported they are ‘often’ or ‘sometimes’ able to draw on the same people involved in the original review. When asked if they had been involved in updating systematic reviews done by others, 61% (11/18) of respondents indicated they had ‘seldom’ or ‘never’ done this, six groups had ‘sometimes’ done so, and only one group reported ‘often’ updating reviews done by others.

From the data gathered, it would seem that most organizations seldom or never utilize existing methods, such as cumulative meta-analytic approaches, when undertaking updating. The most frequently used approach is a time-based approach implying a pre-set updating frequency (7/18; 39%). (See Table 10.)

Table 10. Methods/Procedures.


Table 11. Major/Moderate Benefits to Harmonization.


Identifying recent literature published after the date of the last search but before completion of the final systematic review is quite common among those surveyed with 94% (17/18) of organizations reporting this happens ‘sometimes’ or ‘always’. Organizations also report that this information is usually incorporated as an addendum in the review (11/18; 61%), or as a formal revision to the analysis (9/18; 50%).

Updating Barriers. Respondents identified several elements of the original systematic reviews as moderate to serious barriers to updating, including the perceived need to redo data extraction (11/18; 61%), to change the original screening questions (9/18; 50%), to re-assess study quality (9/18; 50%), and to change the original search strategy (8/18; 44%). Further, respondents identified broader barriers (moderate to serious) to updating, including limited funding and resources (17/18; 94%), limited academic credit for updating work (11/18; 61%), and limited publishing formats (9/18; 50%). Given these barriers, it is notable that 72% (13/18) of organizations reported knowing a systematic review was out of date but being unable to commence updating due to lack of resources (e.g., funding, personnel, time).

Harmonization. By harmonization we mean that different groups involved in the funding, conduct, or reporting of systematic reviews would come together and harmonize on issues of conduct, reporting, and policy as they relate to updating systematic reviews. A large portion of respondents (11/19; 58%) indicated they ‘somewhat’ or ‘strongly’ support centralizing updating efforts across institutions or agencies that produce systematic reviews (i.e., harmonizing updating efforts). There were several perceived benefits (moderate to major) to participating in international harmonization efforts for updating, foremost being the more efficient use of existing resources (15/18; 83%). See Table 11 for a list of additional benefits.

Respondents also indicated several barriers to harmonization, including the possible diversion of an organization's funding resources (15/17; 88%) and insufficient human resources (14/17; 82%). In addition, 76% of those surveyed (13/17) viewed perceived delays in working across organizations as a moderate to serious barrier to collaboration, and 47% (8/17) viewed the possible diversion of the focus of research mandates within organizations as such a barrier. Obstacles aside, 84% (16/19) of the sample indicated they ‘somewhat’ to ‘strongly’ favored the development of a central registry of systematic reviews, similar to efforts within the clinical trials community.


Footnotes

*

Appendixes cited in this report are available electronically at http://www​.ahrq.gov/clinic/tp/sysrevtp​.htm.
