Discussion

Key Findings and Strength of Evidence

This systematic review updates the 2007 Evidence Report: Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Volume 6: Prevention of Healthcare-Associated Infections.3 AHRQ developed the 2007 Evidence Report in response to a 2003 Institute of Medicine (IOM) report, Priority Areas for National Action: Transforming Health Care Quality.1 Although reduction of healthcare-associated infections (HAI) is a top priority, the human and economic burden of these infections remains unacceptably high. Effective preventive interventions are known. The critical questions are how to achieve provider adherence to these preventive interventions and what impact adherence has on infection rates.

This report reviews 71 studies (61 articles) of quality improvement (QI) strategies targeting HAI: ten studies (nine articles) included in the 2007 review and 61 studies (52 articles) published subsequently. Four HAI were reviewed: central line-associated bloodstream infections (CLABSI), ventilator-associated pneumonia (VAP), surgical site infections (SSI), and catheter-associated urinary tract infections (CAUTI). Study designs consisted of controlled studies, interrupted time series, and simple before-after studies. We limited our synthesis to studies with statistical analyses that adjusted for confounding or secular trend, without which no causal inference can be made about the reported results. The quality characteristics, strengths, and weaknesses of each of the main types of study design are summarized in Table 3 of the Methods section.

Six categories of QI strategies were used in these 71 studies: audit and feedback; financial incentives, regulation, and policy; organizational change; patient education; provider education; and provider reminder systems. The most frequently used QI strategies were organizational change and provider education, used in 55 and 51 studies, respectively. Two QI strategies were rarely reported: financial incentives, regulation, and policy; and patient education. Most studies used multiple QI strategies; only 11 studies used a single QI strategy. Outcomes of interest to the review were adherence to various preventive interventions, change in infection rates, and costs and return on investment. Information was also sought on unintended consequences of QI strategies and on contextual factors that might influence the success of a strategy, but data were sparse. None of the included studies addressed QI strategies to improve adherence to preventive interventions or reduce HAI rates outside the hospital setting. One study focused on efforts to reduce CLABSI among dialysis patients, but it did not attempt to control for confounding or secular trend and therefore was not included in the main analysis. Most comparisons were with usual care; for 13 studies, the comparison was with a period of low-intensity intervention.45,49-60

In analyzing the body of evidence, we first synthesized the evidence within each infection and then across infections, in an effort to reach stronger and more generalizable conclusions. Evidence synthesis of QI strategies presented considerable challenges. It was not possible to disaggregate the data into individual strategies or to systematically assess the incremental effect of adding a particular strategy to a combination of strategies. Moreover, the studies used varied combinations of specific strategies, making it difficult to identify consistent combinations of QI strategies or to compare such combinations with each other.

As discussed in the Results, to develop a workable classification of QI strategy combinations, we hypothesized that organizational change and provider education constitute base strategies. Face validity is the initial rationale for this hypothesis: it is difficult to imagine how any preventive intervention could be implemented without at least some organizational change and/or provider education. In fact, 90 percent of studies reported at least one of these strategies, and it is plausible that the studies that did not report organizational change or provider education simply took these elements for granted. While this hypothesis is open to debate, the use of these strategies was so ubiquitous that, in practical terms, little distinction could be made between studies that used them and studies that did not.

We therefore refer to organizational change, provider education, or the combination of both as base strategies. This simplifying concept allowed us to organize our data into categories of strategies used in combination with the base case. These additional strategies are: (1) audit and feedback plus provider reminder systems, (2) audit and feedback only, and (3) provider reminder systems only. This approach mirrors common practice, which relies on bundles of QI strategies, and can therefore potentially yield practical insights.

Key Findings Across Infections

Our key findings, shown in Table 46, assess the evidence across all four infections, applying the framework for grading strength of evidence described in the Methods Guide for Effectiveness and Comparative Effectiveness Reviews.43,44 Only studies that reported on both adherence and infection rates are included in our key findings across infections: 30 of the 71 studies (42%). All comparisons are with usual care.

Table 46. Strength of evidence for combinations of QI strategies across healthcare-associated infections.

  • There is moderate strength of evidence that adherence and infection rates improve when these strategies are used with the base strategies:
    • Audit and feedback plus provider reminder systems
    • Audit and feedback alone
  • There is low strength of evidence that adherence and infection rates improve when this strategy is used with the base strategies:
    • Provider reminder systems alone
  • There is insufficient evidence that the base strategies alone (listed below) improve adherence and infection rates:
    • Organizational change plus provider education
    • Provider education only

We consider these to be our most robust and generalizable findings. Note that the strength-of-evidence analysis describes the evidence only for the specified combination of QI strategies compared with usual care. The conclusions do not imply that one combination is superior to another; we can only describe the strength of evidence available for each combination of QI strategies. Furthermore, the finding of moderate strength of evidence, given a heterogeneous and incomplete literature, is noteworthy and suggests that these implementation strategies can be effective in reducing HAI, which is the ultimate objective of the QI efforts.

Findings and Strength of Evidence for Each Infection

Table 47 displays moderate-strength findings for each infection. No QI strategy combination was rated as having high strength of evidence, which is not surprising since these studies are implemented in real-world settings and the strongest quasi-experimental designs and statistical analyses often were not used. Studies that reported on adherence rates, infection rates, or both were included to assess strength of evidence for QI strategies for each infection. Of the 71 studies, 26 addressed CLABSI, 19 addressed VAP, 15 addressed SSI, and 11 addressed CAUTI. For each infection, studies varied in the adherence rates reported and in whether significant improvements were found. Thus, Table 47 shows the specific adherence rates that improved with each combination of QI strategies.

Table 47. Combinations of QI strategies with moderate strength of evidence for each infection.

In general, within-infection results concur with the key results across infections displayed in Table 46. There is moderate strength of evidence to support audit and feedback plus provider reminder systems with the base strategies, as well as audit and feedback alone with the base strategies. Two differences are worth noting.

  1. Studies of CLABSI demonstrate the impact that differing approaches to a QI strategy can have on the outcome. Two studies compared simulation-based provider education with traditional provider education (lecture- and/or video-based education).26,28 Both found the simulation-based approach superior to the traditional method. This finding may warrant further confirmatory research.
  2. Studies of CAUTI focused on provider reminder systems as the main strategy for reducing the duration of urinary catheterization. There was moderate strength of evidence that provider reminder systems, alone or used with the base strategies, improve adherence related to the overall duration of urinary catheterization, compared with usual care. This finding was not generalizable to other infections given the current body of evidence.

Alternative interpretations, which cannot be empirically verified from the evidence available in this review, may account for these CLABSI and CAUTI results. Simulation-based provider education probably has a greater impact than traditional, more passive teaching techniques. Alternatively, however, simulation may have attributes similar to audit and feedback, and may even, under some circumstances, constitute a form of audit and feedback. With respect to CAUTI, might audit and feedback enhance the results of provider reminder systems? Moreover, in the setting of initiating urinary catheterization, which is addressed by only 3 of 11 studies, audit and feedback might be more relevant than provider reminders. These alternative interpretations remind us that it is important to understand the potential synergies among QI strategies and that certain QI strategies may be more effective for some preventive interventions than others. For example, if the preventive intervention is to remove hair from an incision site with clippers rather than a razor, simply removing all razors from the operating room may be quite effective; in this report, such a change would be designated an organizational change. But if the goal is to have clinical staff use proper sterile technique when inserting a central line, a checklist (a type of provider reminder system) might be more effective, along with ensuring that the insertion tray contains the recommended type of antiseptic.

Key Questions With Insufficient Data

As discussed in the Results section, several questions posed by this report could not be answered because the data were insufficient. These included the following:

  • How effective are QI strategies in reducing HAI in nonhospital settings, such as ambulatory surgical centers, freestanding dialysis centers, or long-term care facilities?
  • What is the impact of the following QI strategies: patient education; financial incentives, regulation, and policy; and promotion of self-management?
  • What are the savings or costs from the intervention, and what is the return on investment related to use of these QI strategies?
  • How does context impact the outcome and success of the QI strategies?

Findings in Relationship to What Is Already Known

2007 Evidence Report

Authors of the 2007 Evidence Report3 identified several strategies with potential benefit, but for which further research is needed: (1) Printed or computer-based reminders with use of automatic stop orders may reduce unnecessary urethral catheterization. (2) Printed or computer-based reminders may improve adherence to recommendations for timing and duration of surgical antibiotic prophylaxis. (3) Staff education using interactive tutorials (including video and Web-based tutorials) and checklists may improve adherence to insertion practices for placement of central venous catheters. (4) Staff education, including use of interactive tutorials, may improve adherence to interventions to prevent VAP. The report concluded that the evidence for QI strategies to improve preventive interventions for HAI was generally of suboptimal quality, and therefore they were unable to reach firm conclusions.3

The evidence on QI strategies to reduce HAI has improved since the 2007 report. The methodological quality of the included studies was better in the current report than in the previous report. Of the 42 studies included in the 2007 report, only 14 (33%) had a control group or statistical analysis more sophisticated than a two-group test. Of the 173 studies included in the current systematic review, 71 (41%) had a control group or more sophisticated statistical analysis. Both the absolute number of studies and the proportion of studies with statistical analysis to control for confounding and secular trend increased, and we were therefore able to reach firmer conclusions. We found moderate strength of evidence to support several combinations of strategies across all four infections, as well as for specific infections.

In addition, the number of relevant publications per year has increased. This trend continued while the systematic review was being prepared. An update of the literature search from April 2011 to January 2012 yielded 40 included articles, compared with 103 articles between January 2006 and April 2011.

The 2007 report concluded that:

Investigators should attempt to perform controlled trials of QI strategies when possible, and should report both adherence rates and infection rates. If performing a controlled trial is impractical, investigators should perform interrupted time series studies, involving reporting data for at least 3 time points before and after the intervention and formal time series statistical analysis.3

We are in complete agreement with the authors' conclusions. While the quality of the literature has improved markedly since 2007, the majority of studies published have designs and statistical analyses that are inadequate to support causal inference. Thus there is potential to mislead clinical and policy decision makers, with resulting harm to patients. Even where no active harm ensues, the opportunity cost of implementing ineffective programs is harm in itself. However, relatively small changes in research design and statistical analysis—such as collecting data for three time points before the intervention and using interrupted time series statistical analysis—could substantially strengthen the body of evidence.
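To make that recommendation concrete, the sketch below fits a minimal segmented (interrupted time series) regression to hypothetical monthly adherence data, with terms for the pre-existing trend, an immediate level change at the intervention, and a change in slope afterward. The data, variable names, and the choice of statsmodels with Newey-West (HAC) standard errors are illustrative assumptions, not an analysis drawn from any included study.

```python
# Minimal interrupted time series (segmented regression) sketch on simulated data.
# Assumption: 12 monthly adherence measurements before and after a QI intervention.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pre, n_post = 12, 12

df = pd.DataFrame({"month": np.arange(n_pre + n_post)})
df["post"] = (df["month"] >= n_pre).astype(int)                 # 1 after the intervention
df["months_since"] = np.where(df["post"] == 1, df["month"] - n_pre, 0)

# Simulated adherence (%) with a secular trend plus an intervention effect.
df["adherence"] = (60.0
                   + 0.5 * df["month"]          # pre-existing secular trend
                   + 8.0 * df["post"]           # immediate level change
                   + 1.0 * df["months_since"]   # change in slope after intervention
                   + rng.normal(0, 2, len(df)))

# Segmented regression; HAC (Newey-West) errors allow for autocorrelation over time.
its = smf.ols("adherence ~ month + post + months_since", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 2})
print(its.params)   # 'post' and 'months_since' estimate the intervention's effect
```

A simple pre/post comparison of means on the same data would attribute the entire secular trend to the intervention; the segmented model separates the two.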

Other Studies and Systematic Reviews

Comparing the results of this systematic review with the published literature is challenging. First, the effectiveness of quality improvement strategies may vary with the context and with the clinical issue being addressed. A number of other studies, including several Cochrane reviews, address efforts to change clinical practice regarding use of preventive services, implementation of guidelines, and prescribing patterns (e.g., Shojania and colleagues,200 Jamal and colleagues,32 Grimshaw and colleagues29). The impact may also vary with the context, and as this report concludes, the usable information available on context remains sparse. Another recent systematic review of the influence of context on the success of QI efforts in health care concludes that the current body of work is in an early stage of development (Kaplan and colleagues201). The present report relies on the concepts developed by a blue-ribbon panel of experts and reported in the RAND report.20 The definition and scope of QI strategies also vary (e.g., Scott,202 Grimshaw and colleagues29). For example, in this report, provider education is treated as a single entity, in accordance with the categorization used in the 2007 report.3 A report focusing on education might break it down into distribution of educational materials, educational meetings, and educational outreach visits (Grimshaw et al.29). As noted, examining the difference between simulation-based provider education and traditional provider education might also be worthwhile.

Finally, the approaches to analyzing individual QI strategies, such as audit and feedback, vary because these strategies often form part of a bundle of QI strategies. Should the focus be on individual strategies, even if they form part of a bundle of interventions that may vary from study to study? The advantage is the ability to focus on specific components that may be critical to the success of an intervention; the disadvantage is the inability to disentangle the effects of strategies grouped together. The focus on individual strategies was used in the 2007 report and a number of other studies.3,31 The current report groups bundles of similar strategies, which helps to account for interactions among individual QI strategies. However, because of the large number of different QI strategy combinations, the groupings are not entirely homogeneous, and there are fewer studies per combination. The results are also more challenging to present (e.g., base strategies plus audit and feedback or provider reminder systems). Nevertheless, we think this approach produces more valid and generalizable conclusions because it better allows for interaction effects. Furthermore, in actual practice, bundles of QI strategies are frequently used.

Focusing on the effectiveness of specific QI strategies, Grimshaw et al.,29 in a Cochrane review, examined guideline dissemination and implementation strategies and compared audit and feedback and reminders with other interventions. When audit and feedback alone was compared with no intervention, there were modest improvements in care (modest describes effect sizes >5% and ≤10%). For reminders, there were moderate improvements in care (moderate describes effect sizes >10% and ≤20%). There were fewer evaluations of audit and feedback than of reminders. In general, the impact of a QI strategy was larger when a single strategy was compared with a no-intervention control group than when one multifaceted intervention was compared with another. However, this review is based on literature published before 2000.

In another Cochrane review, Jamtvedt et al.31 focused on audit and feedback. The literature search covered randomized controlled trials up to January 2004. The authors distinguished among different types of audit and feedback and graded the overall intensity of the strategy. They concluded that audit and feedback can be effective in improving professional practice, although in some studies its use had a negative effect. As with other reviews, the target of the quality improvement effort varied from prevention to test ordering, prescribing, and general management of care. Their comparisons included any combination of QI strategies that contained audit and feedback, but they did not find audit and feedback alone to be more effective than audit and feedback included in a bundle of strategies.

De Vos and colleagues203 conducted a systematic review of controlled studies on the impact of implementing quality indicators in hospitals. They noted that they had found no prior overview of the implementation and impact of quality indicators in hospitals in general. The article included 21 studies from 1994 to 2008, none of which focused on efforts to reduce HAI. They grouped implementation strategies as follows: educational meetings, educational outreach, audit and feedback, development of a QI plan, and financial incentives. Supporting activities included distribution of educational material, involvement of local opinion leaders, and quality improvement facilities. Most studies used multiple implementation strategies, and the most commonly used strategy for incorporating information on quality indicators was audit and feedback. Process measures were reported more frequently than outcomes. Fourteen of the studies adjusted for potential confounders, and these showed less effectiveness than the unadjusted studies did. Studies showing effectiveness or partial effectiveness (defined by the proportion of improved measures) appeared to use audit and feedback together with other implementation strategies. Despite the differences between that article and the current systematic review, the findings appear to be congruent.

The systematic reviews of provider reminder systems tended to focus on specific types of reminder systems, e.g., on-screen point-of-care computer reminders (Shojania et al.200). Given the diversity of provider reminder systems used in the studies included in the current report, findings from these disparate types of reviews were not compared. One meta-analysis focused on reminder systems to reduce urinary tract infections and urinary catheter use in hospital patients.204 Based on a review of 14 articles published before September 2008, the authors found that the rate of CAUTI fell by 52 percent (p<.001) when reminders or stop orders were used. There was overlap between the studies included in that article and in the current report, but Meddings and colleagues204 appear to have included simple before-after studies. Their overall conclusion is therefore similar to that of the current report, but the size of the effect is likely to be overestimated.

Comparing the results of the current systematic review with other findings echoes the challenges encountered in conducting this review. Specifically, the heterogeneity encountered in articles on implementation of preventive interventions to reduce HAI is magnified in the literature on QI strategies in general. Overall, however, the results of the current review appear to be congruent with those of other studies and systematic reviews. They suggest that improvements in adherence and infection rates may result from use of audit and feedback as well as provider reminder systems.

Applicability

We believe that the results for QI strategies graded as having moderate strength of evidence are generalizable to other hospital settings. However, there is insufficient evidence to determine the extent to which specific contextual factors at an institution influence the success of QI strategies, and thus what heterogeneity of outcomes might exist across various hospital settings. The sustainability of a QI strategy is essential to success in clinical practice. Many studies did not measure outcomes for more than 1 year postintervention, the minimum needed to evaluate sustainability; thus the applicability of our results to long-term quality improvement is uncertain. Finally, given the paucity of published studies in nonhospital settings, these findings are not applicable to the success of QI strategies in other important health care settings, such as long-term care facilities.

For decision makers, knowledge of costs, benefits and trade-offs of implementing a new program is critical to the decision of whether to adopt a QI strategy. This review did not find evidence related to either downside risk from use of these QI strategies or to the return on investment (ROI) from implementing them. Lack of such evidence to inform decision making may also limit applicability of results.

Limitations of the Present Review

The limitations of this review are those that are generally encountered in assessments of complex interventions that are used in complex settings. Such studies are typically heterogeneous in design, setting, measurement, outcomes, and reporting. The resulting data are not amenable to quantitative analysis, thus requiring a qualitative approach. As noted above, evidence synthesis of QI strategies presented considerable challenges. To develop a workable classification of QI strategy combinations, we hypothesized that organizational change and provider education constitute base strategies and categorized other QI strategies that were combined with organizational change and provider education. As is often the case in qualitative research, the validity of the classification must be demonstrated by its application. Is it a useful way to organize the evidence? Most importantly, and as yet unknown, is the issue of whether the classification can be used prospectively to predict success of QI strategies.

Moreover, this review adopted the existing classification system of QI strategies, with whatever limitations may be inherent in this system. One limitation that is apparent to us is that the same strategy may in fact incorporate very different interventions. For example, as noted above, the different provider education methods may vary in intensity, and thus their potential effect on the outcomes of interest may vary. To this end, the recommendations of Shekelle and colleagues to advance the science of patient safety include “more detailed descriptions of interventions and their implementation.”25

Future Research Needs

This report is a systematic review of the evidence on the use of QI strategies to improve adherence to preventive interventions and to reduce rates of infection. We found both critical methodologic weaknesses in the literature and gaps in the evidence needed to address the Key Questions of our review. The most striking weakness of the literature was the prevalence of deficiencies in study design that precluded causal inference between the intervention and the reported results. A second weakness was the dearth of systematic collection and reporting of factors that may contribute to the generalizability of QI strategies, that is, information on context. Another weakness was the limited comparability of process measures across studies using the same preventive interventions. There are evidence gaps for the use of QI strategies in nonhospital settings, cost savings and return on investment, and sustainability of results.

Methodologic Weakness: Causal Inference

Studies selected for this systematic review used either an experimental design with a control group or a quasi-experimental design. Most studies of QI strategies are effectiveness studies, rather than efficacy studies. The interventions are implemented in a “real-world” setting rather than using the highly controlled designs that are the standard for efficacy studies.

The factors that can confound the results of such quasi-experimental studies are well known. Unlike most clinical trials, QI studies often do not follow the same patients over time; the patients in the baseline group may differ from those in the postintervention group with respect to their risk of infection. For example, infection risk may be subject to seasonal variation, and the demographic mix of patients may change. Infection rates may also change over time for reasons unrelated to the QI intervention. The trend may have begun before the QI intervention, perhaps related to national attention to reducing preventable infections. Also, other QI interventions may be introduced into the institution in overlapping time periods. Among the studies included in this report, most either did not explicitly state whether the QI strategies were “independent” of other QI efforts or indicated that other QI efforts were introduced. Regression to the mean may also account for more favorable outcomes observed postintervention. While regression analysis and time series analysis can control for confounders and time trends, the two-group tests that are commonly used cannot.
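As a hedged illustration of how regression can adjust for such confounders where a two-group test cannot, the sketch below fits a Poisson model to simulated monthly infection counts, using device-days as an exposure offset and adding season and case-mix covariates. All rates, values, and variable names (e.g., mean_apache as a severity proxy) are hypothetical and chosen only to show the mechanics.

```python
# Hypothetical illustration: adjusting monthly infection counts for seasonality and
# case mix with Poisson regression, rather than relying on a simple pre/post test.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = pd.DataFrame({"month": np.arange(24)})
months["post"] = (months["month"] >= 12).astype(int)           # QI intervention indicator
months["winter"] = ((months["month"] % 12) < 3).astype(int)    # crude seasonal indicator
months["mean_apache"] = rng.normal(18, 2, 24)                  # case-mix severity proxy
months["line_days"] = rng.integers(800, 1200, 24)              # exposure denominator

# Simulated counts: true rate per line-day falls after the intervention, rises in winter.
true_rate = 0.004 * np.exp(-0.4 * months["post"] + 0.1 * months["winter"])
months["infections"] = rng.poisson(true_rate * months["line_days"])

# Poisson regression of counts with a log(line-days) offset; covariates adjust for
# confounding by season and severity that a two-group test would ignore.
fit = smf.glm("infections ~ post + winter + mean_apache",
              data=months,
              family=sm.families.Poisson(),
              offset=np.log(months["line_days"])).fit()
print(np.exp(fit.params["post"]))   # adjusted rate ratio for the intervention
```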

Although 173 studies met initial selection criteria for this review, 102 were excluded from our synthesis because their statistical analyses did not control for confounding or secular trend. While these studies reported an association between QI strategy and outcome, they do not support causal inference and have a higher potential to introduce bias into the evidence base; studies that do not control for confounding are more likely to find significant results. Moreover, our classification of studies as using adequate statistical analyses was generous. For example, not all of the studies that used an interrupted time series design used appropriate statistical analysis to evaluate changes due to the intervention. In addition, adjustments for changes in the patient population were not always made.

Most publications did not provide analyses of statistical power. Although we limited inclusion to studies with a minimum of 100 participants, infection events could easily be too infrequent in the population to yield a sufficient sample to detect a difference. Baseline adherence and infection rates varied markedly among studies. Some studies had high adherence rates and others had low infection rates, resulting in ceiling and floor effects that would make it difficult to detect a statistically significant improvement. In one of the most rigorously designed studies, a cluster randomized trial, imbalances in baseline rates across study arms might explain the nonsignificant findings. While a cluster randomized trial is the most rigorous design suited to assessing QI strategies, randomization by institution limits the number of groups allocated to each arm and may result in imbalances between study arms.
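To illustrate the power problem, the back-of-the-envelope sketch below estimates the sample size needed to detect a drop in infection risk from 5 percent to 3 percent; the rates, alpha, and power are assumptions chosen only to show that the required sample is far larger than 100 participants per group.

```python
# Rough power/sample-size check for comparing two infection proportions.
# Assumed rates (5% -> 3%), alpha, and power are illustrative only.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05    # assumed pre-intervention infection rate
target_rate = 0.03      # assumed post-intervention infection rate

effect = proportion_effectsize(baseline_rate, target_rate)   # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided")

print(f"Approximate participants needed per group: {n_per_group:.0f}")
# With rare events, the answer is several hundred per group, far above 100.
```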

To advance the science of using QI strategies to reduce HAI, studies need to demonstrate a causal linkage between improved adherence and reduced infection rates. To evaluate this, studies should report both adherence to the preventive interventions and infection rates. Many studies reported only infection rates and a few reported only adherence, giving an incomplete picture of the outcome. A few studies presented rates of adherence but did not conduct a statistical analysis.

Although we found no specific suggestion of publication bias, there are some causes for concern. One is the lack of studies reporting negative results. While it is possible that efforts to implement QI measures do no harm, it is also possible that failures are not being reported. Another concern is that failure to use adequate study designs and statistical controls biases results toward significantly favorable findings, which creates an unwarranted impression of success. Our findings suggest that journals could improve statistical review of QI strategy studies, which would in turn strengthen the quality of evidence available to decision makers and provide an incentive for investigators to conduct more rigorous studies.

Finally, the circumstances under which studies of QI strategies are conducted merit a thoughtful approach to improving the development of evidence. Conducting a rigorous evaluation of a complex intervention is a challenging undertaking. Most studies of QI strategies are effectiveness studies, rather than efficacy studies, and the usual call to improve the quality of evidence by conducting randomized controlled trials may not pertain here. A more productive approach would be to improve the quality of quasi-experimental studies by (1) using more rigorous study designs, (2) taking into account secular trends and potential confounders, and (3) reporting and analyzing both adherence and infection rates. The enthusiasm of institutions and institutional collaborations might be harnessed by creating tool kits and accessible consultation so that organizations engaged in QI initiatives can make a meaningful contribution to the accumulation of knowledge about successful QI strategies.

Methodologic Weakness: Collection and Reporting of Factors That May Influence Generalizability

Shekelle and colleagues recently proposed a framework to advance the science of patient safety.25 Their recommendations include “greater use of theory and logic models, more detailed descriptions of interventions and their implementation, enhanced explanation of intended and desired outcomes, and better description of measurements of context and how context influences interventions.” Although we abstracted contextual factors from the studies included in this review, the available data were too disparate to be synthesized in a meaningful fashion. This is not surprising, as available studies largely pre-date the dissemination of recommendations to advance the science of patient safety. Presently, the approach to collecting and reporting on factors that may influence generalizability is not sufficiently standardized to produce a robust evidence base. We suggest that availability of tool kits and consultation for organizations undertaking QI evaluation studies could assist this effort.

Above all, however, we caution that efforts to systematize the framework cannot succeed unless the pervasive methodologic weaknesses related to causal inference that we describe above are remedied. The most granular and reproducible descriptions of contextual factors are useless if overlaid on studies whose reported QI strategy outcomes are unreliable.

Methodologic Weakness: Comparability in Audit of Process Measures

The studies included in the current report had to implement QI strategies that addressed evidence-based preventive interventions. While there is a very clear list of these preventive interventions, the way in which adherence was measured varied greatly from study to study. This inconsistency reduces the comparability of process measures across studies. Another potential confounder is that studies varied in how preventive interventions were implemented, for example, in the frequency of oral care for ventilated patients or the use of antibiotic-impregnated catheters. Adopting more standardized approaches to measuring adherence would strengthen the body of evidence.

Evidence Gaps

Three key evidence gaps merit future research.

Only one study, which did not control for confounding or secular trend, was found on the use of QI strategies to reduce HAI in nonhospital settings such as ambulatory surgical centers, freestanding dialysis centers, and long-term care facilities. Yet, a substantial proportion of health care is delivered outside hospitals.

The studies on using QI strategies to reduce HAI provided very limited data about implementation costs, cost savings from implementation, and return on investment from implementing the QI strategies. The data related to savings are further weakened by the number of simple before-after studies that present information on cost savings when the impact on infection rates is uncertain. One reason for not adopting successful QI strategies is that they are “too expensive,” so the lack of data related to this measure is a major deficiency.

Finally, there are limited data on the long-term durability and sustainability of the impact of QI strategies. Many studies followed outcomes for only 1 year postintervention or less. To eliminate, or at least reduce, HAI, QI strategies must show sustained effectiveness over several years.

Conclusions

The magnitude of the potential harm caused by HAI and their ubiquity, as well as the recent reduction in infection rates, highlight the importance and feasibility of identifying the most effective ways for healthcare institutions to prevent them. Although the practical challenges in measuring the effectiveness of different strategies in a real-world environment are many, the results of this systematic review demonstrate that it can be done and that practical lessons can be gleaned even from a less than ideal evidence base. In this update of the 2007 AHRQ report (Ranji and colleagues),3 there is moderate strength of evidence across all four infections examined that both adherence and infection rates improve when either audit and feedback plus provider reminder systems or audit and feedback alone is added to the base strategies of organizational change and provider education. There is low strength of evidence that adherence and infection rates improve when provider reminder systems alone are added to the base strategies. There is insufficient evidence on reduction of HAI in nonhospital settings, on the costs and savings of QI strategies, and on the nature and impact of the clinical context. Relatively modest improvements in research approaches have the potential to substantially strengthen the evidence and provide further insight into how to protect patients from healthcare-associated infections.