Ranji SR, Shetty K, Posley KA, et al. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 6: Prevention of Healthcare–Associated Infections). Rockville (MD): Agency for Healthcare Research and Quality (US); 2007 Jan. (Technical Reviews, No. 9.6.)

2. Methods

Scope

This report focuses on healthcare-associated infections contracted in acute care hospitals. Specifically, we focused on prevention of four types of infections that collectively account for more than 80 percent of all HAIs in hospitals10: surgical site infections (SSI), central line-associated bloodstream infections (CLABSI), ventilator-associated pneumonia (VAP), and catheter-associated urinary tract infections (CAUTI). Prevention of these HAIs has become increasingly important not only because of the burden of disease, but also because an increasing number of states are mandating, or considering mandating, public reporting of rates of some or all of these infections. National organizations such as the Centers for Disease Control and Prevention5 and the Institute for Healthcare Improvement7 also recommend focusing on prevention of these HAIs as a high-impact method of reducing iatrogenic morbidity and mortality.

Definitions of QI Terms Used in This Report

We used quality improvement terminology in accordance with prior volumes of the Closing the Quality Gap series, as follows:

  • Quality gap: The difference between health care processes or outcomes observed in practice and those potentially achievable on the basis of current professional knowledge. The difference must be attributable in whole or in part to a deficiency that could be addressed by the health care system.
  • Quality improvement strategy: Any intervention strategy aimed at reducing the quality gap for a group of patients representative of those seen in routine practice.
  • Quality improvement target: The outcome, process or structure that the QI strategy targets, with the goal of reducing the quality gap.

Classification of Interventions and Quality Improvement Strategies

The intervention(s) used in a study sometimes included more than one QI strategy. Each intervention was characterized in terms of the QI strategy (or strategies) employed. Interventions containing two or more different QI strategies (as defined by the categorization listed below) were considered multifaceted interventions. For example, an intervention using (a) audit and feedback and (b) clinician education was defined as a multifaceted intervention, using two QI strategies.
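
To make this classification concrete, the following minimal Python sketch labels an intervention as multifaceted when it combines two or more distinct QI strategies. The strategy names used here are an abridged, illustrative stand-in for the full taxonomy in Table 3, not the report's exact category labels.

# Illustrative sketch only; strategy names are an abridged stand-in for
# the full taxonomy of QI strategies listed in Table 3.
QI_TAXONOMY = {
    "provider education",
    "audit and feedback",
    "provider reminder systems",
    "organizational change",
    "patient education",
    "financial incentives",
}

def classify_intervention(strategies):
    """Label an intervention by the number of distinct QI strategies it uses."""
    unknown = strategies - QI_TAXONOMY
    if unknown:
        raise ValueError("Strategies not in taxonomy: %s" % sorted(unknown))
    return "multifaceted" if len(strategies) >= 2 else "single strategy"

# Example from the text: audit and feedback plus clinician (provider) education.
print(classify_intervention({"audit and feedback", "provider education"}))  # multifaceted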

We used a taxonomy of quality improvement strategies as defined in previous volumes of the series (Table 3).

Table 3. Quality improvement strategies.

For the purposes of this report, each intervention-control comparison within a study was abstracted separately. Thus, if an article reported a three-arm trial in which distinct interventions were delivered to participants in two of the arms and the third arm constituted a control group, we considered that study to contain two trials (the separate comparisons of the two intervention arms against the control group).
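
As a brief illustration of this counting convention, the sketch below (with hypothetical arm labels) pairs each intervention arm of a multi-arm trial with the control arm, so a three-arm trial yields two abstracted trials.

def comparisons(arms, control):
    """Return one (intervention, control) pair per non-control arm."""
    return [(arm, control) for arm in arms if arm != control]

# Hypothetical three-arm trial: two intervention arms plus usual care.
arms = ["education only", "education plus feedback", "usual care"]
print(comparisons(arms, control="usual care"))
# [('education only', 'usual care'), ('education plus feedback', 'usual care')]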

Targeted Preventive Interventions

We defined a “preventive intervention” as a specific infection control practice that has been demonstrated to reduce the incidence of a HAI. For each of our target HAIs, a wide variety of preventive interventions have been evaluated. We chose to focus on the implementation of preventive interventions that are recommended for universal use in target patient populations by professional societies and governmental organizations. We selected these target preventive interventions by reviewing evidence-based HAI prevention guidelines compiled by authorities in the field. Specifically, we reviewed the CDC guidelines for prevention of surgical site infection (1999),13 prevention of intravascular catheter-related infections (2002),24 prevention of healthcare-associated pneumonia (2003),32 and prevention of catheter-associated urinary tract infection (1983).40 In order to obtain the most current information on recommended preventive interventions, we also reviewed the 2005 Surgical Care Improvement Project41 recommendations, the 2005 American Thoracic Society/Infectious Disease Society of America guidelines for the management of patients with healthcare-associated pneumonia,31 the recommendations of the Institute for Healthcare Improvement's “100,000 Lives” campaign, and solicited input from our peer review panel.

We primarily considered for inclusion preventive interventions that received a grade of IA (strongly recommended for implementation and supported by well-designed experimental, clinical, or epidemiological studies) or IB (strongly recommended for implementation and supported by some experimental, clinical, or epidemiological studies and strong theoretical rationale) from the CDC prevention guidelines, or an equivalent rating from another professional society guideline. We emphasized interventions that were broadly applicable to as large a patient population as possible, had a strong evidence base, had a known quality gap (i.e., a suboptimal rate of use in practice had been documented), and whose use was potentially modifiable through patient-, provider- or system-focused QI strategies. Given this focus, we did not address implementation of some strategies whose utility remains controversial (e.g., continuous aspiration of subglottic secretions to prevent VAP), and we did not address implementation of effective strategies whose use is not under the control of an individual provider (e.g., use of antimicrobial-coated central venous or urinary catheters).42, 43 The preventive interventions we focused on were determined through consensus, including discussion with our group of technical experts and peer reviewers; they are summarized in Table 2.

The Institute for Healthcare Improvement (IHI) has recommended “bundles” of preventive interventions targeting SSI, CLABSI, and VAP. These bundles are intended to be applied to all eligible patients. Implementation of the bundles should be measured in an “all-or-none” format, whereby institutions measure and report their adherence to all components of the bundle, and successful implementation is defined by simultaneous adherence to all preventive interventions.44 The IHI also encourages audit and feedback of the “all-or-none” measurements, as well as specific implementation strategies. Our target interventions are similar to those included in the bundles advocated in the IHI's 100,000 Lives campaign, with three exceptions. We did not target implementation of perioperative normothermia, as this intervention is not recommended by the CDC, and its overall effectiveness remains controversial.45 We also did not target universal stress ulcer prophylaxis and deep venous thrombosis (DVT) prophylaxis for ventilated patients. Universal stress ulcer prophylaxis has not been shown to reduce VAP (and may in fact increase it28), and DVT prophylaxis, while appropriate ICU care, is not directly linked to the prevention of VAP.
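
The all-or-none convention can be illustrated with a short Python sketch. The bundle component names below are paraphrased from the IHI central line bundle for illustration only, and the patient records are hypothetical; a patient counts toward adherence only if every component was delivered.

def all_or_none_adherence(records, components):
    """Fraction of patients who received every bundle component."""
    if not records:
        return 0.0
    adherent = sum(all(r.get(c, False) for c in components) for r in records)
    return adherent / len(records)

# Component names paraphrased from the IHI central line bundle (illustrative).
BUNDLE = ["hand hygiene", "maximal barrier precautions",
          "chlorhexidine skin antisepsis", "optimal site selection",
          "daily review of line necessity"]

fully_adherent = {c: True for c in BUNDLE}     # met all five components
missed_review = {c: True for c in BUNDLE}
missed_review["daily review of line necessity"] = False

patients = [fully_adherent, missed_review]
print(all_or_none_adherence(patients, BUNDLE))  # 0.5: only one of two patients met all components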

Inclusion and Exclusion Criteria

Included studies were required to:

  • Report the effect of an intervention on the incidence of healthcare-associated infection (SSI, CLABSI, VAP, or CAUTI), or report the effect of an intervention on adherence to evidence-based preventive interventions.
  • Use either an experimental design with a control group (randomized or quasi-randomized controlled trial, controlled before-after study) or a quasi-experimental design (interrupted time series or simple before-after study). Quasi-experimental studies were required to have a clearly defined intervention time period; interrupted time series designs required reporting of at least three time points of data before and after the intervention.

Thus, we included studies that reported either infection rates or process measures (e.g., rate of adherence to handwashing protocols). Trials that reported related outcomes, such as costs, health services utilization (e.g., length of stay), patient or provider satisfaction with care, or adverse events associated with the intervention, were included only if they included data on infection rates or process measures. We included studies whose QI strategy targeted implementation (or increased use of) any of the target preventive interventions, with one exception. A prior systematic review46 has addressed the effectiveness of QI strategies to promote appropriate hand hygiene; thus, we did not include studies that reported purely on hand hygiene adherence, but did include studies that targeted improving hand hygiene adherence and also reported the incidence of one or more of our target HAIs.

In contrast to previous volumes of this series,47, 48 we expanded our study design inclusion criteria to include simple before-after (SBA) studies, quasi-experimental studies in which there was no contemporaneous control group and fewer than three data points before and after the intervention. We did so after preliminary literature searches revealed a dearth of controlled trials in this field. We planned to separately analyze data from controlled trials, when possible.
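
A schematic version of this design-based eligibility logic is sketched below; the inputs are hypothetical abstraction fields, not the actual items on our abstraction form.

def classify_design(has_control_group, randomized, pre_points, post_points):
    """Assign a study design category following the inclusion criteria above."""
    if has_control_group:
        return ("randomized/quasi-randomized controlled trial"
                if randomized else "controlled before-after study")
    if pre_points >= 3 and post_points >= 3:
        return "interrupted time series"
    return "simple before-after study"

# A study with no control group and only one pre- and two post-intervention
# data points is classified as a simple before-after study.
print(classify_design(False, False, pre_points=1, post_points=2))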

Literature Search and Review Process

To identify studies for possible inclusion, we conducted a systematic search of the MEDLINE® database, using a combination of search terms specific to each target HAI. The full search strategy is shown in Appendix A *. We supplemented this search with a search of the Cochrane Collaboration's Effective Practice and Organisation of Care (EPOC) database, which includes the results of periodic searches of EMBASE®, CINAHL®, and MEDLINE® as well as hand searches of specific journals and article bibliographies.49 The MEDLINE® search was completed through January 2006 and the EPOC search through December 2005. We also screened the bibliographies of included articles to identify additional references.

A trained research assistant screened titles and abstracts [Appendix B *], and a physician investigator reviewed all exclusions. Articles that reported the effect of a quality improvement strategy on HAI rates or adherence to preventive interventions underwent full-text abstraction using a standardized form [Appendix B *]. Two independent reviewers, including at least one physician investigator, performed full-text reviews. The abstraction form recorded information on study design, methodological characteristics, quality improvement strategies, and outcomes; all disagreements were resolved by consensus.

Outcome Measures

Included studies reported two types of outcomes: rate of adherence to recommended preventive interventions, or rate of healthcare-associated infection. For adherence measures, we abstracted the data on adherence to our target preventive interventions (generally reported as the percentage of patients who received the intervention), or the adherence to an explicit clinical guideline (or “bundle”) for prevention of HAIs.

For studies reporting infection rates, we abstracted data using each study's own definition of infection. The specific subtypes of infection varied slightly for each target HAI:

  • For surgical site infection, we abstracted information on all infections. When possible, we planned to analyze infection rates separately for the different classes of SSI, as defined by the CDC14: organ/space infections, deep incisional infections, and superficial incisional infections.
  • For central line-associated bloodstream infections, we were primarily interested in the effects of QI strategies on laboratory-confirmed bloodstream infection (LCBI), and separately abstracted information on catheter colonization or exit-site infection.
  • For ventilator-associated pneumonia, we abstracted information on all VAP.
  • For catheter-associated urinary tract infection, we abstracted information separately for symptomatic UTI and asymptomatic bacteriuria.

Measurement Issues Specific to Studies of Healthcare-Associated Infections

Compared with previous reviews in this Series, evaluating quality improvement studies of efforts to reduce HAIs raises unique measurement issues. The most widely accepted diagnostic criteria for HAIs are the National Nosocomial Infections Surveillance (NNIS) system definitions.14, 50 NNIS definitions for SSI, CLABSI, and CAUTI are summarized in Table 1. As can be seen, HAIs are not entirely objective measurements, unlike outcomes used in previous volumes such as laboratory values or antibiotic consumption, and there are different subtypes of specific HAIs. Studies have demonstrated that slight differences in the interpretation of SSI definitions can lead to widely differing infection rates51, 52 even when the same subtype of SSI (e.g., only deep incisional infections) is being measured. Also, a given study might measure CLABSI using only NNIS-defined laboratory-confirmed bloodstream infections (LCBI), or might also include infections meeting the “clinical sepsis” criteria. While these differences in measurement should not affect the internal validity of a study, assuming measurement standards remain constant throughout the study, they may limit the ability to compare infection rates across studies.

Measurement of ventilator-associated pneumonia poses additional challenges. Currently, there is no easily applicable clinical definition for VAP. Recent research has focused on development of a gold standard for diagnosis using invasive methods, but these methods have not been widely implemented and remain under evaluation. Thus, studies performed at different times may have used slightly different diagnostic criteria, further limiting the comparability of infection rates across studies.

In this review, we will provide the data on incidence of HAIs as measured by the NNIS. Given the above limitations, these data are not intended for direct comparison to the incidence found by individual studies. However, NNIS data may be useful for identifying studies that have an unusually high (or low) baseline incidence of HAI.

Quality Issues Specific to Studies of Healthcare-Associated Infections

Quasi-experimental or simple before-after (SBA) studies are commonly used in quality improvement,53 but they are prone to problems that limit the ability to attribute an observed effect to the intervention. SBA studies are common in the infection control literature.54 Harris55 identified three factors that most often give rise to alternative explanations for the results of quasi-experimental studies of infection control: (1) difficulty in controlling for important confounding variables, (2) results that are explained by the statistical principle of regression to the mean, and (3) maturation effects, i.e., secular trends that can affect either baseline or post-intervention measurements (e.g., seasonal variation in infection rates). We recognized that many of our included studies were likely to use SBA designs, and we therefore defined specific quality criteria to identify studies that would be less prone to these flaws. Our goal was to identify studies in which (within the limitations of the study design) causality could more reliably be attributed to the intervention. We used these criteria to gauge the internal and external validity of study results in order to identify studies of the greatest utility for stakeholders. We did not exclude studies based on the presence or absence of these quality criteria. The quality criteria are outlined below:

Factors Affecting the Internal Validity of the Studies

  • Was the intervention performed independent of other QI efforts or other changes?
    Non-randomized studies are inherently limited in their ability to account for confounding variables. In the complex hospital environment, many quality improvement efforts are generally underway that could affect the care and outcomes of diverse groups of patients. Failure to report on cointerventions or other contemporaneous QI measures could result in falsely attributing a change in infection rates to the effect of the QI intervention.
  • Did the study report data at more than one time point before and after the intervention?
    Infection control interventions are frequently implemented when infection rates are noted to be increasing or to exceed a recognized benchmark.55 Given this context, one would expect subsequent infection rates to decrease simply on the basis of regression to the mean (i.e., even without a specific intervention). If data are presented at a single time point before and after the intervention, this expected decrease due to regression to the mean could be interpreted as a beneficial effect of the intervention. Use of an interrupted time series design can determine whether a true intervention effect exists; such a design requires at minimum three time points of data before and after the intervention, and use of time series regression models or autoregressive integrated moving-average (ARIMA) models for data analysis.49 (A simplified illustration of such an analysis follows this list.) In the absence of such a design, reporting of more than one time point before and after the intervention can at least indicate whether the pre-intervention infection rate was consistent or abruptly increasing, and whether the post-intervention rate was sustained.
  • If the study reported infection rates, were process measurements also reported?
    Measuring adherence to process measures (i.e., adherence to the target preventive interventions) provides important complementary information to measurement of infection rates for several reasons. First, high-quality data link increased adherence to process measures with lower infection rates for SSI41 and CLABSI,23 but adherence in general practice is known to be suboptimal. Second, as mentioned above, elevated infection rates within a given hospital could be due to secular trends, such as outbreaks (e.g., with a genotypically distinct resistant organism) that may not be directly tied to poor infection control practices. If a simple before-after study documents both lower infection rates and improved adherence to process measures after an intervention, this provides more (albeit indirect) support for concluding that the intervention was truly effective. Finally, process measurements do not require adjustment for a patient's underlying risk of infection.5 This allows for greater inter-hospital and inter-study comparability than infection rates alone, and thus reporting of process measures can improve the external validity of a study as well as its internal validity. For these reasons, the CDC's Healthcare Infection Control Practices Advisory Committee suggested measurement of central venous catheter insertion practices and surgical antimicrobial prophylaxis for public reporting, in conjunction with reporting CLABSI and SSI rates.5
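
The following sketch illustrates the kind of time series (segmented) regression referred to in the second criterion above, using entirely hypothetical monthly infection rates and an assumed intervention point; it is not the analysis of any included study, and an ARIMA model could be substituted where autocorrelation is a concern.

# Segmented regression on hypothetical data: 6 monthly rates before and
# 6 after an intervention. Coefficients estimate the baseline trend, the
# immediate level change at the intervention, and the change in trend.
import numpy as np
import statsmodels.api as sm

rates = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2,   # pre-intervention (hypothetical)
                  3.9, 3.6, 3.8, 3.5, 3.4, 3.3])  # post-intervention (hypothetical)
month = np.arange(len(rates))
post = (month >= 6).astype(float)                  # indicator for post-intervention period
months_since = np.where(post == 1, month - 6, 0)   # time elapsed since the intervention

X = sm.add_constant(np.column_stack([month, post, months_since]))
fit = sm.OLS(rates, X).fit()
print(fit.params)  # [intercept, baseline trend, level change, trend change]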

Factors Affecting the External Validity of the Studies

Because process measures are unambiguous and universally applicable, studies reporting them were considered to have greater external validity. For studies reporting infection rates, we posed the following questions to assess external validity.

  • If the study reported infection rates, did the study use CDC/NNIS methodology for measuring infections?
    NNIS definitions for nosocomial infections are the accepted standard in infection control, and their accuracy for case finding has been validated.56
  • (For CLABSI, VAP, and CAUTI) If the study reported infection rates, were reported rates adjusted for device utilization?
    HAI rates should be adjusted for potential differences in risk factors.5 Device-associated infections must be adjusted for the rate of use of the device in question; in the NNIS system, rates are reported as infections per 1,000 device-days.14 (A worked example follows this list.) This adjustment does not take into account many other potential risk factors, but failure to perform this basic level of risk stratification would markedly limit the utility of a study's results.
  • (For SSI) If the study reported infection rates, was surveillance for infections performed after hospital discharge?
    Depending on the surgical procedure in question, a large proportion of infections may occur after discharge from the hospital. In fact, some studies have demonstrated that for common surgeries such as knee arthroplasty and abdominal hysterectomy, the majority of SSI may not manifest until after discharge.57, 58 Case-finding methods that do not perform post-discharge surveillance could thus substantially underestimate the incidence of SSI.
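
As a worked example of the device-day adjustment mentioned above, using hypothetical surveillance counts:

# Hypothetical surveillance totals for one ICU over a quarter.
infections = 4          # central line-associated bloodstream infections observed
device_days = 2500      # total central line-days accumulated
rate = infections / device_days * 1000
print("%.1f infections per 1,000 device-days" % rate)  # 1.6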

We used the same criteria as above to address the external validity of controlled studies. For internal validity of controlled studies, we used the following criteria, as used in previous volumes in the Series:

  • Method of treatment assignment
    • Were study subjects randomized, and if so, was the randomization process described?
    • For non-randomized studies, was the rationale for selection of the comparison group explained, and a baseline observation period included (to assess selection bias)?
  • Blinding
    • Were the outcome assessors blinded to treatment group assignment?
  • Statistical analyses
    • Was a unit-of-analysis error present? If so, were appropriate statistical methods used for correction?

Analysis

In previous volumes in this Series, we have noted marked variation in study populations, intervention characteristics, and methodologic features of the included studies, all of which have contributed to statistical heterogeneity.47, 48 We expected to encounter similar issues in this review, given the measurement challenges outlined above and the variation in interventions. In addition, our preliminary literature searches led us to expect many simple before-after studies. Thus, we did not plan to perform quantitative analysis; instead, we planned to summarize studies qualitatively. Using our study quality criteria as a framework, we planned to identify studies of relatively stronger internal and external validity for more detailed discussion. We opted not to use a scoring system for formally determining study quality, as the utility of such scores is controversial.59 In general, studies meeting both criteria for external validity and two of three criteria for internal validity were considered to have stronger internal and external validity, and studies with serious flaws affecting internal validity (0 of 3 criteria met) were considered to have poor internal validity.
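
A minimal sketch of this summary judgment is shown below. The middle category label is ours for illustration, since the text explicitly names only the stronger and poor categories, and the number of applicable external validity criteria is passed in as a parameter.

def summarize_quality(internal_met, external_met, external_total):
    """Apply the summary rule described above (illustrative labels)."""
    if internal_met == 0:
        return "poor internal validity"
    if external_met == external_total and internal_met >= 2:
        return "stronger internal and external validity"
    return "neither stronger nor poor (not explicitly labeled in the report)"

# A study meeting 2 of 3 internal validity criteria and both applicable
# external validity criteria:
print(summarize_quality(internal_met=2, external_met=2, external_total=2))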

Footnotes

Appendixes cited in this report are provided electronically at http://www.ahrq.gov/clinic/tp/hainfgaptp.htm
