The published medical literature supports the benefits of QI interventions to reduce unnecessary antibiotic prescribing and improve antibiotic selection in primary care practices. Overall, efforts to reduce prescribing of antibiotics for non-bacterial acute illnesses produced a median absolute reduction in prescription rates of 8.9% (IQR -12.4% to -6.7%). Interventions to improve antibiotic selection resulted in a 10.6% median absolute increase in the rate of recommended antibiotic prescribing (IQR 3.4% to 18.2%). Similar effects were observed in studies not eligible for median effects analysis. The quality of included studies was generally fair, with problems similar to those seen in the prior reports on hypertension and diabetes in the Closing the Quality Gap series.
We did not find definitive evidence for the superiority of individual QI strategies or combinations of strategies. Active educational strategies appeared to be more effective than passive education, though this comparison did not achieve statistical significance in either antibiotic treatment or antibiotic selection studies. However, we also found evidence for the increased effectiveness of active educational strategies in studies not eligible for median effects analysis and in within-study comparisons in both groups (treatment and selection). In the selection studies, the combination of clinician education and audit and feedback appeared less effective than clinician education alone; this finding is likely due to confounding by sample size.
Study results were consistent across patient populations and disease processes. Interventions reported from outside the US appeared less effective than US-based interventions, particularly in the selection group, but very few US-based studies were eligible for quantitative analysis. Very few included studies presented data on antimicrobial resistance, clinical outcomes, or costs, and no firm conclusions can be reached regarding these outcomes. The limited data available do indicate that patient satisfaction is not impaired by interventions to reduce antibiotic use.
As noted in the Methods section, we were not able to perform meta-analysis or meta-regression because of the limited number of eligible studies and significant heterogeneity. Our alternative quantitative approach consisted of calculating median effect sizes stratified by the presence or absence of study design and intervention characteristics, then comparing these results using non-parametric tests. This approach allows quantitative comparisons among groups, but it is limited in its ability to control for important confounders and does not directly incorporate measures of study quality into the analysis of summary effects. In addition, even the few statistically significant results we found have not been corrected for multiple comparisons.
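The stratified-median comparison described above can be illustrated with a small sketch. Every effect size below is hypothetical, and the hand-rolled pair-counting statistic is a simplified stand-in for the non-parametric rank-sum tests used in the review:

```python
from statistics import median

def mann_whitney_u(xs, ys):
    """Count of pairs (x, y) with x > y, ties scored 0.5 -- a simplified
    small-sample version of the rank-sum (Mann-Whitney U) statistic."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical absolute changes in prescribing rates (percentage points);
# negative values are reductions. Studies are stratified by the presence
# of an intervention characteristic (here, active vs. passive education).
active_education = [-12.0, -10.5, -9.8, -8.0]   # e.g., interactive workshops
passive_education = [-7.5, -6.0, -5.2, -4.9]    # e.g., mailed guidelines

print("median (active): ", median(active_education))
print("median (passive):", median(passive_education))
print("U statistic:     ", mann_whitney_u(active_education, passive_education))
```

With real review data the group sizes are small and unequal, which is why the review reports medians with IQRs and relies on non-parametric comparisons rather than means and t-tests.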
In our analysis, we attempted to account for as many measurable potential confounders and effect modifiers as possible. However, we were limited by the description of the intervention and study setting provided in each article. Undoubtedly, many potential moderating factors go unmeasured simply due to lack of adequate description in the literature. These can include factors that can increase the likelihood of intervention success (e.g., a high degree of support from top management) and factors that decrease the likelihood (e.g., lack of resources to ensure adequate intervention reach). 51 We attempted to be as specific as possible in measuring the intensity of the interventions. However, intervention intensity may largely reflect the characteristics of individuals providing the intervention, including their own investment in the process and their relationships with the target population, neither of which was directly measured. 155 We attempted to abstract information on all facets of the intervention; despite these efforts, unmeasured confounders may yet have influenced our results. The combination of limited statistical power and the likelihood of unmeasured confounders cautions against using strict interpretations of our quantitative findings.
Although we did find greater effects in studies with smaller sample sizes among the selection studies, we did not find the same relationship in interventions targeting the treatment decision. One explanation for significant differences in effect sizes when stratified by sample size or other methodologic features is publication bias, the preferential publication of positive studies. Publication bias occurs more frequently among small studies, because smaller studies and trials with less rigorous designs are more likely to be published when they report large improvements. 156 In two previous reviews 1, 2 of the QI literature, we identified significant inverse relationships between study sample size and the magnitude of reported effects (i.e., smaller studies reported larger effect sizes). Sample size may also be correlated with other important methodologic features, such as blinding. Other biases may influence individual study results. Studies of prescribing behavior targeting clinicians may be spuriously influenced by the Hawthorne effect 157 (in which the knowledge that one's behavior is under observation itself changes behavior). Another potential concern is that clinicians may engage in “code shifting,” listing a patient's diagnosis as one usually requiring antibiotics (e.g., pneumonia) instead of one not warranting antibiotics (e.g., bronchitis). However, this concern has not been borne out in the literature. 111, 158, 159
Evidence indicates that a clinician's decision to prescribe an antibiotic depends on a variety of factors relating to the health care system and patient beliefs and expectations, and is not solely dependent on the clinician's subjective beliefs or knowledge of evidence-based practice. In this light, it is reasonable to hypothesize that effective strategies to reduce inappropriate antibiotic prescribing should target multiple domains (clinician, patient, and health system). However, while nearly all included strategies targeted clinicians and many targeted patients, very few studies examined the effect of health system factors such as formulary restrictions or drug co-payments. Also, few interventions targeting clinicians specifically addressed the physician-patient interaction; helping clinicians understand and manage patient expectations for antibiotics could theoretically be more efficacious. Despite these omissions, we did find that strategies targeting patients or clinicians can positively influence prescribing rates. Further research on the effectiveness of health system-level interventions, as well as assessment of the interaction between the health system and clinician and patient-level interventions, will add greatly to our ability to design effective QI programs.
Finally, our results are limited to the short observation periods, as most followup periods were less than 1 year. Intervention designs that lead to sustained changes in antibiotic prescribing over multiple years might be different, with repeated public education/awareness and health system modifications playing a more dominant role.
Thus, many of the findings in this review raise more questions than they answer, and should be construed as hypothesis-generating. The interactions between each of the components of an intervention and the population it targets (as outlined in the conceptual framework in the introduction) are complex, and the limited number of studies available for analysis allows only a relatively rudimentary evaluation.
Despite the above caveats, the available studies do illustrate several distinct and effective approaches to improving prescribing behavior in a range of settings. While firm conclusions cannot be drawn, our data are consistent with several concepts, as described below. In the discussion, we will address the questions most relevant to stakeholders considering undertaking quality improvement efforts to reduce the inappropriate use of antibiotics. Organizations should examine their specific clinical settings as well as their clinician and patient populations, and compare studies performed in similar settings to identify specific QI strategies that are most likely to be effective in their own setting. To assist with this process, Appendix A * provides examples of key studies, and Appendix B provides details of each included study organized by setting and measured population.
1. Are quality improvement strategies to improve outpatient antibiotic use effective?
Our review found that the vast majority of published studies reported clinically significant improvements in antibiotic treatment and selection. The magnitude of these effects compares favorably to those achieved by quality improvement efforts in other settings. These findings were consistent across diverse patient populations and clinical settings. More than half the included studies were performed in the last decade; the concomitant decline in antibiotic prescribing in US ambulatory practices suggests that efforts to promote judicious antibiotic use are succeeding. However, inappropriate prescribing rates remain high, and inappropriate selection of antibiotics presents a continuing challenge.
In addition to benefiting patients and the community by reducing the adverse consequences of inappropriate antibiotic use, effective interventions may result in cost savings, although definitive evidence is lacking.
2. What are the critical components of effective intervention strategies to improve outpatient antibiotic use?
Antibiotic treatment studies. Within the antibiotic treatment studies, we were able to compare studies using clinician education alone with those using clinician and patient education. The addition of patient education to clinician education did not result in greater reductions in antibiotic prescribing, a finding that withstood evaluation for potential negative confounding. No distinct differences emerged among strategies, with the exception of a possible trend toward greater effectiveness in studies using active educational strategies. This finding was supported by systematic evaluation of studies that were not incorporated in the median effects analysis. 122, 126, 128
Potential negative confounding by unmeasured variables and limited power may affect our finding of an apparent lack of additional benefit of patient education over clinician education alone. Although studies included in median effects analyses did not show an additional benefit, two large population-based studies 122, 126 not incorporated in median effects analyses did demonstrate reductions in antimicrobial prescribing with a combined clinician and patient educational intervention. Of note, these studies were conducted in the U.S., whereas most included studies were from outside the U.S. It is conceivable that different practice styles, baseline prescribing rates, and patient expectations of antibiotic treatment may make patient education more important in the U.S. than elsewhere.
Antibiotic selection studies. Most selection trials eligible for median effects analysis employed one of two main QI strategies: clinician education alone, or clinician education in combination with audit and feedback. The absolute difference in median effect between these strategies was substantial; interventions adding audit and feedback to clinician education were less effective than interventions employing clinician education alone. The effectiveness of other types of QI strategies is difficult to systematically assess, since there were only three studies for all other categories of QI strategies.
The surprising finding that adding audit and feedback to clinician education results in smaller effects on prescribing may be explained in part by confounding. Interventions employing clinician education alone were substantially more likely to have below-median sample size, and these smaller studies had larger effect sizes overall. While this may reflect publication bias, it is also possible that small sample size is acting as a proxy for studies that use local relationships and leadership to achieve greater clinician “buy-in” than studies spread over many sites. Beyond the publication bias and sample-size effects already noted, there may be other subtle confounders. For example, interventions that employed audit and feedback may have used less intensive methods to implement the clinician education component, spreading their energies among several intervention strategies rather than focusing on one. These possibilities are intriguing, but not directly testable in this analysis. A prudent conclusion may be that we are unable to definitively assess the relative efficacy of clinician education alone vs. clinician education combined with audit and feedback, but that we found no evidence to suggest that the combined approach is superior to clinician education alone.
Unlike the antibiotic treatment studies, there was no prominent association between the use of active vs. passive educational interventions and antibiotic prescribing outcomes. It is difficult to assess whether this lack of association is real, or merely negatively confounded by other observed or unobserved variables and compounded by limited power to detect differences due to small sample sizes. Other findings suggest that active educational interventions may in fact produce better outcomes. First, the differences in median effect size between active and passive techniques were in the expected direction and of similar magnitude to the differences observed in the treatment studies, in which a trend (P=0.11) toward an association was observed, suggesting insufficient power. In addition, studies that compare two or more interventions to one another can be viewed as controlling for the many measured and unmeasured confounders that make inter-study comparisons so challenging. 160 In each of these cases, the active group outperformed the passive group. 135, 152
Our overall results thus supply cautious support for including active clinician education in quality improvement efforts for improving antibiotic use. Combining other strategies with clinician education does not necessarily improve outcomes, but our ability to detect such differences was limited.
3. Which patients and conditions should be targeted in order to exert the maximal impact on antibiotic prescribing?
We did not find any evidence that targeting specific patient populations resulted in significant differences in study effects, in either antibiotic treatment or antibiotic selection studies.
When selecting an intervention target, stakeholders and policymakers may wish to consider the potential population-level intervention effect, rather than simply the target-specific effect. Interventions that have highly significant effects on antibiotic prescribing for single conditions or limited patient populations may not necessarily exert large effects on overall antibiotic prescribing rates. At the population level, targeting all ARIs appears to translate into larger reductions in antibiotic consumption than focusing on a single condition or patient age range, and this might be an important factor in the willingness of purchasers/payors of health care to invest in appropriate antibiotic use interventions.
Interventions targeting antibiotic selection appeared to be equally effective across different disease processes and patient populations. Thus, stakeholders should determine the quality gap in their unique patient population, and target interventions appropriately.
4. What are the limitations of current research in this field and what areas require further study?
Included studies rarely reported important measures of the potential harms of the intervention. These include the potential for increased use of health services (e.g., return visits due to persistent symptoms) and adverse clinical consequences (e.g., increasing rates of serious infections due to undertreatment). Patient satisfaction does not appear to be affected in the limited number of studies that did report this outcome.
More importantly, very few of the included studies (and none of the US-based treatment studies) documented the resources required to complete the intervention and measurements. Only four studies 102, 121, 143, 144 reported the cost savings resulting from changing prescribing behavior. Thus, although an active clinician education intervention may be more effective than a passive intervention, we are unable to make any statement regarding the cost-effectiveness of such an intervention. Given the apparent benefit of other, potentially less resource-intensive interventions, further research should include formal reporting of both the costs of and the cost savings from QI interventions. Further studies should also clearly document intervention intensity and reach, the baseline quality gap, and any other local factors that could have affected the intervention or outcomes measurement.
Changes in antibiotic resistance rates should be monitored not only for declines in resistance rates, but also for changes in the rate of rise in existing or new resistance patterns. This will require longer-term followup than measured in most studies. Based on the best evidence and mathematical modeling to date, it is unlikely that reductions in antibiotic consumption will lead to major reductions in levels of antibiotic resistance among community-acquired bacterial infections. 33 – 35 However, ecological studies do support the notion that the amount of antibiotic consumption in a community directly influences how rapidly new resistance emerges or rises. 7, 37 As the benefits from preventing antimicrobial resistance will be seen largely by the community at large, health care organizations may be reluctant to invest heavily in programs to reduce antimicrobial use without a clear business case. A demonstration that these types of interventions can recover their implementation costs through savings in antibiotic costs would be helpful. Our crude analysis in Figure 7 indicates that many of these interventions have the potential to be cost-neutral or cost-saving, depending on the cost of the intervention, the reduction in antibiotic prescriptions and the cost of antibiotics.
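The cost-neutrality question above reduces to simple break-even arithmetic. A minimal sketch follows; every dollar figure is hypothetical and chosen only to show the shape of the calculation (the review's Figure 7 is based on actual study data):

```python
# Illustrative break-even arithmetic for a QI intervention. All figures
# below are hypothetical assumptions, not values from the review.
intervention_cost = 10_000.00      # program cost (staff time, materials), USD
prescriptions_avoided = 800        # reduction in antibiotic prescriptions
cost_per_prescription = 15.00      # average cost of one avoided antibiotic course

savings = prescriptions_avoided * cost_per_prescription
net_savings = savings - intervention_cost
print(f"antibiotic cost savings: ${savings:,.2f}")
print(f"net savings:             ${net_savings:,.2f}")

# Number of avoided prescriptions needed to recover the program cost:
break_even = intervention_cost / cost_per_prescription
print(f"break-even point: {break_even:,.1f} avoided prescriptions")
```

Whether a given program is cost-neutral thus hinges on exactly the three quantities the text names: the cost of the intervention, the achieved reduction in prescriptions, and local antibiotic prices.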
Study design and quality should be improved. Studies that formally evaluate the cost effectiveness of interventions to improve antibiotic treatment and selection are needed, and studies should evaluate the potential harms of such interventions.
* Appendixes cited in this report are provided electronically at http://www.ahrq.gov/downloads/pub/evidence/pdf/medigap/medigap.pdf
Agency for Healthcare Research and Quality (US), Rockville (MD)
Ranji SR, Steinman MA, Shojania KG, et al. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 4: Antibiotic Prescribing Behavior). Rockville (MD): Agency for Healthcare Research and Quality (US); 2006 Jan. (Technical Reviews, No. 9.4.) 4, Discussion.