Qual Saf Health Care. Dec 2006; 15(6): 433–436.
PMCID: PMC2464905

Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback

Abstract

Background

Many people advocate audit and feedback as a strategy for improving professional practice. The main results of an update of a Cochrane review on the effects of audit and feedback are reported.

Data sources

The Cochrane Effective Practice and Organisation of Care Group's register up to January 2004 was searched. Randomised trials of audit and feedback that reported objectively measured professional practice in a healthcare setting or healthcare outcomes were included.

Review methods

Data were independently extracted and the quality of the studies was assessed by two reviewers. Quantitative, visual and qualitative analyses were undertaken.

Main results

118 trials are included in the review. In the primary analysis, 88 comparisons from 72 studies were included that compared any intervention in which audit and feedback was a component to no intervention. For dichotomous outcomes, the median‐adjusted risk difference of compliance with desired practice was 5% (interquartile range 3–11). For continuous outcomes, the median‐adjusted percentage change relative to control was 16% (interquartile range 5–37). Low baseline compliance with recommended practice and higher intensity of audit and feedback appeared to predict the effectiveness of audit and feedback.

Conclusions

Audit and feedback can be effective in improving professional practice. The effects are generally small to moderate. The absolute effects of audit and feedback are likely to be larger when baseline adherence to recommended practice is low and intensity of audit and feedback is high.

Audit and feedback is widely used as a strategy to improve professional practice. It appears logical that healthcare professionals would be prompted to modify their practice if given feedback that their clinical practice was inconsistent with that of their peers or with accepted guidelines. Yet, feedback has not been found to be consistently effective.1,2,3,4,5,6,7,8 We updated a previous Cochrane review7 to address the following questions:

  • Is audit and feedback effective in improving professional practice and healthcare outcomes?
  • How does the effectiveness of audit and feedback compare with that of other interventions, and can it be made more effective by modifying how it is done?

Methods

We identified relevant articles in the Cochrane Effective Practice and Organisation of Care register and pending file in January 2004. We also examined the reference lists of retrieved articles.

We included randomised controlled trials involving healthcare professionals. Audit and feedback was defined as “any summary of clinical performance of healthcare over a specified time period”. We included only those studies that objectively measured provider performance in a healthcare setting or healthcare outcomes.

Two reviewers (GJ and JMY) independently selected studies for inclusion, extracted data and assessed the quality of each study.7 An overall quality rating (high, moderate or low protection against bias) was assigned on the basis of six criteria: concealment of allocation; blinded or objective assessment of the primary outcome(s); completeness of follow‐up (mainly of professionals); and no important concerns about baseline measures, reliability of primary outcomes or protection against contamination. We rated protection against bias as high if the first three criteria were scored as done and there were no important concerns about the last three, moderate if one or two criteria were scored as not clear or not done, and low if more than two criteria were scored as not clear or not done.
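The three‐tier rating rule can be expressed as a short sketch. This is only an illustration of the classification logic described above; the criterion names and the dictionary encoding are our own assumptions, not part of the review's protocol.

```python
# Criterion names are illustrative labels for the six criteria in the text,
# not the review's own terminology.
CORE = ["allocation_concealment", "blinded_or_objective_outcome",
        "complete_followup"]
OTHER = ["baseline_measures", "reliable_outcomes",
         "contamination_protection"]

def quality_rating(criteria):
    """criteria: dict mapping criterion -> 'done', 'not clear' or 'not done'.

    Returns 'high', 'moderate' or 'low' protection against bias.
    """
    # Count criteria that were not clearly satisfied, across all six.
    problems = sum(criteria[c] != "done" for c in CORE + OTHER)
    if problems == 0:
        return "high"      # all core criteria done, no concerns elsewhere
    if problems <= 2:
        return "moderate"  # one or two criteria not clear or not done
    return "low"           # more than two criteria not clear or not done
```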

We considered audit and feedback combined with interactive, small group meetings separately from audit and feedback combined with written educational materials or didactic meetings, as the latter have been found to have little or no effect on professional practice.1,9,10 We defined “multifaceted” interventions as those including two or more interventions.

We categorised the intensity of feedback as “high”, “moderate” or “low” on the basis of combinations of the following components: recipient, format, source, frequency, duration and content of the feedback.

The complexity of the targeted behaviour and the seriousness of the outcome were categorised subjectively and independently, by GJ and JMY or by GJ and ADO, as “high”, “moderate” or “low”. For dichotomous outcomes, baseline compliance with the targeted behaviour was taken as the mean pre‐intervention level of compliance across the audit and feedback and control groups.

Analysis

The primary analyses included only studies of moderate or high quality that reported baseline data. Three analyses were conducted across all types of interventions (audit and feedback alone, audit and feedback with educational meetings, or audit and feedback as part of a multifaceted intervention, each compared with no intervention): one using the adjusted risk ratio as the measure of effect, one using the adjusted risk difference and a third using the adjusted percentage change relative to the control mean after the intervention. All outcomes were expressed as compliance with the desired practice. Professional and patient outcomes were analysed separately.
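As a rough illustration of these three effect measures, the following sketch assumes a simple change‐score (or ratio) adjustment for baseline differences between arms; the review's exact adjustment method may differ, so treat this as a conceptual aid only.

```python
def adjusted_risk_difference(pre_int, post_int, pre_ctl, post_ctl):
    """Absolute effect: change in compliance in the intervention arm
    minus change in the control arm (assumed adjustment)."""
    return (post_int - pre_int) - (post_ctl - pre_ctl)

def adjusted_risk_ratio(pre_int, post_int, pre_ctl, post_ctl):
    """Relative effect: post-intervention ratio of compliance between
    arms, divided by the baseline ratio (assumed adjustment)."""
    return (post_int / post_ctl) / (pre_int / pre_ctl)

def adjusted_pct_change(pre_int, post_int, pre_ctl, post_ctl):
    """Continuous outcomes: difference in change scores, expressed
    relative to the control mean after the intervention."""
    return ((post_int - pre_int) - (post_ctl - pre_ctl)) / post_ctl

# Hypothetical trial: compliance rises from 40% to 55% with feedback,
# and from 42% to 47% without it.
rd = adjusted_risk_difference(0.40, 0.55, 0.42, 0.47)  # 0.10, i.e. +10%
rr = adjusted_risk_ratio(0.40, 0.55, 0.42, 0.47)
```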

We considered the following potential sources of heterogeneity to explain variation in the results:

  • the type of intervention
  • the intensity of the feedback
  • the complexity of the targeted behaviour
  • the seriousness of the outcome
  • baseline compliance with desired practice
  • study quality

We visually explored heterogeneity by preparing tables, and bubble and box plots to explore the size of the observed effects in relationship to each of these variables. We also plotted regression lines to aid the visual analysis of the bubble plots.

The visual analyses were supplemented with meta‐regression to examine how the size of the effect was related to the six potential explanatory variables, weighted according to the number of healthcare professionals. The main analysis comprised a multiple linear regression using only main effects; baseline compliance was treated as a continuous explanatory variable and the others as categorical. These analyses were conducted using generalised linear modelling in SAS.
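A weighted main‐effects regression of this kind can be sketched as follows. The data, and the single high‐intensity dummy standing in for the full set of categorical variables, are invented for illustration; the review itself used generalised linear modelling in SAS with six explanatory variables.

```python
import numpy as np

# Invented per-comparison data (NOT from the review):
effect   = np.array([0.30, 0.18, 0.12, 0.05, 0.02])  # e.g. adjusted RR - 1
baseline = np.array([0.20, 0.35, 0.50, 0.70, 0.85])  # baseline compliance
high_int = np.array([1.0, 1.0, 0.0, 0.0, 0.0])       # dummy: high-intensity A&F
n_prof   = np.array([120, 80, 200, 60, 150])         # professionals per study

# Design matrix: intercept, continuous baseline, categorical dummy.
X = np.column_stack([np.ones_like(baseline), baseline, high_int])

# Weighted least squares by row-scaling with sqrt of the weights.
w = np.sqrt(n_prof)
coef, *_ = np.linalg.lstsq(X * w[:, None], effect * w, rcond=None)
intercept, slope_baseline, high_intensity_effect = coef
```

With data shaped like the review's findings, `slope_baseline` comes out negative, echoing the result that effects shrink as baseline compliance rises.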

As there were important baseline differences in compliance between the intervention and control groups in many studies, our primary analyses were based on adjusted estimates of effect, where we adjusted for baseline differences in compliance.

Results

There are 118 trials in the review, including 30 new studies in this update. In all, 44 studies were classified as high quality and 14 as low quality, with the rest scored as moderate quality.

A total of 88 comparisons from 72 studies, with more than 13 500 health professionals, compared audit and feedback alone or as a component of an intervention with no intervention. These included 64 comparisons of dichotomous outcomes from 49 trials, and 24 comparisons of continuous outcomes from 23 trials. The adjusted relative risk (RR) of compliance with desired practice varied from 0.71 to 18.3 (median 1.08, interquartile range 0.99–1.30), and the adjusted risk difference of compliance with desired practice varied from −0.16 (a 16% absolute decrease in compliance) to 0.70 (a 70% increase in compliance; median 0.05, interquartile range 0.03–0.11). For continuous outcomes, the adjusted percentage change relative to control varied from −0.10 to 0.68 (median 0.16, interquartile range 0.05–0.37).

Baseline compliance and intensity of audit and feedback were identified as significant in the multiple linear regression of the adjusted RR (main effects model). The estimated coefficient for baseline compliance was −0.005 (p = 0.05), indicating smaller effects as baseline compliance increased (fig 1). The intensity of audit and feedback may also explain some of the variation in the relative effect (p = 0.01; fig 2). For analyses of the adjusted risk difference (RD) and of continuous outcomes, none of the variables that we examined helped to explain the variation in effects across studies.

Figure 1 Plot of adjusted relative risk (RR) versus baseline compliance, excluding one study. A & F, audit and feedback; Edu, education.
Figure 2 Box plot of adjusted relative risk (RR) versus intensity of audit and feedback (A & F), excluding one study. Edu, education.

In the exploratory analysis of adjusted RD, we pooled studies of audit and feedback with or without educational meetings into one category. In the analysis of the interaction between the intensity of audit and feedback and the type of intervention, the type of intervention helped to explain the observed variation in the absolute effect (p = 0.001; fig 3). The estimated mean adjusted RD, not adjusted for other terms in the model, was 2.1 for studies of audit and feedback with or without educational meetings, compared with 9.2 for multifaceted interventions. Intensity of audit and feedback may also help to explain the variation in the absolute effect in this analysis (p = 0.04).

Figure 3 Box plot of adjusted risk difference (RD) versus intervention type, excluding one study. A & F, audit and feedback; Edu, education.

Audit and feedback combined with other interventions compared with audit and feedback alone

A total of 35 comparisons from 21 trials compared audit and feedback combined with other interventions against audit and feedback alone. Adding reminders,11,12,13,14 incentives,15,16 outreach17,18,19 or opinion leaders20,21,22 to audit and feedback produced mixed results, with no consistent increase in effect from any of these additions. Similarly, the addition of self‐study, a practice‐based seminar, patient education materials, assistance in developing an office or recall system, or quality improvement tools did not increase the effectiveness of audit and feedback alone.23,24,25,26,27,28

Audit and feedback compared with other interventions

Eight comparisons from seven trials compared audit and feedback with other interventions. Reminders improved practice more than feedback in two studies,13,14 but patient education was not found to be better than feedback in a trial to improve antibiotic prescribing.25 A practice‐based seminar was not more effective than feedback in improving compliance with guidelines for magnetic resonance imaging of the lumbar spine and knee,24 and feedback and self‐study had the same effect on the percentage of patients with controlled blood pressure in another study.23 In one study that compared feedback with incentives, doctors in the incentive group reduced the mean number of tests ordered by 50%, whereas those in the feedback group did not change as much.29 A local opinion leader group reduced caesarean section rates more than an audit and feedback group in another study.30

Different types of audit and feedback

Seven studies compared different ways of providing feedback. Three studies compared feedback with and without peer comparison without finding any difference between groups.31,32,33 Feedback on medication compared with feedback on performance resulted in no difference in control of blood pressure.34

In one study, mutual visits and feedback by peers were compared with visits and feedback by a non‐physician observer to improve performance on 208 indicators of practice management.35 Both programmes showed improvements in some aspects of care after a year, but the improvement was more noticeable after mutual practice visits than after visits by a non‐physician observer.35 Ward et al19 compared audit and feedback complemented by outreach from either a doctor or a nurse; the groups did not differ significantly after the intervention in the process‐of‐care score for diabetes (adjusted post‐intervention difference = 0.5).

No difference in prophylaxis for venous thromboembolism was found in a study comparing group feedback with group and individual feedback.36

Discussion

Audit and feedback can be a useful intervention, but its effects vary widely, from an apparent negative effect to a very large positive effect, across the trials included in this review.

For dichotomous outcomes, baseline compliance helped to explain the variation in relative effectiveness across studies. However, relative effectiveness did not increase dramatically with decreasing baseline compliance (a change of 0.05 in the adjusted RR for each 10% decrease in baseline compliance). There was also more variation in the adjusted RRs when baseline compliance was lower (fig 1). The intensity of audit and feedback also appeared to explain variation in the adjusted RR for audit and feedback with or without educational meetings. In multifaceted interventions, the contribution of audit and feedback was often small, and the effectiveness of multifaceted interventions may depend more on the other components than on audit and feedback. We found few head‐to‐head comparisons of different intensities of feedback; such studies are needed.

On the basis of earlier reviews,1,10 we considered printed educational materials to have little or no effect on changing professional practice. However, a recent major review of guideline implementation strategies8 found that printed educational materials might have an effect. This presents a problem for the interpretation of our results, as we treated printed materials as no intervention. This may lead to an underestimation of the effect of audit and feedback in studies that compared audit and feedback alone with printed materials, but also to an overestimation in studies where audit and feedback combined with printed materials was compared with no intervention.

We did not find a significant difference in the relative effectiveness of different types of interventions, but when we combined audit and feedback alone and audit and feedback with educational meetings into a single category, the absolute effect (adjusted RD) was significantly larger for multifaceted interventions than for the combined category. However, the difference in the median adjusted RD is small and the ranges of RDs overlap (fig 3). These findings are more consistent with the conclusions of a review of interventions to implement clinical practice guidelines8 than with an earlier overview of systematic reviews of interventions to change professional practice.1

Seven studies provided direct, randomised comparisons of different ways of providing audit and feedback. On the basis of these comparisons and indirect comparisons across studies, it is not possible to determine what, if any, features of audit and feedback have an important effect on its effectiveness. Although there are hypothetical reasons why some forms of audit and feedback might be more effective than others, there is no empirical basis for deciding how to provide audit and feedback. There is a need for well‐designed process evaluations embedded in trials to explore and provide insights into the complex dynamics underlying the variable effectiveness of audit and feedback.

We found only seven studies of audit and feedback compared with other interventions. The results of the two comparisons of audit and feedback with reminders13,14 are consistent with the conclusions of Buntinx et al37 that both can be effective, and do not provide strong support for either being clearly superior, although the reminder group performed better than the audit and feedback group in both of these studies. To the extent that these results can be considered reliable, they support Mugford et al's conclusions that feedback close to the time of decision making is likely to be more effective,3 as reminders by definition occur at the time of decision making.

The evidence presented here does not support mandatory use of audit and feedback as an intervention to change practice. However, audit is commonly used in the context of governance, and it is essential to measure practice to know when efforts to change practice are needed. In these circumstances, health professionals may receive feedback without explicitly having the responsibility to implement changes on the basis of that feedback. The effects of audit and feedback may be larger when health professionals are actively involved and have specific and formal responsibilities for implementing change.

Conclusions

Audit and feedback can be effective in improving professional practice, but the effects are generally small to moderate. The absolute effects are more likely to be larger when baseline compliance with recommended practice is low and, for audit and feedback with or without educational meetings, when feedback is provided more intensively.

Acknowledgements

We thank Dave Davis, Brian Haynes, Nick Freemantle, Emma Harvey and Cynthia Fraser for their contributions to the first version of this review. We also thank Jessie McGowan for conducting searches for this update.

Footnotes

Competing interests: None declared.

Further information: A table of all included studies and results tables is available on request from the corresponding author.

References

1. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behaviour: an overview of systematic reviews of interventions. Med Care 2001;39(Suppl 2):II2–45.
2. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Final report. Aberdeen: Health Services Research Unit, University of Aberdeen, 2002.
3. Mugford M, Banfield P, O'Hanlon M. Effects of feedback of information on clinical practice: a review. BMJ 1991;303:398–402.
4. Axt‐Adams P, van der Wouden JC, van der Does E. Influencing behaviour of physicians ordering laboratory tests: a literature study. Med Care 1993;31:784–794.
5. Buntinx F, Winkens R, Grol R, et al. Influencing diagnostic and preventive performance in ambulatory care by feedback and reminders: a review. Fam Pract 1993;10:219–228.
6. Balas EA, Boren SA, Brown GD, et al. Effect of physician profiling on utilization. Meta‐analysis of randomized clinical trials. J Gen Intern Med 1996;11:584–590.
7. Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 3. Oxford: Update Software, 2003.
8. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004;8(6).
9. Thomson O'Brien MA, Freemantle N, Oxman AD, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library. Oxford: Update Software, 2001.
10. Freemantle N, Harvey EL, Wolf F, et al. Printed educational materials: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 2. Oxford: Update Software, 2003.
11. Baker R, Farooqui A, Tait C, et al. Randomised controlled trial of reminders to enhance the impact of audit in general practice on management of patients who use benzodiazepines. Qual Health Care 1997;6:14–18.
12. Buffington J, Bell KM, LaForce FM. A target‐based model for increasing influenza immunizations in private practice. J Gen Intern Med 1991;6:204–209.
13. Eccles M, Steen N, Grimshaw J, et al. Effect of audit and feedback, and reminder messages on primary‐care radiology referrals: a randomised trial. Lancet 2001;357:1406–1409.
14. Tierney WM, Hui SL, McDonald CJ. Delayed feedback of physician performance versus immediate reminders to perform preventive care. Effects on physician compliance. Med Care 1986;24:659–666.
15. Fairbrother G, Hanson KL, Friedman S, et al. The impact of physician bonuses, enhanced fees, and feedback on childhood immunization coverage rates. Am J Public Health 1999;89:171–175.
16. Hillman AL, Ripley K, Goldfarb N, et al. The use of physician financial incentives and feedback to improve pediatric preventive care in Medicaid managed care. Pediatrics 1999;104:931–935.
17. Borgiel AEM, Williams JI, Davis DA, et al. Evaluating the effectiveness of 2 educational interventions on family practice. CMAJ 1999;161(8):965–970.
18. Siriwardena AN, Rashid A, Johnson MRD, et al. Cluster randomised controlled trial of an educational outreach visit to improve influenza and pneumococcal immunisation rates in primary care. Br J Gen Pract 2002;52:735–740.
19. Ward A, Kamien M, Mansfield F, et al. Educational feedback in management of diabetes in general practice. Educ Gen Pract 1996;7:142–150.
20. Guadagnoli E, Soumerai SB, Gurwitz JH, et al. Improving discussion of surgical treatment options for patients with breast cancer: local medical opinion leaders versus audit and performance feedback. Breast Cancer Res Treat 2000;61:171–175.
21. Sauaia A, Ralston D, Schluter WW, et al. Influencing care in acute myocardial infarction: a randomized trial comparing 2 types of intervention. Am J Med Qual 2000;15:197–206.
22. Soumerai SB, McLaughlin TJ, Gurwitz JH, et al. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: a randomized controlled trial. JAMA 1998;279:1358–1363.
23. Dickinson JC, Warshaw GA, Gehlbach SH, et al. Improving hypertension control: impact of computer feedback and physician education. Med Care 1981;19:843–854.
24. Robling MR, Houston HL, Kinnersley P, et al. General practitioners' use of magnetic resonance imaging: an open randomized controlled trial of different methods of local guidelines dissemination. Clin Radiol 2002;57:402–407.
25. Mainous AG, Hueston WJ, Love MM, et al. An evaluation of statewide strategies to reduce antibiotic overuse. Fam Med 2000;32:22–29.
26. Kinsinger LS, Harris R, Qaqish B, et al. Using an office system intervention to increase breast cancer screening. J Gen Intern Med 1998;13:507–514.
27. Moher M, Yudkin P, Wright L, et al. Cluster randomised controlled trial to compare three methods of promoting secondary prevention of coronary heart disease in primary care. BMJ 2001;322:1338.
28. Hayes R, Bratzler D, Armour B, et al. Comparison of an enhanced versus written feedback model on the management of Medicare inpatients with venous thrombosis. Jt Comm J Qual Improv 2001;27:155–168.
29. Martin AR, Wolf MA, Thibodeau LA, et al. A trial of two strategies to modify the test‐ordering behavior of medical residents. N Engl J Med 1980;303:1330–1336.
30. Lomas J, Enkin M, Anderson GM, et al. Opinion leaders vs audit and feedback to implement practice guidelines. Delivery after previous cesarean section. JAMA 1991;265:2202–2207.
31. Kiefe CI, Allison JJ, Williams OD, et al. Improving quality improvement using achievable benchmarks for physician feedback: a randomized controlled trial. JAMA 2001;285:2871–2879.
32. Søndergaard J, Andersen M, Vach K, et al. Detailed postal feedback about prescribing to asthma patients combined with a guideline statement showed no impact: a randomised controlled trial. Eur J Clin Pharmacol 2002;58:127–132.
33. Wones RG. Failure of low‐cost audits with feedback to reduce laboratory test utilization. Med Care 1987;25:78–82.
34. Gullion DS, Tschann JM, Adamson TE, et al. Management of hypertension in private practice: a randomized controlled trial in continuing medical education. J Contin Educ Health Prof 1988;8:239–255.
35. van der Hombergh P, Grol R, van den Hoogen HJ, et al. Practice visits as a tool in quality improvement: mutual visits and feedback by peers compared with visits and feedback by non‐physician observers. Qual Health Care 1999;8:161–166.
36. Anderson FA Jr, Wheeler HB, Goldberg RJ, et al. Changing clinical practice. Prospective study of the impact of continuing medical education and quality assurance programs on use of prophylaxis for venous thromboembolism. Arch Intern Med 1994;154:669–677.
37. Buntinx F, Knottnerus JA, Crebolder HF, et al. Does feedback improve the quality of cervical smears? A randomized controlled trial. Br J Gen Pract 1993;43:194–198.
