BMJ quality & safety
BMJ Qual Saf. Apr 2011; 20(Suppl_1): i13–i17.
PMCID: PMC3066698

Can evidence-based medicine and clinical quality improvement learn from each other?

Abstract

The considerable gap between what we know from research and what is done in clinical practice is well known. Proposed responses include Evidence-Based Medicine (EBM) and Clinical Quality Improvement. EBM has focused more on ‘doing the right things’, based on external research evidence, whereas Quality Improvement (QI) has focused more on ‘doing things right’, based on local processes. However, the two are complementary and in combination direct us how to ‘do the right things right’. This article examines the differences and similarities between the two approaches and proposes that both would gain by integrating their bedside application, methodological development and training.

Keywords: Quality improvement, evidence-based medicine

Introduction

Those working in healthcare are aware of the considerable gap between what we know from research and what is done in clinical practice.1 For example, enforced bed rest is ineffective in several conditions where it is still used; exercise reduces mortality in heart failure but is rarely used; and brief counselling after trauma is fashionable but ineffective. The response to this well-documented problem includes both Evidence-Based Medicine (EBM)2 3 and Clinical Quality Improvement.4 5

The term EBM was coined by Gordon Guyatt in 1992 for the JAMA ‘Users' Guides’ series to describe the bedside use of research to improve patient care. At the time it was common at McMaster teaching hospitals for patients' notes to include a key research paper relevant to their care, and for this to be discussed at ward rounds (personal observation: PG). Improved information technology allowed Dave Sackett's team to use an ‘evidence cart’ on ward rounds at the John Radcliffe Hospital in Oxford; the team asked and answered two questions for every three patients, and the searches changed a third of clinical decisions.6 Different specialties and individuals have adopted the principles of EBM to different degrees and in vastly different ways (interviews at www.cebm.net/index.aspx?o=4648 illustrate this diversity).

In parallel, the Quality Improvement (QI) movement emerged to address similar problems, but with a focus on recurrent problems within systems of care. The first methods used in the National Demonstration Projects in the 1980s were adapted from those introduced into industry by Deming.4 But the methods soon evolved to fit the different environment of healthcare, with the founding of the Institute for Healthcare Improvement and the development of the Breakthrough Series collaborative, the Model for Improvement, and other modifications and extensions of QI methods.

EBM and QI have broadly similar goals but focus on different parts of the problem. EBM has focused more on ‘doing the right things’: actions informed by the best available evidence from our clinical knowledge base (figure 1). QI has focused more on ‘doing things right’: making sure that intended actions are done thoroughly, efficiently and reliably. However, these are complementary (figure 1) and in combination direct us how to ‘do the right things right’.7

Figure 1
Relationships between Quality Improvement (QI) and Evidence-Based Medicine (EBM). (a) sequence of EBM followed by QI; (b) EBM uses clinical knowledge to inform individual clinical decisions about patient care; (c) QI focuses on improving recurrent problems ...

Before ‘fixing’ an evidence–practice gap, we would be wise to ask ‘is this the right thing to do?’ Is there really a problem? Is there something effective we can do about it? For example, many QI initiatives in diabetes aimed to achieve low HbA1c levels8 for which the evidence was weak, and which subsequent large-scale randomised trials (ACCORD and ADVANCE) have suggested may be unhelpful or even harmful.9

The ‘right things right’ process might be illustrated by a familiar clinical example. In the clinic of one of the authors, the primary care team was considering whether to try to wean elderly patients from long-term benzodiazepines given for insomnia. Both we and the patients seemed reluctant. However, the team reviewed the evidence10 and found that the risk of falls on benzodiazepines was higher than we had expected, and that other adverse effects, such as cognitive loss, added to the problems. So cessation was the ‘right thing’, but how best to achieve it? A review of the controlled trials showed that weaning was possible, and that simple (but very slow) tapering was as effective as more intensive methods such as cognitive therapy counselling. Finally, sending the patient a structured letter (on why and how to withdraw) had also been shown to be effective.11 Without this evidence review by the clinical team, we might have wasted a lot of effort on ineffective means to achieve our goal. But without a clinical improvement process to change our practice, this knowledge might not have provoked action. Of course, QI would also suggest additional questions: how many patients are on which benzodiazepines, and for how long? How many falls or other adverse events have occurred on the benzodiazepines? What will happen to our patients' anxiety once we withdraw the medication?

This article aims to look at what each of these disciplinary areas might learn from the other. The difference in approach of the two disciplines may be better understood by looking at the problem each perceives it is addressing.

The EBM perspective

One cause of the evidence–practice gap is information overload: for example, approximately 8000 references, including around 350 randomised trials, are added to MEDLINE each week. But only a small fraction of this research is sufficiently valid and relevant to change practice, so keeping up to date with new developments and information is problematic. One arm of EBM has therefore been to synthesise and summarise this flood of research, and to make evidence accessible wherever and whenever it is needed. Achieving this requires both ready access (to resources such as MEDLINE and the Cochrane Library) and skills (in finding, appraising and applying evidence) that few healthcare workers currently have. The EBM movement has focused on developing both the skills and the tools to better connect research and clinical practice, with some12 but not universal success.

A particular focus of EBM has been to take a more sceptical approach to innovation, asking for clear evidence before changing practice. Given that few innovations represent a real advance, this cautious approach means less disruption arising from unnecessary changes in practice.13

The QI perspective

The problem addressed by the QI approach might be characterised as the ‘knowing–doing’ gap: we know (individually and collectively) what to do, but fail to do it or fail to do it correctly. The gap has many causes, from a simple lack of knowledge (in some individuals) about what should be done, to uncertainties about the ‘how to’. For example, we may know that certain groups of patients should receive influenza vaccine but fail to give it because the system does not encourage reliable administration of the vaccine.

For example, at one institution the electronic medical record (EMR) had been fully implemented for many years, so the physicians-in-training, the staff physicians and the nurses trusted the EMR. On discharge from the hospital, a prompt in the EMR asked whether a patient needed to receive the influenza vaccine. No one realised that this prompt led to a blind alley: no vaccine was ordered and no vaccine was given. The individuals (and the EMR) knew that influenza vaccine was the ‘right thing to do’, but the EMR and the culture of trusting the EMR inhibited ‘doing the right thing’. Fixing this problem proved to be a challenge, requiring several Plan–Do–Study–Act (PDSA) iterations. The seemingly simple change to the EMR and to influenza vaccine ordering was low on the priority list for the IT support group, so the care team had to create a work-around and learn how to reliably ‘do the right thing’ in their context and setting of care.14 Even an intervention as simple as administering influenza vaccine in a fully integrated EMR environment required a thorough knowledge of the system and then creativity in dealing with its limitations: much more than just knowledge of the ‘right thing’.

The techniques of QI have focused on integrating what we know and moving it to how we should be doing it. Common techniques to achieve this are reminder systems (paper or computer), simplification of processes (reducing unnecessary steps), structuring the system to do the right thing at the right time and place (putting hand-washing equipment and/or instructions in the right place) and testing new interventions to determine what works in this particular setting. This task is often a creative and iterative process of identifying barriers, and working out solutions to overcome these.

Marrying EBM and QI

As illustrated by the above examples, EBM is involved in the early stages of checking the validity and applicability of the available evidence to the clinical problem. This involves the traditional ‘four steps’ of EBM illustrated in figure 2. QI processes15 may be triggered at the fourth step if it seems likely that the clinical problem is a common one for which the current system of practice is not optimal.16 Similarly, in the planning stage of a QI project there may be several questions that trigger an EBM cycle to check for evidence.

Figure 2
Proposed linkage between EBM and one model for QI.

In addition to this merging of EBM and QI processes, there are deeper organisational and epistemological issues in common which we briefly discuss in the next section.

The evidence for EBM and QI

A common criticism levelled at both EBM and QI is that there is only weak evidence that either process makes a difference to patient outcomes. However, neither EBM nor QI is a single ‘thing’ that can be evaluated like a fixed dose of a single chemical entity. Rather, each is a discipline with multiple techniques that may be used to address the research–practice gap. For example, we know from large randomised trials that aspirin lowers mortality after myocardial infarction, but it is underused; we would therefore want to improve appropriate usage. EBMers might focus on the ‘appropriate’ part (subgroups, balance of benefits and harms, etc); QIers might focus on the usage part (barriers, prescribing systems, etc). But success here should largely be judged by process measures (an increase in appropriate aspirin use) rather than by in-hospital mortality, because the link to mortality has already been proven in the trials. Hence, rather than asking ‘what is the evidence that EBM (or QI) is beneficial?’, we should ask which techniques within each discipline best achieve better practice, in what circumstances those techniques work, and how we can disseminate them.

The problem of assessing interventions by process or outcome measurements is illustrated by a recent systematic review17 of QI ‘collaboratives’ that focused on increasing the use of surfactant in premature infants. A collaborative (sometimes called a ‘breakthrough collaborative’) is a consortium of 20–40 healthcare organisations using QI methods on a specific healthcare quality problem. The review found nine eligible studies, including only two randomised trials. The reviewers concluded that the evidence for collaboratives was ‘limited’ because the first trial showed no effect, and the second showed ‘significant improvement in two specific processes of care but no significant improvement in patient outcomes (mortality and pneumothorax)’. However, this conclusion may ask too much of a trial's ability to detect real changes in outcomes. The improvement in processes of care included a substantial increase in surfactant use from 18% to 54% (a 36 percentage-point increase). But the pooled trials of surfactant, which included 1500 randomised infants,18 were barely large enough to demonstrate the mortality reduction, so expecting the collaborative trial to detect a mortality reduction is unrealistic. With the 36 percentage-point improvement in surfactant use seen, reliably detecting the predicted mortality reduction would require a trial of collaboratives about nine times larger (9×1500), that is, at least 13 500 individually randomised infants. This may be infeasible and unnecessary. If both steps (surfactant effectiveness and the QI process improvement: see figure 1) separately have strong evidence, then this represents a ‘complete causal chain’ whose evidence is as strong as that of the weakest link.19 The more important questions here are then ‘What elements of the neonatal collaborative were important?’ and ‘How well will they transfer to other settings?’
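The sample-size argument above rests on a simple inverse-square scaling: if a collaborative delivers only a fraction of the original treatment effect (because surfactant use rises by 36 percentage points rather than from 0% to 100%), the required trial size grows with the inverse square of that fraction. The sketch below is our back-of-envelope illustration of this logic, not the authors' actual power calculation; the raw scaling gives roughly a 7.7-fold inflation, which the article rounds up further to about nine times (at least 13 500 infants).

```python
def inflation_factor(uptake_increase: float) -> float:
    """Approximate factor by which a trial must grow when the intervention
    shifts uptake (and hence the diluted effect size) by `uptake_increase`
    (as a fraction, 0-1), assuming the required sample size scales as
    1 / effect_size**2."""
    return (1.0 / uptake_increase) ** 2

surfactant_trial_n = 1500      # pooled infants in the surfactant trials (ref 18)
uptake_increase = 0.54 - 0.18  # surfactant use rose from 18% to 54%

factor = inflation_factor(uptake_increase)
required_n = surfactant_trial_n * factor
print(f"inflation ~ {factor:.1f}x, required n ~ {required_n:,.0f} infants")
```

Even under this simplified assumption the required trial dwarfs anything feasible, which is the point of the ‘complete causal chain’ argument: when each link already has strong evidence, re-proving the outcome end-to-end is unnecessary.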

By shifting the focus to specific methods we can ask more focused and answerable questions. For example, how effective are reminder systems at reducing ‘slips’? A systematic review of trials of reminders is a valuable resource for QI practitioners wanting to know when and how they do or don't work.20 Similarly, having a ‘clinical informaticist’ to support a team's evidence-based practice is intuitively appealing and ‘do-able’,21 but evaluative trials showing a positive impact of clinical informaticists on clinical decision making are relatively recent.22 Some techniques, such as a QI collaborative, are a complex mix of techniques and therefore intrinsically more difficult to evaluate. Evaluation is still worthwhile, but may shift focus to understanding what methods a particular collaborative used, what seemed to work or not, and in what circumstances.

Finally, the epistemology of both disciplines is evolving. A better understanding of the science and scientific rules of both areas will be important for their continued growth and impact. For example, an early but simplistic interpretation of EBM was that all interventions required randomised trial evidence. While randomised evidence remains important, we now recognise that different types of questions need different types of evidence,23 and that even for treatment questions evidence from non-randomised studies, including some case series, can occasionally be sufficiently compelling.24

A way forward

Early QI methods in healthcare incorporated a link to evidence, but this connection seems to have faded over the years. In the early 1990s, the Hospital Corporation of America (HCA) developed and used FOCUS-PDCA, which explicitly included a detailed analysis of the evidence for proposed changes, the processes involved and data about local performance.25 This methodology developed into the PDSA cycle,15 a common, simple and effective technique, but one where the connection to evidence is less clear. We propose that re-establishing a clear connection between EBM and QI will benefit both disciplines and, ultimately, patients and families. For those engaged in either QI or EBM (or, hopefully, both) there are several implications, both epistemological and practical, of the complementary focus and methods of the two disciplines.

Those working in QI teams should, before taking on a change, routinely check the validity, applicability and value of the proposed change, and should not simply accept external recommendations. (Corollary: at least some members of a QI team must have high-level skills in EBM.)

Those working in EBM should recognise that it is not sufficient simply to appraise the evidence; at the end we should ask ‘what is the next action?’16 and sometimes enter a PDSA cycle. (Corollary: at least some members of an EBM team will need high-level skills in QI.)

Those working on the methods of QI and EBM should stop being so concerned about whether the abstract concepts of EBM or QI ‘work’, and instead focus on the development and evaluation of specific methods of each, shedding light on which elements are most effective in which circumstances. This evaluation should involve two related processes. First, recognise that ‘experiential learning’ is a cyclic process of doing, noticing, questioning, reflecting, exploring concepts and models (evidence), then doing again, only better the next time (PDSA cycles).26 Second, when potentially generalisable new techniques are developed, they should be subjected to more formal evaluation. Recently, several stages of evaluation specific to surgery have been proposed,27 which recognise the development and learning needed before a full evaluation. Related problems have been recognised in applying the Medical Research Council (MRC) complex interventions framework to health improvement.28 However, some creative tension between doing, developing and evaluating will always exist.

Finally, those teaching the next generation of clinicians should value both disciplines, which should be taught, integrated and modelled in clinical training.29 Medical curricula, undergraduate and postgraduate, and healthcare organisations should incorporate both EBM and QI training, taught as an integral whole. Such training requires learning background skills and theory, but also ‘bedside’ teaching and modelling of how EBM and QI are applied in real clinical settings. By integrating the bedside application, the methodological development, the training and the organisational support of these complementary disciplines we can, hopefully, ever more frequently do the ‘right things right’.

Acknowledgments

We would like to thank Paul Batalden for initiating this topic, and Frank Davidoff for helpful comments.

Footnotes

Funding: This material is based on support and use of facilities at the White River Junction VA from the VA National Quality Scholars Program; Dr Glasziou is supported by an NHMRC Fellowship.

Competing interests: None declared.

Contributors: All authors contributed to the development of the ideas, the writing, and read the final manuscript.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

1. Asch SM, Kerr EA, Keesey J, et al. Who is at greatest risk for receiving poor-quality health care? N Engl J Med 2006;354:1147–56
2. Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA 1992;268:2420–5
3. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71–2
4. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med 1989;320:53–6
5. Batalden PB, Davidoff F. What is “quality improvement” and how can it transform healthcare? Qual Saf Health Care 2007;16:2–3
6. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the “evidence cart”. JAMA 1998;280:1336–8
7. Irwig L. An approach to evaluating health outcomes. NSW Public Health Bull 1994;4:135–6
8. Shojania KG, Ranji SR, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA 2006;296:427–40
9. Lehman R, Krumholz HM. Glycated haemoglobin below 7%. No to QOF target of less than 7%, again. BMJ 2010;340:c985
10. Glass J, Lanctôt KL, Herrmann N, et al. Sedative hypnotics in older people with insomnia: meta-analysis of risks and benefits. BMJ 2005;331:1169
11. Heather N, Bowie A, Ashton H, et al. Randomized controlled trial of two brief interventions against long-term benzodiazepine use: outcome of interventions. Addiction Research and Theory 2004:141–54
12. Straus SE, Ball C, Balcombe N, et al. Teaching evidence-based medicine skills can change practice in a community hospital. J Gen Intern Med 2005;20:340–3
13. Dixon-Woods M, Amalberti R, Goodman S, et al. Problems and promises of innovation: why healthcare needs to rethink its love/hate relationship with the new. BMJ Qual Saf 2011;20(Suppl 1):i47–i51
14. Ogrinc G, Batalden PB. Realist evaluation as a framework for the assessment of teaching about the improvement of care. J Nurs Educ 2009;48:661–7
15. Langley GL, Nolan KM, Nolan TW, et al. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco: Jossey-Bass, 1996
16. Glasziou P. Applying evidence: what's the next action? Evid Based Med 2008;13:164–5
17. Schouten LM, Hulscher ME, van Everdingen JJ, et al. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ 2008;336:1491–4
18. Soll R, Ozek E. Prophylactic protein free synthetic surfactant for preventing morbidity and mortality in preterm infants. Cochrane Database Syst Rev 2010;(1):CD001079
19. Howick J, Aronson J, Glasziou P. Evidence-based mechanistic reasoning. J R Soc Med 2010;103:433–41
20. Shojania KG, Jennings A, Mayhew A, et al. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ 2010;182:E216–25
21. Scura G, Davidoff F. Case-related use of the medical literature. Clinical librarian services for improving patient care. JAMA 1981;245:50–2
22. Mulvaney SA, Bickman L, Giuse NB, et al. A randomized effectiveness trial of a clinical informatics consult service: impact on evidence-based decision-making and knowledge implementation. J Am Med Inform Assoc 2008;15:203–11
23. Glasziou P, Vandenbroucke JP, Chalmers I. Assessing the quality of research. BMJ 2004;328:39–41
24. Glasziou P, Chalmers I, Rawlins M, et al. When are randomised trials unnecessary? Picking signal from noise. BMJ 2007;334:349–51
25. Batalden PB. Building Knowledge for Improvement: An Introductory Guide to the Use of FOCUS-PDCA. Nashville, TN: Quality Resource Group, Hospital Corporation of America, 1992
26. Kolb D. Experiential Learning: Experience as the Source of Learning and Development. Upper Saddle River, NJ: Prentice Hall, 1984
27. McCulloch P, Altman DG, Campbell WB, et al, for the Balliol Collaboration. No surgical innovation without evaluation: the IDEAL recommendations. Lancet 2009;374:1105–12
28. Mackenzie M, O'Donnell C, Halliday E, et al. Do health improvement programmes fit with MRC guidance on evaluating complex interventions? BMJ 2010;340:c185
29. Coomarasamy A, Khan KS. What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ 2004;329:1017
