NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Irwig L, Irwig J, Trevena L, et al. Smart Health Choices: Making Sense of Health Advice. London: Hammersmith Press; 2008.
Medicine is indeed in the middle of an intellectual revolution. Methods of reasoning and problem solving that might have worked well in the past are not sufficient to handle today’s problems.
David Eddy1
Recently a friend was describing some treatment that her father had been given. It didn’t sound like he was doing well on the medication. When I suggested that there may be a more appropriate treatment, her response was, ‘But surely a qualified doctor would know what’s best.’
Unfortunately, it is not always safe or wise to make this assumption. It is virtually impossible for health professionals to keep completely up to date with the latest and best research on treatments and tests. Gone are the days when a doctor could stay in touch by reading a few key journals each week.
To give you some idea of the extent of medical information overload, it has been estimated that about 560,000 new medical articles are published and 20,000 new randomised trials are registered every year. That’s equivalent to roughly 1500 new articles and 55 new trials per day.2 There certainly has been an enormous change since the 1970s, when Archie Cochrane and others suggested a more systematic approach to assessing health treatments through randomised trials.
Health professionals, like most of us, struggle with time pressures and face real challenges as they juggle clinical matters and the need to keep up to date with the latest good-quality research. Even if they can access such information efficiently, there are many other challenges in communicating with patients about the pros and cons of various treatment options and in finding out what the patient’s preferences might be. This is not easy to achieve in a 10-minute consultation, on top of taking a thorough history and examining the patient!
This problem is reflected by the many studies that have shown a widespread variation in the rates of various medical procedures that cannot be explained away by intrinsic differences in the populations. Boston and New Haven, for example, have similar populations in terms of their healthcare needs. Most of their practitioners are associated with internationally renowned medical centres. Yet New Haven residents have been reported to be about twice as likely to undergo a bypass operation for heart disease as their counterparts in Boston, who are more likely to be treated by other means. On the other hand, Bostonians are much more likely to have their hips and knees replaced by a surgical prosthesis than are New Havenites, whose physicians tend to prescribe medical treatments for these conditions. Bostonians are more than twice as likely to have arteries in their necks unblocked as a way of preventing strokes whereas clinicians in New Haven prefer to recommend aspirin and other drug treatments. By contrast, hysterectomies for non-cancerous conditions of the uterus are more often performed in New Haven.
Other studies, in the USA, the UK and Australia, have found similar variations in medical procedures, which reflect different approaches to managing the same conditions. This may come about for a number of reasons, including differences in access to equipment or facilities, in training or in financing arrangements. But such variations can also arise because experts specialising in the same problems have different views about the best way to treat them. It is possible that some of those treatments are better than others.
But even if the experts did all agree about the best way to manage a particular condition, this does not necessarily mean that they are all correct – they may all be wrong. There are also dangers in relying on a consensus of experts – which has traditionally been the basis of many medical recommendations. Consensus may merely represent a middle ground between opposing views and may not accurately represent any expert view, or it may represent the views of the most persuasive or influential expert who might also be the most uninformed about the valid evidence. So we can’t rely on advice or opinions just because they come from a so-called expert or ‘a leading authority in the field’.
Why the experts disagree
It can be very confusing when the experts disagree about our healthcare. Such disagreement reflects both the complexity of healthcare and the uncertainty about what will be the outcome of a particular intervention.
Healthcare decisions are complex
When our grandparents and great-grandparents were raising families, practitioners had relatively limited tools and knowledge. Their advice was far simpler than it is these days, and the outcomes of treatment tended to be more obvious and immediate. Premature death was far more common.
Say, for example, your great-grandfather complained to his doctor of a pain in the stomach. It may have been caused by a minor gastric inflammation, in which case he would have recovered spontaneously within a few days, irrespective of treatment. Or it may have been a stomach cancer that inevitably would have killed him. In the first instance, the practitioner would have been praised for the old man’s recovery and the treatment hailed as a cure. In the latter situation, you and your grieving relatives probably would have taken the philosophical view that some things are beyond the ken of doctors.
If your great-grandfather had been seeking help now, he and his practitioner would have far more information to consider and weigh up, including choosing from a wide range of diagnostic tests and treatments. Healthcare has become so much more complex, increasing the choices for treatment, but also increasing the chances that practitioners will disagree about which is the best option.
Health outcomes are uncertain
Another important reason for differences in expert opinion is the uncertainty of health outcomes – the same disease will have a different effect on different people. Nor can it always be accurately predicted how an intervention – whether surgery or a medication – will affect different people. Clearly, then, different practitioners will have different experiences. The best way of dealing with this uncertainty is to turn to studies of groups of people to find out what is the most likely outcome. This probabilistic evidence predicts the chance that a particular outcome will occur for a particular intervention in a given situation.
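The idea of probabilistic evidence can be pictured with a line or two of arithmetic. The figures below are invented purely for illustration, not taken from any real study:

```python
# Hypothetical study figures, purely for illustration: suppose that of
# 200 people who received a treatment, 140 recovered within a month.
treated = 200
recovered = 140

# Probabilistic evidence: the estimated chance that a similar patient
# receiving the same treatment will recover.
chance_of_recovery = recovered / treated

print(f"Estimated chance of recovery: {chance_of_recovery:.0%}")  # 70%
```

No one can say in advance which individual will be among the 70 per cent; the group study only tells us how likely each outcome is.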
The complexity and uncertainty of healthcare help to explain why experts today face a new era: one that demands a high level of skill in evaluating information so that they can make sense of the growing body of research literature and apply the best available evidence to their patients’ care:
For centuries, the practice of medicine has been based on one huge assumption. The assumption is that physicians instinctively know the right thing to do. We call it ‘clinical judgement’ or the ‘art of medicine’. Somehow, the assumption goes, physicians are able to assimilate all they have learned from their medical education, their training, research, their personal experiences, and conversations with their colleagues, as well as all the information about their patients – their signs, symptoms, hopes, and fears – to determine the right thing to do.
David Eddy3
Fortunately, there is now an international push to ensure that health care is based on evidence rather than experts’ opinions or consensus. Clearly, good healthcare requires that practitioners use clinical judgement together with the best evidence. Alone, neither is enough.
Practitioners may be poorly informed
Evidence-based healthcare is becoming more widely used by responsible practitioners worldwide. This has been possible largely because of the growth and availability of electronically accessible information offering practitioners and consumers previously unimaginable possibilities for making the best health decisions. The problem is that not all of this information is reliable. Much of it is based on poor quality studies. However, practitioners are being trained to access and assess the best quality of research.
Not all practitioners practise evidence-based health care
Although usually well intentioned, practitioners may not offer optimal care because many are not integrating the best available evidence into their decisions. This evidence is accessible through electronic databases, from good quality journals and from evidence-based guidelines.
Even when good quality evidence is available, not all practitioners are using it. This is partly because there are often delays between the results of research and the publication of easily accessible recommendations based on the research, and partly because old habits die hard. Many practitioners are resistant to changing practices that have become routine even when they may no longer be appropriate.
Not all practitioners know where to find the evidence
Practitioners might not know where to find the relevant, evidence-based information. Traditionally, many have relied on sources such as their original medical education, their own experience, continuing medical education and pharmaceutical companies – sources that are often inappropriate, biased or out of date. Indeed, medical schools have traditionally concentrated on the basic sciences – such as anatomy, physiology and biochemistry – and have begun teaching skills in the critical appraisal of studies only over the past decade. There is an ever-increasing number of clinical practice guidelines based on the best available research, but sometimes these can be difficult to find and to use with the patient there on the spot.
We can tell that many practitioners lack the skills to judge studies because much poor-quality research is still being cited as the basis for a large number of health practices and products. We should also remember that medicine has a long history of not recognising the harms of some interventions.
The most famous example is thalidomide – a drug that was considered to be safe enough to be widely used to treat morning sickness in the early 1960s before it was found to cause limb deformities in the developing fetus. But there are many more such examples – tonsillectomies were once commonly performed on children in the belief that they prevented repeated bouts of throat infections. A number of surgical deaths forced a reassessment of this procedure and a significant reduction in its use. Early in the twentieth century, babies’ mouths were routinely cleaned in the belief that it reduced germs. Only later was it recognised that this cleaning caused ulcers of the palate. In the 1950s many patients with dangerously high blood pressure underwent traumatic surgery to remove the nerves running down either side of their spines. The operation was of doubtful value, but could cause terrible side effects. More recently, the anti-arthritis pill rofecoxib was taken off the market when serious side effects emerged after the drug’s widespread introduction.
‘Safe’ does not mean ‘risk free’
So when a practitioner tells you that a treatment or test is generally safe, be aware that there may be harms that have not yet been discovered. ‘Safe’ often means only that there are no known harms. And don’t assume that, because something is said to be ‘natural’, it is risk free. ‘Natural’ and ‘harmless’ are not the same. Vitamin supplements taken in excess and some herbal products can have dangerous side effects, ranging from headaches to liver damage. As with any intervention, their harms might not be immediately obvious and may emerge only after years of use or after large, high-quality studies have been done, so their use should be approached with the same care.
Evidence can sometimes be distorted by drug companies
The story of the anti-arthritis drug, rofecoxib, illustrates a number of these points very nicely.
One of the difficulties facing people with arthritis is the fact that some of the commonly used anti-inflammatory drugs can cause nausea, belching and, even more seriously, ulcers in the upper gastric tract. A drug that would have the same pain-relieving effects but fewer side effects would obviously be desirable, and there was much interest in a newer generation of anti-inflammatories called the COX-2 (cyclo-oxygenase 2) inhibitors.
In 2000, the New England Journal of Medicine, one of the medical world’s most prestigious journals, published the results of a randomised controlled trial (the VIGOR study), which included 8076 patients with rheumatoid arthritis. Participants were randomly assigned to receive either the new COX-2 inhibitor,4 rofecoxib, or the more commonly used drug naproxen. That sounds good, you might say, having read the earlier chapters of this book.
In that paper, the authors commented that the naproxen recipients had a lower rate of heart attacks (1 per 1000) over a 9-month follow-up compared with the rofecoxib group (4 per 1000). Note that the other way you could report this is that the rofecoxib group had a higher rate of heart attacks than the naproxen group. This is called a framing effect. In other words, how information is presented or framed can affect how it is interpreted.
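The two framings are just different presentations of the same arithmetic, which can be sketched in a few lines of Python using the rates quoted above:

```python
# Heart attack rates reported in the VIGOR trial (as quoted in the text):
# naproxen group about 1 per 1000, rofecoxib group about 4 per 1000,
# over roughly 9 months of follow-up.
naproxen_rate = 1 / 1000
rofecoxib_rate = 4 / 1000

# Relative framing: rofecoxib carried four times the risk of naproxen.
relative_risk = rofecoxib_rate / naproxen_rate

# Absolute framing: three extra heart attacks per 1000 patients treated.
absolute_difference = rofecoxib_rate - naproxen_rate

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Extra heart attacks per 1000 patients: {absolute_difference * 1000:.0f}")
```

Whether a report leads with ‘four times the risk’ or ‘three extra cases per 1000’, the underlying numbers are identical – only the impression changes.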
It was thought at the time that the difference in cardiovascular event rates arose because many of the heart attack sufferers should have been taking aspirin. The authors also claimed that naproxen itself was protective against heart attacks, a point that had not really been proved and was questioned by outside scientists at the time. The drug company that was funding the trial, the manufacturer of rofecoxib, contacted researchers who were conducting other studies with its drug to suggest that patients could use low-dose aspirin with it for cardioprotection if required. In February 2002, the US Food and Drug Administration (FDA) added a warning to rofecoxib packaging that it might increase the risk of heart attacks and strokes, but the drug remained available to the public.
As this possible link between rofecoxib and an increased risk of heart attacks and strokes became apparent in 2000, another study was getting under way to look at whether this same drug could help to prevent bowel polyps and cancers.5 The researchers in that trial (the APPROVE study), which was funded by the same drug company, found that rofecoxib was associated with an increase in cardiovascular risk. When they took the preliminary results to the drug company in September 2004, the trial was stopped and the drug was withdrawn from the market immediately. Meanwhile, the drug company had earned US$2.5 billion in revenue from rofecoxib sales in the year before the withdrawal. The FDA estimates that the drug caused between 88,000 and 139,000 heart attacks, 30–40 per cent of which were probably fatal, in the 5 years during which it was on the market. More than 100,000 cases and 190 class actions have been lodged against the drug company concerned, and millions of dollars have already been awarded to plaintiffs.
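The scale of the FDA’s estimate can be made concrete by combining the two quoted ranges. The calculation below is ours, for illustration only, not a figure published by the FDA:

```python
# FDA estimate quoted above: 88,000-139,000 heart attacks attributable to
# rofecoxib over 5 years, of which 30-40 per cent were probably fatal.
low_attacks, high_attacks = 88_000, 139_000
low_fatal_share, high_fatal_share = 0.30, 0.40

fatal_low = low_attacks * low_fatal_share     # most optimistic combination
fatal_high = high_attacks * high_fatal_share  # most pessimistic combination

print(f"Implied fatal heart attacks: {fatal_low:,.0f} to {fatal_high:,.0f}")
```

Even on the most optimistic reading of the ranges, tens of thousands of deaths are implied.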
Here is where the story becomes even more interesting and rather murky. On 29 December 2005, the editors of the New England Journal of Medicine published an editorial claiming that data about three extra heart attack cases had been withheld from the 2000 New England Journal of Medicine article.6 The editors had become aware of these extra data when the FDA hearing occurred in February 2001, but had assumed that the heart attacks had occurred after the paper had been published in their journal and that the information was accurate when it went to press. However, after a drug company memorandum was subpoenaed for a court case in late 2005, it emerged that at least two of the authors had known about these extra cases well before the article was published and should have adjusted its conclusions. All three of these heart attack sufferers were people who did not need aspirin, thereby dispelling the original claim that, if rofecoxib were taken with low-dose aspirin in those who needed cardioprotection, all would be well.
Sadly, this is not the end of the story about misrepresentation of results from drug company-funded trials of rofecoxib. Not only do the claims of the VIGOR paper appear to be misleading, but doubts were also raised about the APPROVE trial results, leading to a subsequent correction.7 The APPROVE study had randomised 2586 people with a history of bowel polyps to receive rofecoxib or placebo. The trial stopped after 18 months, when it appeared that the drug caused a doubling in the risk of heart attacks and strokes. As the drug company defended itself against claims of wrongful death, it maintained that there was no increased risk until after 18 months of using the drug. In July 2006, more than 12 months after the APPROVE paper had appeared in March 2005, the New England Journal of Medicine published a correction to it.7 In this latest controversy it emerged that people who dropped out of the APPROVE study early had not been included in the original analysis. By omitting them, the researchers underestimated the number of people who had earlier heart attacks while taking rofecoxib. The corrected analysis shows that the risk may increase as early as 4 months, and certainly long before the previously claimed 18 months.
At the time of revising this book, the rofecoxib story was still unfolding and we can only hope that many salutary lessons can be learned by journal editors, doctors and their patients about the pitfalls of relying upon trials that have been funded by drug companies.
Practitioners may not take account of their patients’ preferences
Over the last few decades, there has developed an appreciation that many interventions have significant harms; not all people weigh benefits and harms in the same way, and in the end it is the patient’s preferences that count, not the physician’s.
David Eddy8
Consumers should expect that everyone who offers health advice or who delivers health care should provide sound information about the benefits of the intervention – whether a tablet, surgery or dietary changes – and the harms. Then you will be in a position to decide, with your practitioner’s help, how these benefits and harms weigh up for you.
Practitioners do not always take their patients’ preferences into account. Admittedly, this is often easier said than done, and in some circumstances it is not practical or appropriate. It can be difficult to establish a patient’s preferences in an emergency and, in some special circumstances, the law requires a doctor to overrule an individual’s preferences if they put that person or others in danger. For example, an elderly person with poor eyesight and mild dementia may prefer to continue driving a car, but for obvious safety reasons this preference needs to be overruled. In most cases, however, a patient’s preferences can and should be included in healthcare decisions. Later in this book we consider some tools that can help people become much more involved in healthcare decisions and weigh up the benefits and harms of their options.
The fact is that not all people weigh benefits and harms in the same way. One person might consider a risk to be minor, although someone else might judge it unacceptable. As you become more informed about the evidence for different treatments and tests you may want your own preferences to be taken into account when weighing up the risks and benefits of a particular intervention. You should feel confident that your practitioner is considering YOUR preferences in decision-making, rather than other factors, such as what they have traditionally done in such a situation. The best way of finding the most appropriate balance between risks and benefits of health care is by choosing a practitioner who uses an evidence-based approach to health care and whom you feel comfortable questioning when making health decisions.
You should feel comfortable enough with your practitioner to ask whether any randomised controlled trials or systematic reviews of the best randomised trials have been done on a particular therapy. Remember, these are studies that are best able to evaluate the risks and benefits of an intervention because people in the study are randomly allocated to the treatment, an alternative treatment or placebo. Practitioners should try to run their practices so that they have sufficient time to attend to patients’ questions, and there is no reason for competent practitioners to feel irritated or intimidated by reasonable questions from patients; on the contrary, they should encourage them. It may mean that they look something up for you if they have time during the consultation or, if they have a full waiting room beckoning their attention, they may get back to you at a later stage.
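Random allocation itself is a simple idea. The following Python sketch, with invented participant numbers and arm labels, shows one way chance alone can assign people to groups of equal size:

```python
import random

def randomise(participants, arms=("treatment", "placebo"), seed=None):
    """Assign each participant to a trial arm purely by chance."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Deal the shuffled participants round-robin into the arms so that
    # the group sizes stay balanced.
    return {arm: shuffled[i::len(arms)] for i, arm in enumerate(arms)}

# Eight hypothetical participants, identified by number.
groups = randomise(range(1, 9), seed=42)
print(groups)
```

Because chance, not the practitioner or the patient, decides who gets which option, known and unknown differences between people tend to even out across the groups – which is why such trials give the fairest comparison.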
Given what you have read so far, about the rapid pace of expanding medical knowledge, you should feel reassured rather than perturbed if your health practitioner looks something up for you. They may even ask you to do some reading yourself and perhaps point you towards some evidence-based resources for patients. As patients quite rightly want to become more involved in their healthcare decisions, the role of the practitioner will change and this is already starting to happen.
If any practitioner is too busy to answer your questions clearly or fails to help you find the evidence that you want, perhaps he or she is not the one to consult. And remember, ‘practitioner’ refers to anyone delivering any form of healthcare, whether a specialist, homeopath, dentist, nurse or counsellor.
Summary
Health and medical experts don’t always get it right.
- They vary in their opinions and approaches to managing the same conditions. Their ability to assess and interpret health information may not have kept pace with the rapidly expanding amount of such information.
- Their views may be based on unreliable sources – pharmaceutical companies, the opinions of other experts, media reports and their own personal experience – rather than the results of good quality studies.
- It is your right that your health care is based on:
  – your practitioner’s clinical skills
  – the best evidence from the research literature
  – your preferences based on the benefits and harms.
References
1. Eddy D. Medicine, money and mathematics. Am Coll Surg Bull. 1992;77:48. [PubMed: 10118543]
2. Glasziou P, Haynes B. The paths from research to improved health outcomes. ACP J Club. 2005;142(2):A-8–10. [PubMed: 15739973]
3. Eddy D. Medicine, money and mathematics. Am Coll Surg Bull. 1992;77:36. [PubMed: 10118543]
4. Bombardier C, Laine L, Reicin A, et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Engl J Med. 2000;343:1520–8. [PubMed: 11087881]
5. Bresalier R, Sandler R, Quan H, et al. Cardiovascular events associated with rofecoxib in a colorectal adenoma chemoprevention trial. N Engl J Med. 2005;352:1092–102. [PubMed: 15713943]
6. Curfman G, Morrissey S, Drazen J. Expression of concern: Bombardier et al. ‘Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Engl J Med 2000;343:1520–8’. N Engl J Med. 2005;353:2813–14. [PubMed: 16339408]
7. Correction to Bresalier et al. ‘Cardiovascular events associated with rofecoxib in a colorectal adenoma chemoprevention trial. N Engl J Med 2005;352:1092–102’. N Engl J Med. 2006;355. [PubMed: 15713943]
8. Eddy D. Assessing Health Practices and Designing Practice Policies. American College of Physicians; 1992.