BMJ. Dec 2, 2000; 321(7273): 1362–1363.
PMCID: PMC1119102

Economic evaluation and clinical trials: size matters

The need for greater power in cost analyses poses an ethical dilemma
Andrew Briggs, Joint MRC/Southeast Region training fellow

Randomised trials of health care interventions are increasingly attempting to tackle issues of cost effectiveness as well as clinical effectiveness. A good example of this appears in the two papers describing the clinical1 and economic evaluation2 of psychological therapies in primary care in this issue of the BMJ (pp 1383,1389). The use of clinical trials as a vehicle for prospective cost effectiveness analysis presents challenges for successful evaluation, and the methods of conducting trial based economic evaluation are still in their infancy.

Several commentators have emphasised that health economists should be involved from the outset in the design of trials that seek to report on cost effectiveness,3 rather than being asked to add the economic variables as an adjunct to the main trial (in a so called “piggyback” arrangement).4 This is because design considerations differ for clinical and economic analyses.

The tendency of resource use variables to follow a skewed distribution5 means that cost variables generally have higher variance than clinical outcomes. Furthermore, the fact that most new interventions involve resource shifting such that increased resource use in one area is offset by resource saving elsewhere makes the net cost of introducing such interventions unclear. Finally, many different categories of resource use may be involved, each with different unit cost weights and each showing varying degrees of difference between trial arms. Typically, therefore, comparisons of treatment cost will require greater sample sizes than the corresponding clinical comparison. If the goal of the study is to show that the resulting cost effectiveness ratio is significantly below some upper limit on the maximum society is willing to pay for health gain, then it is even more likely that the sample size requirements for economic evaluation will be many times those required to show a clinical effect.6
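The scale of the problem can be illustrated with the standard two-sample sample size formula. The numbers below are purely hypothetical (a clinical outcome with a moderate effect size versus a cost comparison where the standard deviation is large relative to the expected net cost difference, as is typical of skewed cost data); the point is only how sharply the required sample size grows with the variance-to-difference ratio.

```python
from math import ceil

def n_per_arm(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation sample size per arm for a two-sample
    comparison of means: two-sided alpha = 0.05, power = 80%."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical clinical outcome: SD of 1.0, minimally important
# difference of 0.4 SD units.
n_clinical = n_per_arm(sigma=1.0, delta=0.4)

# Hypothetical cost comparison: skewed costs with SD of 2000 pounds
# against an expected net cost difference of only 400 pounds.
n_cost = n_per_arm(sigma=2000, delta=400)
```

With these illustrative figures the cost comparison needs roughly four times as many patients per arm as the clinical one, because the required n scales with (sigma/delta) squared; a trial powered on the clinical outcome alone would detect a cost difference of this size only by luck.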

The consequence is that piggyback economic evaluations will typically be underpowered for both the cost analysis and any cost effectiveness analysis, even if the main clinical comparison is appropriately powered. The dangers of underpowering studies are well documented in the clinical literature,7 and this has led to the recommendation to use estimation rather than hypothesis testing when reporting results of clinical evaluations.8 Exactly the same principle should apply in economic evaluation. The evaluative technique of cost minimisation analysis is often used unthinkingly to select the least costly intervention when no statistically significant difference in health outcome is detected. Yet this use of cost minimisation is built on the sandy foundations of hypothesis testing and the mistaken assumption that “absence of evidence is evidence of absence.”9 Similarly, given the likely low power to detect cost differences in a piggyback study, it is inappropriate to interpret a statistically significant difference in clinical effect combined with a nonsignificant cost difference as evidence of cost effectiveness.

For these reasons, and in common with the recommendation for clinical evaluation, the focus of cost effectiveness studies should be on estimating cost effectiveness, even when either cost or effect differences lack conventional statistical significance. Low powered studies will be revealed in the wide confidence limits around results, and readers will not be misled.
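Because cost data are typically skewed,5 a nonparametric bootstrap is one common way to put the recommended confidence limits around an estimated cost difference without relying on normality. The sketch below uses simulated lognormal costs (all figures hypothetical) and a simple percentile bootstrap; it is illustrative only, not a reconstruction of any analysis in the papers discussed.

```python
import random
import statistics

random.seed(1)

# Hypothetical right-skewed per-patient costs for two trial arms,
# drawn from lognormal distributions (purely illustrative).
costs_a = [random.lognormvariate(7.0, 1.0) for _ in range(100)]
costs_b = [random.lognormvariate(7.2, 1.0) for _ in range(100)]

def bootstrap_ci(a, b, reps=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the difference
    in mean costs (arm b minus arm a)."""
    diffs = []
    for _ in range(reps):
        resample_a = [random.choice(a) for _ in a]
        resample_b = [random.choice(b) for _ in b]
        diffs.append(statistics.fmean(resample_b) - statistics.fmean(resample_a))
    diffs.sort()
    low = diffs[int(alpha / 2 * reps)]
    high = diffs[int((1 - alpha / 2) * reps) - 1]
    return low, high

low, high = bootstrap_ci(costs_a, costs_b)
```

An underpowered study declares itself here: the interval (low, high) comes out wide, and a reader sees at once how large a cost difference the data cannot rule out, which a bare "p > 0.05" would conceal.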

In this issue Bower et al report that their study was designed as a cost effectiveness analysis.2 However, they later report that there was no power calculation for costs, the sample size for the study being determined by the main clinical outcome. Not surprisingly, therefore, it found no significant differences in cost between the treatments at either 4 or 12 months' follow up. As the authors emphasise, we must be careful in interpreting these results.

Health service decision makers will probably be most interested in the fact that, though there is no evidence of any long term treatment effect, the cost difference is not inconsistent with an additional cost to society of £458 for cognitive behaviour therapy or £952 for non-directive counselling, at conventional levels of significance. The authors chose not to present cost effectiveness results directly, although it is clear that any such estimate based on the data from this trial would have high variance.

Ideally, of course, studies that attempt to address economic questions should be powered on the economic variables. But then they would almost certainly be overpowered with respect to the clinical outcomes. Would this be a problem? Some might argue that the ethical basis of randomisation would be questionable and that it would be inappropriate to continue a trial beyond the point at which clinical superiority has been determined beyond reasonable doubt. Given current ethical committee guidance and the consent forms that patients sign on entering a clinical trial this is no doubt true. However, inquiry into the cost effectiveness of treatment interventions is a legitimate enterprise. Failure to recruit enough patients to give unequivocal treatment and policy recommendations could be seen as unethical, leading to delay in providing cost effective treatments, delay in curtailing cost ineffective treatments, and a consequent underachievement of potential health gain from available resources within the NHS.

Notes

General practice pp 1383, 1389

References

1. Ward E, King M, Lloyd M, Bower P, Sibbald B, Farrelly S, et al. Randomised controlled trial of non-directive counselling, cognitive-behaviour therapy, and usual general practitioner care for patients with depression. I: Clinical effectiveness. BMJ. 2000;321:1383–1388. [PMC free article] [PubMed]
2. Bower P, Byford S, Sibbald B, Ward E, King M, Lloyd M, et al. Randomised controlled trial of non-directive counselling, cognitive-behaviour therapy, and usual general practitioner care for patients with depression. II: Cost effectiveness. BMJ. 2000;321:1389–1392. [PMC free article] [PubMed]
3. Drummond M. Economic analysis alongside controlled trials. London: Department of Health; 1994.
4. O'Brien B. Economic evaluation of pharmaceuticals: Frankenstein's monster or vampire of trials? Medical Care. 1996;34(suppl):DS99–108. [PubMed]
5. Briggs A, Gray A. The distribution of health care costs and their statistical analysis for economic evaluation. J Health Serv Res Policy. 1998;3:233–245. [PubMed]
6. Briggs AH, Gray AM. Sample size and power calculations for stochastic cost-effectiveness analysis. Med Decis Making. 1998;18(suppl):S81–S92. [PubMed]
7. Freiman JA, Chalmers TC, Smith H, Jr, Kuebler RR. The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. Survey of 71 “negative” trials. N Engl J Med. 1978;299:690–694. [PubMed]
8. Gardner MJ, Altman DG. Estimation rather than hypothesis testing: confidence intervals rather than P values. In: Gardner MJ, Altman DG, editors. Statistics with confidence. London: BMJ Books; 1989.
9. Altman DG, Bland JM. Statistics notes: Absence of evidence is not evidence of absence. BMJ. 1995;311:485. [PMC free article] [PubMed]
