Syst Rev. 2015 Nov 17;4:163. doi: 10.1186/s13643-015-0144-x.

Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses.

Author information

1. Department of Preventive and Social Medicine, Dunedin School of Medicine, University of Otago, PO Box 913, Dunedin, 9054, New Zealand. peter.herbison@otago.ac.nz.
2. Department of Medicine, Dunedin School of Medicine, University of Otago, PO Box 913, Dunedin, 9054, New Zealand. clare.robertson@otago.ac.nz.
3. School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, 99 Commercial Road, Melbourne, Victoria, 3004, Australia. joanne.mckenzie@monash.edu.

Abstract

BACKGROUND:

Many randomised trials have count outcomes, such as the number of falls or the number of asthma exacerbations. These outcomes have been analysed using a variety of methods: treated as counts, treated as continuous outcomes, or dichotomised. This study examines whether different methods of analysis yield estimates of intervention effect that are similar enough to be reasonably pooled in a meta-analysis.

METHODS:

Data were simulated for 10,000 randomised trials under three different amounts of overdispersion, four different event rates, and two effect sizes. Each simulated trial was analysed using nine different methods of analysis: rate ratio, Poisson regression, negative binomial regression, risk ratio from dichotomised data, survival to the first event, two methods of adjusting for multiple survival times, ratio of means, and ratio of medians. Individual patient data were gathered from eight fall prevention trials, and similar analyses were undertaken.
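The paper does not publish its simulation code, so the following Python sketch is only an illustration of the core comparison, with hypothetical parameter values (5,000 subjects per arm, mean event rates of 2.0 and 1.5, dispersion 0.5). It generates overdispersed counts for two arms via a gamma-Poisson mixture (equivalent to negative binomial counts) and contrasts two of the nine methods: the rate ratio and a risk ratio computed after dichotomising the counts into "any event vs none".

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_arm(n, rate, dispersion, rng):
    """Overdispersed counts via a gamma-Poisson mixture: each subject
    gets an individual event rate drawn from a gamma distribution,
    giving negative binomial counts with mean `rate` and variance
    rate + dispersion * rate**2."""
    lam = rng.gamma(shape=1.0 / dispersion, scale=rate * dispersion, size=n)
    return rng.poisson(lam)

# Hypothetical trial with common events and a true rate ratio of 0.75
n = 5000
control = simulate_arm(n, rate=2.0, dispersion=0.5, rng=rng)
treatment = simulate_arm(n, rate=1.5, dispersion=0.5, rng=rng)

# Method 1: rate ratio (ratio of mean event counts per arm)
rate_ratio = treatment.mean() / control.mean()

# Method 2: risk ratio after dichotomising (any event vs none)
risk_ratio = (treatment > 0).mean() / (control > 0).mean()

print(f"rate ratio: {rate_ratio:.3f}")
print(f"risk ratio: {risk_ratio:.3f}")
```

When events are common, most subjects in both arms experience at least one event, so the dichotomised risk ratio is attenuated towards 1 relative to the rate ratio; this is the kind of divergence between methods that the simulation study examines.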

RESULTS:

All methods produced similar effect sizes when there was no difference between treatments. Results were also similar when there was a moderate difference, with two exceptions as the event became more common: (1) risk ratios computed from dichotomised count outcomes and hazard ratios from survival analysis of the time to the first event yielded intervention effects that differed from rate ratios estimated from the negative binomial model (the reference model), and (2) the precision of the estimates differed depending on the method used, which may affect both the pooled intervention effect and the observed heterogeneity. The results of the case study of individual data from eight trials evaluating exercise programmes to prevent falls in older people supported the simulation study findings.

CONCLUSIONS:

When event rates increase, information about the differences between treatments is lost if the outcome is dichotomised or if only time to the first event is analysed; otherwise, similar results are obtained. Further research is needed to examine the effect of the differing variances produced by the different methods on the confidence intervals of pooled estimates.

PMID: 26577545
PMCID: PMC4650317
DOI: 10.1186/s13643-015-0144-x
[Indexed for MEDLINE]