J Gen Intern Med. 2018 Aug;33(8):1260-1267. doi: 10.1007/s11606-018-4425-7. Epub 2018 Apr 16.

Empirical Comparison of Publication Bias Tests in Meta-Analysis.

Author information

Department of Statistics, Florida State University, Tallahassee, FL, USA.
Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN, USA.
Evidence-Based Practice Center, Mayo Clinic, Rochester, MN, USA.
Department of Biostatistics, Harvard School of Public Health, Boston, MA, USA.
School of Social Development and Public Policy, Beijing Normal University, Beijing, China.
Department of Epidemiology, UNC Gillings School of Global Public Health, Chapel Hill, NC, USA.
Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, PA, USA.



Decision makers rely on meta-analytic estimates to trade off benefits and harms. Publication bias impairs the validity and generalizability of such estimates. The performance of various statistical tests for publication bias has largely been compared in simulation studies and has not been systematically evaluated in empirical data.


This study compares seven commonly used publication bias tests (i.e., Begg's rank test, trim-and-fill, Egger's, Tang's, Macaskill's, Deeks', and Peters' regression tests) based on 28,655 meta-analyses available in the Cochrane Library.
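Of the tests compared, Egger's regression test is perhaps the most widely used. As a rough illustration (not the authors' implementation), it regresses each study's standardized effect on its precision and tests whether the intercept differs from zero; a nonzero intercept suggests small-study effects consistent with publication bias. The sketch below assumes effect estimates and standard errors are already on an appropriate scale (e.g., log odds ratios):

```python
# Hedged sketch of Egger's regression test: regress standardized effect
# (effect / SE) on precision (1 / SE) and t-test the intercept against zero.
# This is an illustrative implementation, not the paper's code.
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Return (intercept, two-sided p-value) for Egger's regression test."""
    se = np.asarray(std_errors, dtype=float)
    y = np.asarray(effects, dtype=float) / se   # standardized effects
    x = 1.0 / se                                # precision
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    slope = np.sum((x - xbar) * (y - ybar)) / sxx
    intercept = ybar - slope * xbar
    resid = y - (intercept + slope * x)
    s2 = np.sum(resid ** 2) / (n - 2)           # residual variance
    se_int = np.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    t_stat = intercept / se_int
    p = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return intercept, p
```

In practice, the test has low power for small meta-analyses, which is consistent with the size-dependence reported in the results below.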


Egger's regression test detected publication bias more frequently than other tests (15.7% in meta-analyses of binary outcomes and 13.5% in meta-analyses of non-binary outcomes). The proportion of statistically significant publication bias tests was greater for larger meta-analyses, especially for Begg's rank test and the trim-and-fill method. The agreement among Tang's, Macaskill's, Deeks', and Peters' regression tests for binary outcomes was moderately strong (most κ's were around 0.6). Tang's and Deeks' tests had fairly similar performance (κ > 0.9). The agreement among Begg's rank test, the trim-and-fill method, and Egger's regression test was weak or moderate (κ < 0.5).
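The agreement statistics above are Cohen's κ values computed between pairs of tests, treating each test's verdict on a meta-analysis as a binary rating (bias detected or not). A minimal sketch of that calculation, assuming two binary vectors of test verdicts (hypothetical inputs, not the study's data):

```python
# Illustrative Cohen's kappa for two binary "raters" (publication bias tests):
# 1 = bias detected, 0 = not detected. Kappa corrects observed agreement
# for the agreement expected by chance under independence.
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa between two binary rating vectors."""
    a = np.asarray(a, dtype=int)
    b = np.asarray(b, dtype=int)
    po = np.mean(a == b)                         # observed agreement
    pa, pb = a.mean(), b.mean()                  # marginal "bias" rates
    pe = pa * pb + (1 - pa) * (1 - pb)           # chance agreement
    return (po - pe) / (1 - pe)
```

On this scale, κ around 0.6 is conventionally read as moderate-to-substantial agreement and κ below 0.5 as weak-to-moderate, matching the interpretation in the results.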


Given the relatively low agreement between many publication bias tests, meta-analysts should not rely on a single test and may apply multiple tests with various assumptions. Non-statistical approaches to evaluating publication bias (e.g., searching clinical trials registries, records of drug approving agencies, and scientific conference proceedings) remain essential.


Keywords: Cochrane Library; funnel plot; meta-analysis; publication bias; statistical test
