
# Correction for bias in meta-analysis of little-replicated studies.

### Author information

1. Biological Sciences, Institute for Life Sciences, University of Southampton, Southampton, UK.
2. Geography and Environment, University of Southampton, Southampton, UK.

### Abstract

Meta-analyses conventionally weight study estimates on the inverse of their error variance, in order to maximize precision. Unbiased variability in the estimates of these study-level error variances increases with the inverse of study-level replication. Here, we demonstrate how this variability accumulates asymmetrically across studies in precision-weighted meta-analysis, causing undervaluation of the meta-level effect size or its error variance (the meta-effect and meta-variance).

Small samples, typical of the ecological literature, induce large sampling errors in variance estimation, which substantially bias precision-weighted meta-analysis. Simulations revealed that biases differed little between random- and fixed-effects tests. Meta-estimation of a one-sample mean from 20 studies, with sample sizes of 3-20 observations, undervalued the meta-variance by *c*. 20%. Meta-analysis of two-sample designs from 20 studies, with sample sizes of 3-10 observations, undervalued the meta-variance by 15%-20% for the log response ratio (ln*R*); it undervalued the meta-effect by *c*. 10% for the standardized mean difference (SMD).

For all estimators, biases were eliminated or reduced by a simple adjustment to the weighting on study precision. The study-specific component of error variance that is prone to sampling error, and not parametrically attributable to study-specific replication, was replaced by its cross-study mean, on the assumptions of random sampling from the same population variance for all studies and of sufficient studies for averaging. Weighting each study by the inverse of this mean-adjusted error variance universally improved accuracy in estimation of both the meta-effect and its significance, regardless of the number of studies. For comparison, weighting only on sample size gave the same improvement in accuracy, but could not sensibly estimate significance.

For the one-sample mean and two-sample ln*R*, adjusted weighting also improved estimation of between-study variance by the DerSimonian-Laird and REML methods. For random-effects meta-analysis of SMD from little-replicated studies, the most accurate meta-estimates were obtained from adjusted weights following conventionally weighted estimation of between-study variance.

We recommend adoption of weighting by inverse adjusted variance for meta-analyses of well- and little-replicated studies, because it improves the accuracy and significance of meta-estimates, and it can extend the scope of the meta-analysis to include some studies without variance estimates.
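The contrast between conventional and mean-adjusted precision weighting can be sketched for the simplest case, a one-sample mean with study-level error variance *s*²/*n*. The sketch below uses illustrative numbers (not data from the paper) and assumes the adjustment amounts to replacing each study's sample variance with the cross-study mean variance, as the abstract describes; a full implementation would follow the formulas in the paper itself.

```python
import numpy as np

# Hypothetical per-study summaries (illustrative values only):
# effect estimates, sample variances s2, and small sample sizes n.
effects = np.array([0.42, 0.10, 0.33, 0.25, 0.55])
s2 = np.array([1.8, 0.6, 2.4, 1.1, 0.9])  # study sample variances
n = np.array([4, 6, 3, 8, 5])             # study replication

# Conventional inverse-variance weights: w_i = 1 / (s2_i / n_i).
# Noisy s2_i from small samples propagates into the weights.
w_conv = n / s2

# Mean-adjusted weights: replace each study's s2_i by the cross-study
# mean, so only replication n_i differentiates the weights:
# w_i = n_i / mean(s2).
w_adj = n / s2.mean()

# Precision-weighted meta-effect under each scheme.
meta_conv = np.sum(w_conv * effects) / np.sum(w_conv)
meta_adj = np.sum(w_adj * effects) / np.sum(w_adj)
```

Note that the adjusted meta-effect coincides with a sample-size-weighted average, consistent with the observation above that weighting on sample size gives the same gain in accuracy; the adjusted error variances, however, additionally support significance estimation.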

#### KEYWORDS:

Hedges’ d; Hedges’ g; fixed effect; inverse‐variance weighting; ln R; random effect; small sample

- PMID: 29938012
- PMCID: PMC5993351
- DOI: 10.1111/2041-210X.12927