From: Conclusions

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

In our empirical comparison, bivariate meta-analyses produced point estimates largely similar to those of separate univariate analyses (as has also been observed elsewhere^{37}). Because bivariate methods account for the correlation between sensitivity and specificity across studies, the confidence region around the summary point differs from that of the univariate analyses, as do the predictive distributions for future studies. Our findings suggest that this correlation is generally poorly estimated; nevertheless, bivariate models have the stronger theoretical motivation in most common diagnostic test meta-analysis scenarios.
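
The quantity at issue can be illustrated with a minimal sketch. The study counts below are hypothetical, and the naive Pearson correlation of the observed logit-transformed sensitivities and specificities shown here conflates within-study binomial noise with between-study variation — the bivariate model separates the two, which is part of why the between-study correlation is hard to estimate from a handful of studies:

```python
import math

# Hypothetical per-study 2x2 counts (TP, FN, TN, FP); illustrative only.
studies = [(45, 5, 80, 20), (30, 10, 60, 15), (50, 2, 90, 30),
           (22, 8, 40, 12), (38, 6, 70, 25)]

def logit(p):
    return math.log(p / (1 - p))

# Observed logit-sensitivity and logit-specificity for each study.
ls = [logit(tp / (tp + fn)) for tp, fn, tn, fp in studies]
lp = [logit(tn / (tn + fp)) for tp, fn, tn, fp in studies]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson(ls, lp)
print(f"observed correlation of logit(Se), logit(Sp): {r:+.2f}")
```

With only five hypothetical studies, estimates like `r` are highly unstable, consistent with the observation above that the correlation is generally poorly estimated.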

Because it relies on large-sample theory, the normal approximation is inadequate when the sample sizes of the included studies are small or when the sensitivity or specificity of tests is extreme (very high or very low). We found that continuity corrections (required for the normal approximation when cell counts are zero) introduced bias in meta-analytic estimates. This is consistent with simulation studies suggesting that meta-analysis using the exact binomial likelihood outperforms methods relying on the normal approximation.^{35} The normal approximation could be reserved for cases where the model using the exact likelihood cannot be fit (e.g., failure to converge), or where no statistical software able to fit generalized linear mixed models is available.
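
A small sketch, using a hypothetical single study, shows where the continuity-correction bias comes from:

```python
import math

# Hypothetical study: 40 diseased patients, all testing positive (TP=40, FN=0).
tp, fn = 40, 0

# Normal-approximation methods work on the logit scale; logit(1.0) is
# undefined, so a continuity correction (adding 0.5 to each cell) is required.
cc_sens = (tp + 0.5) / (tp + fn + 1.0)        # 40.5 / 41 ~ 0.988
logit_cc = math.log(cc_sens / (1 - cc_sens))  # now finite

# The exact binomial likelihood needs no correction: the maximum likelihood
# estimate of sensitivity is simply the observed proportion.
mle_sens = tp / (tp + fn)                     # 1.0

print(f"continuity-corrected estimate: {cc_sens:.3f}")
print(f"exact-likelihood estimate:     {mle_sens:.3f}")
# The correction pulls the estimate away from 1.0; this bias grows as
# study sizes shrink and accuracy becomes more extreme.
```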

Bayesian methods are theoretically appealing and allow more flexible modeling, particularly when complex data structures arise. Further, they allow the use of external information in the form of informative prior distributions. In our empirical assessment, point estimates of sensitivity and specificity produced by the two methods were very similar; however, Bayesian methods often produced credible intervals that were wider than the confidence intervals of maximum likelihood methods. This reflects the Bayesian models' more complete accounting of the uncertainty in the estimation of variance parameters.
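
The use of external information through priors can be shown with a toy conjugate example — a single hypothetical study and a beta-binomial update, not the hierarchical models compared in this report:

```python
# Toy conjugate illustration of an informative prior (hypothetical data):
# 18 of 20 diseased patients test positive.
tp, fn = 18, 2

def beta_posterior_mean(prior_a, prior_b):
    # Beta(a, b) prior + binomial data -> Beta(a + tp, b + fn) posterior.
    return (prior_a + tp) / (prior_a + prior_b + tp + fn)

# Vague Beta(1, 1) prior: posterior mean stays close to the data.
vague = beta_posterior_mean(1, 1)

# Informative Beta(30, 10) prior (external belief that sensitivity is ~0.75):
# the posterior mean is pulled toward the prior.
informative = beta_posterior_mean(30, 10)

print(f"observed proportion:              {tp / (tp + fn):.3f}")  # 0.900
print(f"posterior mean, vague prior:      {vague:.3f}")           # 0.864
print(f"posterior mean, informative prior:{informative:.3f}")     # 0.800
```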

Meta-analyses of sensitivity and specificity aim to provide useful summaries of the findings of individual studies. One option is a single “summary point” of combined sensitivity and specificity; this is most appropriate when the results of the studies are relatively similar and the studied tests do not have different explicit thresholds for positive results. Alternatively, data can be synthesized using a “summary line” that describes how sensitivity changes with specificity; this is often preferable when studies have different explicit thresholds and their results range widely. Choosing the most appropriate summary is subjective and case dependent, and both summaries can reasonably be employed, as they provide complementary information.

We found that alternative parameterizations of the SROC curve derived from the bivariate model can occasionally result in curves of different shape. Specifically, some parameterizations can result in negative estimated slopes when the correlation between sensitivity and specificity is positive (i.e., when the correlation between sensitivity and false positive rate is negative).^{28,29} In such cases the relationship between sensitivity and specificity cannot be explained by varying thresholds for positive test results across studies. Some authors argue that such SROC curves are not a helpful summary of the data.^{37}
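
The sign flip can be sketched directly, using hypothetical bivariate between-study parameters and the regression of logit sensitivity on logit false positive rate — one common SROC parameterization among several:

```python
# Hypothetical between-study parameters on the logit scale (illustrative):
rho = 0.4               # positive correlation of logit(Se) and logit(Sp)
sd_se, sd_sp = 0.8, 1.2 # between-study standard deviations

# logit(FPR) = -logit(Sp), so the covariance with logit(Se) flips sign:
cov_se_fpr = -rho * sd_se * sd_sp

# Slope of the SROC line from regressing logit(Se) on logit(FPR):
slope = cov_se_fpr / sd_sp ** 2

print(f"SROC slope: {slope:+.3f}")  # negative whenever rho > 0
```

A negative slope of this kind means the fitted curve does not resemble a threshold effect (where sensitivity rises as specificity falls), which is why such curves are argued to be unhelpful summaries.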

The standard bivariate/HSROC models will not be appropriate for all diagnostic settings, for example when test results are reported for multiple thresholds within each study or when the classification problem is not binary. In such cases more complex modeling approaches are necessary to obtain correct estimates of test accuracy.^{40-42}

An Empirical Assessment of Bivariate Methods for Meta-Analysis of Test Accuracy [Internet].

Dahabreh IJ, Trikalinos TA, Lau J, et al.

Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Nov.