J Clin Epidemiol. 2006 Jul;59(7):697-703. Epub 2006 Mar 15.

Single data extraction generated more errors than double data extraction in systematic reviews.

Author information

Department of Pediatrics, University of Alberta/Capital Health Evidence-Based Practice Centre, Edmonton, Alberta T6G 2J3, Canada. nina.buscemi@ualberta.ca

Abstract

BACKGROUND AND OBJECTIVE:

To conduct a pilot study comparing single versus double data extraction in systematic reviews with respect to the frequency of errors, the resulting estimates of treatment effect, and the time required by each method.

METHODS:

Reviewers were randomized to the role of data extractor or data verifier and were blinded to the study hypothesis. The frequency of errors associated with each method of data extraction was compared using the McNemar test. The data set produced by each method was used to calculate an efficacy estimate using standard meta-analytic techniques, and the time requirement of each method was compared using a paired t-test.
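
The statistical comparisons named above can be sketched in code. The following is a minimal, hypothetical Python example using scipy and statsmodels; the 2x2 table counts and timing values are illustrative placeholders, not the study's data, and the variable names are assumptions rather than anything taken from the paper.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Paired error outcomes as a 2x2 table:
# rows = single extraction (error / no error),
# cols = double extraction (error / no error).
# Counts below are made up for illustration.
table = np.array([[10, 25],
                  [8, 157]])

# McNemar's test compares the discordant cells (25 vs. 8),
# i.e., items where only one method made an error.
result = mcnemar(table, exact=True)
print(f"McNemar p-value: {result.pvalue:.3f}")

# Paired t-test on per-study extraction times (minutes),
# again with illustrative numbers.
time_single = np.array([35, 42, 28, 51, 39])
time_double = np.array([58, 66, 47, 75, 60])
t_stat, p_val = ttest_rel(time_single, time_double)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")
```

McNemar's test fits this design because the same items were extracted under both methods, so the error indicators are paired rather than independent; the paired t-test plays the analogous role for the timing data.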

RESULTS:

Single data extraction resulted in more errors than double data extraction (relative difference: 21.7%, P = .019). There was no substantial difference between methods in effect estimates for most outcomes. The average time spent on single data extraction was less than that spent on double data extraction (relative difference: 36.1%, P = .003).
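
For reference, a relative difference of this kind is typically computed as (x1 - x2) / reference x 100%; the abstract does not state which method's value serves as the reference (an assumption here), but on the usual reading a 21.7% relative difference in errors means the single-extraction error count was roughly 1.22 times the double-extraction count.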

CONCLUSION:

When single data extraction is used in systematic reviews, reviewers and readers should be mindful of the increased potential for errors and the impact these errors may have on effect estimates.

PMID: 16765272 [PubMed - indexed for MEDLINE]