Behav Modif. 2016 Nov;40(6):874-900. Epub 2016 Apr 28.

Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.

Author information

1. State University of New York, Albany, USA; The City University of New York, New York City, USA. mmoeyaert@albany.edu
2. University of Illinois at Chicago, USA.
3. The City University of New York, New York City, USA.

Abstract

Single-case experimental designs (SCEDs) have been used increasingly in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED research is to display data graphically, which requires subsequent users to extract the data, either manually or with data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level; little is known about the coding of individual data points. We focused on four software programs that can be used for this purpose (Ungraph, DataThief, WebPlotDigitizer, and XYit) and examined the reliability of numeric coding, the validity of the extracted values compared with the real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program but depend on the individual single-case study graphs. The programs differed in usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was rated unacceptable, and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only free program. Consequently, WebPlotDigitizer proved the best option in terms of usability, data retrieval time, and cost, although the usability scores of Ungraph were also strong.
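To make the reliability and validity comparisons concrete, the minimal sketch below (not from the paper; all data values and variable names are hypothetical) shows one common way to quantify inter-observer agreement between two coders' extracted data points and each coder's agreement with the true published values, using correlation and mean absolute error.

```python
# Hedged illustration: agreement metrics for graph-extracted data.
# Values are invented for demonstration; the paper's actual analyses may differ.
import numpy as np

# Hypothetical y-values extracted from the same SCED graph by two coders,
# plus the true (published) values.
coder_a = np.array([2.0, 3.1, 4.9, 6.0, 7.2])
coder_b = np.array([2.1, 3.0, 5.0, 5.9, 7.1])
true_y  = np.array([2.0, 3.0, 5.0, 6.0, 7.0])

# Inter-observer reliability: correlation between the two coders' extractions.
reliability = np.corrcoef(coder_a, coder_b)[0, 1]

# Validity: each coder's mean absolute error against the true values.
validity_a = np.mean(np.abs(coder_a - true_y))
validity_b = np.mean(np.abs(coder_b - true_y))

print(f"Inter-observer correlation: {reliability:.3f}")
print(f"Coder A mean absolute error: {validity_a:.3f}")
print(f"Coder B mean absolute error: {validity_b:.3f}")
```

Point-level metrics like these, rather than aggregate effect-size comparisons, correspond to the individual-data-point focus the study describes.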

KEYWORDS:

inter-observer agreement; intra-observer agreement; multiple-baseline design; reliability; usability; validity

PMID: 27126988
DOI: 10.1177/0145445516645763
[Indexed for MEDLINE]
