Health Serv Res. 2015 Aug;50(4):1211-35. doi: 10.1111/1475-6773.12270. Epub 2014 Dec 11.

Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.

Author information

1. University of Michigan School of Public Health, 1415 Washington Heights, Ann Arbor, MI.
2. Veterans Affairs Boston Health Care System, US Department of Veterans Affairs, Boston University School of Public Health, Boston, MA.
3. Department of Surgery, University of Michigan School of Medicine, Center for Healthcare Outcomes and Policy, Ann Arbor, MI.

Abstract

OBJECTIVE:

To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models.

DATA SOURCES:

Process-of-care quality data from Hospital Compare between 2003 and 2009.

STUDY DESIGN:

We performed a Monte Carlo simulation experiment to estimate the effect of a hypothetical policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; or (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied in the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value, and we evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis.
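
To make the design concrete, the following Python sketch reproduces scenario (2) in miniature: when treatment assignment depends on pre-intervention levels and scores revert toward the mean, a naive two-period DID estimate becomes biased. All parameter values (score mean of 70, reversion rate, noise scale, sample sizes) are illustrative assumptions, not the paper's actual simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mad_of_did(n_units=200, n_reps=1000, true_effect=0.0, select_on_level=False):
    """Mean absolute deviation of a naive two-period DID estimate from truth."""
    errors = []
    for _ in range(n_reps):
        pre = rng.normal(70.0, 10.0, n_units)  # pre-period quality score
        if select_on_level:
            # Scenario (2): treatment probability rises with pre-period level
            p_treat = 1.0 / (1.0 + np.exp(-(pre - 70.0) / 10.0))
        else:
            # Scenario (1): treatment unrelated to pre-period performance
            p_treat = np.full(n_units, 0.5)
        treated = rng.random(n_units) < p_treat
        # Mean reversion: units regress halfway toward the grand mean of 70
        post = (70.0 + 0.5 * (pre - 70.0) + true_effect * treated
                + rng.normal(0.0, 5.0, n_units))
        did = ((post[treated].mean() - pre[treated].mean())
               - (post[~treated].mean() - pre[~treated].mean()))
        errors.append(did - true_effect)
    return float(np.mean(np.abs(errors)))

print(mad_of_did(select_on_level=False))  # small: DID close to unbiased here
print(mad_of_did(select_on_level=True))   # much larger: selection-induced bias
```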

PRINCIPAL FINDINGS:

Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators.
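
The permutation test mentioned above can be sketched as follows: shuffle the treatment labels across units and ask how often a permuted DID statistic is at least as extreme as the observed one. This is a minimal illustration under assumed toy data and effect size, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def did_stat(pre, post, treated):
    # Two-period difference-in-differences of group means
    return ((post[treated].mean() - pre[treated].mean())
            - (post[~treated].mean() - pre[~treated].mean()))

def permutation_pvalue(pre, post, treated, n_perm=2000):
    """Two-sided permutation p-value for the DID statistic."""
    observed = did_stat(pre, post, treated)
    draws = np.array([did_stat(pre, post, rng.permutation(treated))
                      for _ in range(n_perm)])
    return float(np.mean(np.abs(draws) >= np.abs(observed)))

# Toy data with a true effect of 2 points (values are illustrative)
pre = rng.normal(70.0, 10.0, 100)
treated = rng.random(100) < 0.5
post = pre + 2.0 * treated + rng.normal(0.0, 5.0, 100)
print(permutation_pvalue(pre, post, treated))
```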

CONCLUSIONS:

When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis.
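
For the recommended clustered-standard-error inference, a minimal regression sketch is shown below. The abstract does not specify software; statsmodels is assumed here as the tooling, and the two-period hospital panel is synthetic. The coefficient on the treated-by-post interaction is the DID estimate, and clustering at the hospital level accounts for correlation within units over time.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical two-period hospital panel (all names and values illustrative)
n = 100
hospital = np.repeat(np.arange(n), 2)                  # each hospital seen twice
post = np.tile([0, 1], n)                              # pre/post indicator
treated = np.repeat((rng.random(n) < 0.5).astype(int), 2)
hosp_shock = np.repeat(rng.normal(0.0, 5.0, n), 2)     # hospital-level shock
y = 70.0 + hosp_shock + 1.5 * treated * post + rng.normal(0.0, 2.0, 2 * n)
df = pd.DataFrame({"y": y, "hospital": hospital,
                   "treated": treated, "post": post})

# DID regression with cluster-robust standard errors at the hospital level
fit = smf.ols("y ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hospital"]})
print(fit.params["treated:post"], fit.bse["treated:post"])
```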

KEYWORDS:

Hospitals; econometrics; health economics; health policy; quality of care

PMID: 25495529
PMCID: PMC4545355
DOI: 10.1111/1475-6773.12270
[Indexed for MEDLINE]