Article

Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison With a Randomized Experiment

Journal

AMERICAN JOURNAL OF EVALUATION
Volume 35, Issue 3, Pages 311-327

Publisher

SAGE PUBLICATIONS INC
DOI: 10.1177/1098214014527337

Keywords

interrupted time series; educational evaluation; within-study comparison; randomized clinical trial

Funding

  1. Directorate for Education and Human Resources, Division of Graduate Education [1228866] — Funding Source: National Science Foundation

Abstract

Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment but a nonequivalent comparison group that is assessed at six time points before treatment. We estimate program effects with and without matching of the comparison schools, and we also systematically vary the number of pretest time points in the analysis. CITS designs produce impact estimates that are extremely close to the experimental benchmarks and, as implemented here, do so equally well with and without matching. Adding time points provides an advantage so long as the pretest trend differences in the treatment and comparison groups are correctly modeled. Otherwise, more time points can increase bias.
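The CITS logic summarized above — fit each group's pretest trend, project it into the post-treatment period, and take the difference in the two groups' deviations from their own projections — can be sketched with synthetic data. This is an illustrative toy, not the paper's actual estimation model; all numbers, function names, and the single post-treatment time point are assumptions for demonstration.

```python
# Minimal comparative interrupted time series (CITS) sketch.
# Each group's six pretest means get a linear OLS trend; the impact
# estimate is the treatment group's deviation from its projected trend
# minus the comparison group's deviation from its own. Synthetic data.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for one group's pretest series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def cits_impact(pre_times, treat_pre, comp_pre, post_time, treat_post, comp_post):
    """CITS estimate: difference in deviations from each group's projected trend."""
    t_slope, t_int = linear_fit(pre_times, treat_pre)
    c_slope, c_int = linear_fit(pre_times, comp_pre)
    treat_dev = treat_post - (t_slope * post_time + t_int)
    comp_dev = comp_post - (c_slope * post_time + c_int)
    return treat_dev - comp_dev

# Six pretest time points, echoing the study's design.
pre = [1, 2, 3, 4, 5, 6]
treat = [50.0, 51.0, 52.0, 53.0, 54.0, 55.0]   # treatment schools, slope 1.0
comp = [48.0, 48.5, 49.0, 49.5, 50.0, 50.5]    # comparison schools, slope 0.5

impact = cits_impact(pre, treat, comp,
                     post_time=7, treat_post=60.0, comp_post=51.0)
print(round(impact, 2))  # treatment is 4 above its trend; comparison is on trend
```

Note that this sketch assumes a correctly specified linear pretest trend for both groups — exactly the condition the abstract flags: if the groups' pretest trend difference is mis-modeled, projecting more time points forward can amplify rather than reduce bias.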
