Article

The effect of missing data on design efficiency in repeated cross-sectional multi-period two-arm parallel cluster randomized trials

Journal

BEHAVIOR RESEARCH METHODS
Volume 53, Issue 4, Pages 1731-1745

Publisher

SPRINGER
DOI: 10.3758/s13428-020-01529-7

Keywords

Cluster randomization; Dropout; Intermittently missing observations; Efficiency


This paper investigates the impact of missing data on design efficiency in multi-period cluster randomized trials. It finds that efficiency increases with the number of subjects per day and the number of weeks, but decreases with intermittently missing observations and dropout, especially when there are few subjects per day and many weeks.
The reduced efficiency of the cluster randomized trial design may be compensated for by implementing a multi-period design. The trial then becomes longitudinal, with a risk of intermittently missing observations and dropout. This paper studies the effect of missing data on design efficiency in trials where the periods are the days of the week and clusters are followed for at least one week. A multilevel model with a decaying correlation structure is used to relate the outcome to period and treatment condition, and the variance of the treatment effect estimator is used to measure efficiency. When there is no data loss, efficiency increases with an increasing number of subjects per day and number of weeks. Different weekly measurement schemes are used to evaluate the impact of planned missing data designs: the loss of efficiency due to measuring on fewer days is largest when there are few subjects per day and few weeks. Dropout is modeled by the Weibull survival function. The loss of efficiency due to dropout increases when more clusters drop out during the course of the trial, especially if the risk of dropout is largest at the beginning of the trial; the largest loss is observed for few subjects per day and a large number of weeks. An example on the effect of waiting room environments in reducing stress in dental care shows how different design options can be compared. An R Shiny app allows researchers to interactively explore various design options and to choose the best design for their trial.
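The abstract models cluster dropout with a Weibull survival function, whose shape parameter controls whether dropout risk is concentrated at the beginning or the end of the trial. The following sketch (in Python rather than the paper's R; the trial length, retention target, and parameter values are illustrative assumptions, not values from the paper) shows how the fraction of clusters still enrolled each week can be computed for early, constant, and late dropout patterns:

```python
import math

def weibull_survival(t, scale, shape):
    """Fraction of clusters still in the trial at time t under a
    Weibull survival function S(t) = exp(-(t / scale) ** shape)."""
    return math.exp(-((t / scale) ** shape))

# Illustrative assumptions (not from the paper): a 12-week trial in
# which 30% of clusters have dropped out by the final week, so that
# S(12) = 0.7 for every dropout pattern compared.
weeks = 12
final_retention = 0.7

# shape < 1: dropout hazard highest at the start of the trial
# shape = 1: constant dropout hazard (exponential survival)
# shape > 1: dropout hazard highest at the end of the trial
for shape in (0.5, 1.0, 2.0):
    # Solve S(weeks) = final_retention for the scale parameter.
    scale = weeks / (-math.log(final_retention)) ** (1.0 / shape)
    retained = [weibull_survival(t, scale, shape) for t in range(1, weeks + 1)]
    print(f"shape={shape}:", [round(r, 3) for r in retained])
```

With shape 0.5 the retained fraction falls fastest in the early weeks, which is the pattern the abstract identifies as causing the largest efficiency loss; all three curves end at the same 70% retention, so only the timing of dropout differs.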


