Journal
MULTIVARIATE BEHAVIORAL RESEARCH
Volume 57, Issue 4, Pages 603-619
Publisher
ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD
DOI: 10.1080/00273171.2021.1889946
Keywords
Mixed-effects models; crossed random effects; random slopes; model selection; model averaging; ML; REML
Model selection and model averaging were found to yield similar standard-error (SE) bias for fixed effects. Sample size and the variance of the random slopes explained the SE bias of fixed effects, and the choice of estimator (ML vs. REML) affected SE bias differently.
A good deal of experimental research is characterized by the presence of random effects for subjects and items. A standard modeling approach that accommodates these sources of variability is the mixed-effects model (MEM) with crossed random effects. However, under-parameterizing or over-parameterizing the random structure of a MEM biases the estimated standard errors (SEs) of the fixed effects. In this simulation study, we examined two different but complementary perspectives: model selection with likelihood-ratio tests, AIC, and BIC; and model averaging with Akaike weights. Results showed that the rate of true model selection was constant across the strategies examined (including the ML and REML estimators). However, sample size and the variance of the random slopes explained both true model selection and the SE bias of the fixed effects. No relevant differences in SE bias were found between model selection and model averaging. Sample size and the variance of the random slopes interacted with the estimator in explaining SE bias. Only the within-subjects effect showed significant underestimation of SEs, with smaller numbers of items and larger item random slopes. SE bias was higher for ML than for REML, but the variability of the SE bias showed the opposite pattern. Such variability can translate into high rates of unacceptable bias across replications.
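The model-averaging perspective described above relies on Akaike weights, which rescale candidate models' AIC values into relative probabilities. A minimal sketch of that computation, using hypothetical AIC values for three candidate random-effects structures (the paper's actual models and values are not reproduced here):

```python
import math

def akaike_weights(aics):
    """Convert a list of AIC values into Akaike weights.

    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC).
    """
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three candidate random-effects structures
weights = akaike_weights([1040.2, 1038.5, 1045.1])
```

The weights sum to one, and the model with the lowest AIC receives the largest weight; model-averaged estimates are then weighted combinations of the per-model estimates.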
Authors