Article

Under-Fitting and Over-Fitting: The Performance of Bayesian Model Selection and Fit Indices in SEM

Publisher

Routledge Journals, Taylor & Francis Ltd
DOI: 10.1080/10705511.2023.2280952

Keywords

Approximate model fit; Bayesian model comparison; BIC; DIC

Abstract

We extended current knowledge by examining the performance of several Bayesian model fit and comparison indices through a simulation study using confirmatory factor analysis. Our goal was to determine whether commonly implemented Bayesian indices can detect specification errors, and in particular whether they differ in their ability to detect under-fitting versus over-fitting. We examined a conventional Bayesian fit index (the posterior predictive p-value), approximate Bayesian fit indices (Bayesian RMSEA, CFI, and TLI), and model comparison indices (BIC and DIC), varying the type and severity of model mis-specification, sample size, and priors. Based on the results, we provide practical advice for applied researchers on how to assess and compare models using these common indices implemented in the Bayesian framework.
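
For readers unfamiliar with the indices named in the abstract, the deviance information criterion (DIC) and the posterior predictive p-value (PPP) have standard definitions that can be computed directly from posterior draws. The Python sketch below illustrates only those textbook formulas; it is not the authors' simulation code, and the log-likelihood and discrepancy arrays are hypothetical stand-ins for output from a fitted Bayesian CFA.

import numpy as np

def dic(loglik_draws, loglik_at_posterior_mean):
    # Deviance Information Criterion from S posterior draws.
    # loglik_draws: shape (S,), log p(y | theta_s) for each posterior draw s.
    # loglik_at_posterior_mean: scalar, log p(y | posterior mean of theta).
    dev_draws = -2.0 * np.asarray(loglik_draws)     # deviance at each draw
    dev_at_mean = -2.0 * loglik_at_posterior_mean   # deviance at the posterior mean
    p_d = dev_draws.mean() - dev_at_mean            # effective number of parameters
    return dev_at_mean + 2.0 * p_d                  # DIC = D(theta_bar) + 2 * p_D

def posterior_predictive_p(discrepancy_rep, discrepancy_obs):
    # Posterior predictive p-value: Pr(T(y_rep, theta) >= T(y, theta) | y).
    # discrepancy_rep[s]: discrepancy of data replicated under draw s.
    # discrepancy_obs[s]: discrepancy of the observed data under the same draw s.
    # Values near 0.5 suggest adequate fit; values near 0 suggest misfit.
    rep = np.asarray(discrepancy_rep)
    obs = np.asarray(discrepancy_obs)
    return float(np.mean(rep >= obs))

# Hypothetical posterior summaries standing in for a fitted Bayesian CFA.
rng = np.random.default_rng(0)
ll_draws = rng.normal(-1500.0, 5.0, size=2000)
print("DIC:", dic(ll_draws, loglik_at_posterior_mean=-1495.0))
print("PPP:", posterior_predictive_p(rng.chisquare(50, 2000),
                                     rng.chisquare(52, 2000)))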
