Journal
TEST
Volume 31, Issue 3, Pages 771-789
Publisher
SPRINGER
DOI: 10.1007/s11749-022-00803-4
Keywords
p-Value calibration; Bayes factor; Linear model; Likelihood ratio; Adaptive alpha; PBIC
Funding
- NIH [U54CA096300, P20GM103475, R25MD010399]
Abstract
We propose an adaptive α (type I error) that decreases as the information grows, for hypothesis tests comparing nested linear models. A simpler adaptation was presented in Perez and Pericchi (Stat Probab Lett 85:20-24, 2014) for general i.i.d. models. The calibration proposed in this paper can be interpreted as a Bayes-non-Bayes compromise: a simple translation of a Bayes factor into frequentist terms that leads to statistical consistency and, most importantly, is a step toward statistics that promotes replicable scientific findings.