4.1 Article

Increasing the replicability for linear models via adaptive significance levels

Journal

TEST
Volume 31, Issue 3, Pages 771-789

Publisher

SPRINGER
DOI: 10.1007/s11749-022-00803-4

Keywords

p-Value calibration; Bayes factor; Linear model; Likelihood ratio; Adaptive alpha; PBIC

Funding

  1. NIH [U54CA096300, P20GM103475, R25MD010399]


We put forward an adaptive α (type I error) that decreases as the information grows, for hypothesis tests comparing nested linear models. A less elaborate adaptation was presented in Perez and Pericchi (Stat Probab Lett 85:20-24, 2014) for general i.i.d. models. The calibration proposed in this paper may be interpreted as a Bayes/non-Bayes compromise: a simple translation of a Bayes factor into frequentist terms that leads to statistical consistency and, most importantly, is a step toward statistics that promotes replicable scientific findings.
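Since the abstract only states that the level decreases as the information grows, a minimal numerical sketch may help. The snippet below implements the simpler i.i.d. calibration of Perez and Pericchi (2014) in the form commonly quoted for it, α(n) = α·sqrt( n₀(log n₀ + χ²_α(q)) / ( n(log n + χ²_α(q)) ) ); the function name adaptive_alpha, the reference level alpha_ref = 0.05, and the reference sample size n_ref = 100 are illustrative assumptions, not values from the paper, and this is not the linear-model calibration the article itself develops.

```python
import math
from scipy.stats import chi2

def adaptive_alpha(n, q=1, alpha_ref=0.05, n_ref=100):
    """Significance level that shrinks as the sample size grows.

    Sketch of the adaptive-alpha idea: alpha_ref is the level judged
    appropriate at a reference sample size n_ref, and q is the number
    of extra parameters in the larger of the two nested models.
    """
    # Chi-square critical value of the test at the reference level.
    c = chi2.ppf(1.0 - alpha_ref, df=q)
    num = n_ref * (math.log(n_ref) + c)
    den = n * (math.log(n) + c)
    return alpha_ref * math.sqrt(num / den)

# The level decays with n: evidence that is "barely significant" at
# n = 100 is held to a stricter standard at n = 100000.
for n in (100, 1000, 10000, 100000):
    print(n, round(adaptive_alpha(n), 5))
```

At n = n_ref the rule returns alpha_ref itself, and the level then decays roughly like 1/sqrt(n log n), which is the kind of frequentist translation of a Bayes-factor rejection region that the abstract alludes to.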
