Journal
Journal of Statistical Planning and Inference
Volume 209, Pages 174-186
Publisher
Elsevier
DOI: 10.1016/j.jspi.2020.03.008
Keywords
Credible interval; Gibbs posterior; Generalized Bayesian inference; Model misspecification; Robustness
Funding
- U.S. National Science Foundation [DMS-1811802]
Abstract
The area under the receiver operating characteristic curve (AUC) serves as a summary of a binary classifier's performance. For inference on the AUC, a common modeling assumption is binormality, which restricts the distribution of the score produced by the classifier. However, this assumption introduces an infinite-dimensional nuisance parameter and may be restrictive in certain machine learning settings. To avoid making distributional assumptions, and to avoid the computational challenges of a fully nonparametric analysis, we develop a direct and model-free Gibbs posterior distribution for inference on the AUC. We establish the asymptotic concentration rate of the Gibbs posterior and present a strategy for tuning the learning rate so that the corresponding credible intervals achieve the nominal frequentist coverage probability. Simulation experiments and a real data analysis demonstrate the Gibbs posterior's strong performance compared to existing Bayesian methods.
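As a rough illustration of the construction described in the abstract, the sketch below builds a Gibbs posterior for the AUC on a grid. It assumes a squared-error empirical risk over case-control score pairs (whose minimizer is the empirical AUC, i.e., the Mann-Whitney statistic), a flat prior on [0, 1], and a fixed learning rate omega; the paper's exact loss, prior, and learning-rate tuning algorithm may differ, and all function names here are hypothetical.

```python
import numpy as np

def gibbs_posterior_auc(cases, controls, omega, grid_size=2001):
    """Gridded Gibbs posterior density for the AUC (illustrative sketch)."""
    x = np.asarray(cases, dtype=float)
    y = np.asarray(controls, dtype=float)
    # Pairwise indicators 1{x_i > y_j}; their mean is the empirical AUC
    # (the Mann-Whitney statistic).
    ind = (x[:, None] > y[None, :]).astype(float)
    auc_hat = ind.mean()
    n = x.size + y.size
    t = np.linspace(0.0, 1.0, grid_size)
    # Empirical risk R_n(t) = mean((1{x>y} - t)^2), which equals
    # var(ind) + (auc_hat - t)^2 and is minimized at t = auc_hat.
    risk = ind.var() + (auc_hat - t) ** 2
    log_post = -omega * n * risk          # flat prior on [0, 1] assumed
    log_post -= log_post.max()            # stabilize before exponentiating
    post = np.exp(log_post)
    dt = t[1] - t[0]
    post /= post.sum() * dt               # normalize to a density on the grid
    return t, post

def credible_interval(t, post, level=0.95):
    """Equal-tailed credible interval from the gridded posterior."""
    dt = t[1] - t[0]
    cdf = np.cumsum(post) * dt
    lo = t[np.searchsorted(cdf, (1.0 - level) / 2)]
    hi = t[np.searchsorted(cdf, 1.0 - (1.0 - level) / 2)]
    return lo, hi

# Toy data: cases tend to score higher than controls.
rng = np.random.default_rng(0)
cases = rng.normal(1.0, 1.0, size=50)     # classifier scores, positives
controls = rng.normal(0.0, 1.0, size=50)  # classifier scores, negatives
t, post = gibbs_posterior_auc(cases, controls, omega=1.0)
print(credible_interval(t, post))         # 95% credible interval for the AUC
```

Because this particular risk is quadratic in t, the sketch's posterior is a truncated Gaussian centered at the empirical AUC with spread governed by omega, which makes concrete why the learning rate must be calibrated for the credible intervals to attain nominal frequentist coverage.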