Article

Looking at the posterior: accuracy and uncertainty of neural-network predictions

Journal

Machine Learning: Science and Technology

Publisher

IOP Publishing Ltd
DOI: 10.1088/2632-2153/ad0ab4

Keywords

deep learning; uncertainty quantification; Bayesian inference; neural networks; active learning


Abstract
Bayesian inference can quantify uncertainty in the predictions of neural networks using posterior distributions for model parameters and network output. By looking at these posterior distributions, one can separate the origin of uncertainty into aleatoric and epistemic contributions. One goal of uncertainty quantification is to inform on prediction accuracy. Here we show that prediction accuracy depends on both epistemic and aleatoric uncertainty in an intricate fashion that cannot be understood in terms of marginalized uncertainty distributions alone. How the accuracy relates to epistemic and aleatoric uncertainties depends not only on the model architecture, but also on the properties of the dataset. We discuss the significance of these results for active learning and introduce a novel acquisition function that outperforms common uncertainty-based methods. To arrive at our results, we approximated the posteriors using deep ensembles, for fully-connected, convolutional and attention-based neural networks.
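The separation into aleatoric and epistemic contributions mentioned in the abstract can be illustrated with the standard deep-ensemble decomposition (law of total variance). The sketch below is illustrative only, assuming a regression ensemble in which each member outputs a Gaussian predictive mean and variance; the toy data and variable names are ours, not the paper's:

```python
import numpy as np

# Toy stand-in for deep-ensemble output: M ensemble members each predict
# a mean mu and a variance sigma2 for N inputs (assumed Gaussian heads).
rng = np.random.default_rng(0)
M, N = 5, 3
mu = rng.normal(size=(M, N))             # per-member predictive means
sigma2 = rng.uniform(0.1, 0.5, (M, N))   # per-member predictive variances

# Law-of-total-variance decomposition of the ensemble's predictive variance:
aleatoric = sigma2.mean(axis=0)  # average data noise across members
epistemic = mu.var(axis=0)       # disagreement between member means
total = aleatoric + epistemic

print(aleatoric, epistemic, total)
```

Epistemic uncertainty shrinks as members agree (e.g. with more training data), while aleatoric uncertainty reflects noise intrinsic to the data; the paper's point is that prediction accuracy depends on both jointly, not on either marginal alone.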

