Journal
NEUROCOMPUTING
Volume 471, Pages 260-274
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2020.09.076
Keywords
Bayesian neural networks; Approximate inference; Alpha divergences; Adversarial variational Bayes
Funding
- Spanish Ministry of Economy [SEV-2015-0554-16-4]
- Spanish Plan Nacional I+D+i [TIN2016-76406-P]
- [PID2019-106827 GB-I00/AEI/10.13039/501100011033]
Abstract
Neural networks are state-of-the-art models for machine learning problems. They are often trained via back-propagation to find a value of the weights that correctly predicts the observed data. Back-propagation has shown good performance in many applications; however, it cannot easily output an estimate of the uncertainty in the predictions made. Estimating this uncertainty is a critical aspect with important applications. One method to obtain this information consists of following a Bayesian approach to obtain a posterior distribution of the model parameters. This posterior distribution summarizes which parameter values are compatible with the observed data. However, the posterior is often intractable and has to be approximated. Several methods have been devised for this task. Here, we propose a general method for approximate Bayesian inference that is based on minimizing alpha-divergences and that allows for flexible approximate distributions. We call this method adversarial alpha-divergence minimization (AADM). We have evaluated AADM in the context of Bayesian neural networks. Extensive experiments show that it may lead to better results in terms of the test log-likelihood, and sometimes in terms of the squared error, in regression problems. In classification problems, AADM gives competitive results. (C) 2020 Elsevier B.V. All rights reserved.
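To make the abstract's central idea concrete, the sketch below is a minimal, hedged illustration (not the authors' AADM code) of the per-datapoint alpha-divergence likelihood term that appears in black-box alpha-divergence minimization: a Monte Carlo estimate of (1/alpha) * log E_{w~q}[p(y|x, w)^alpha] for a toy one-weight Bayesian linear model. All names (`alpha_term`, `log_gauss`) and parameter choices are illustrative assumptions.

```python
import numpy as np

def log_gauss(y, mean, sigma):
    """Log density of y under a Gaussian N(mean, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((y - mean) / sigma) ** 2

def alpha_term(x, y, q_mean, q_std, alpha=0.5, n_samples=500, noise=0.1):
    """Monte Carlo estimate of (1/alpha) * log E_{w~q}[ p(y | x, w)^alpha ]
    for a toy model y ~ N(w * x, noise^2) with q(w) = N(q_mean, q_std^2).

    As alpha -> 0 this tends to the standard variational term E_q[log p(y|x, w)];
    larger alpha interpolates toward mass-covering, EP-like behaviour.
    This is an illustrative sketch, not the paper's AADM objective."""
    rng = np.random.default_rng(0)                 # fixed seed: reproducible sketch
    w = rng.normal(q_mean, q_std, size=n_samples)  # samples from q(w)
    ll = log_gauss(y, w * x, noise)                # per-sample log-likelihood
    a = alpha * ll
    m = a.max()                                    # numerically stable log-mean-exp
    return float((m + np.log(np.mean(np.exp(a - m)))) / alpha)
```

For fixed Monte Carlo samples, the estimate is nondecreasing in alpha (a power-mean inequality), which matches the interpolation between the variational (KL) limit and EP-style behaviour that alpha-divergence methods exploit.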
Authors