Article

Variationally Inferred Sampling through a Refined Bound

Journal

ENTROPY
Volume 23, Issue 1, Pages: -

Publisher

MDPI
DOI: 10.3390/e23010123

Keywords

variational inference; MCMC; stochastic gradients; neural networks

Funding

  1. MINECO [MTM2017-86875-C3-1-R]
  2. AXA-ICMAT Chair in Adversarial Risk Analysis
  3. Severo Ochoa Excellence Program [CEX2019-000904-S]
  4. National Science Foundation [DMS-1638521]
  5. BBVA Foundation project
  6. [FPU16-05034]

Abstract
In this work, a framework to boost the efficiency of Bayesian inference in probabilistic models is introduced by embedding a Markov chain sampler within a variational posterior approximation. We call this framework the refined variational approximation. Its strengths are its ease of implementation and the automatic tuning of sampler parameters, leading to a faster mixing time through automatic differentiation. Several strategies to approximate the evidence lower bound (ELBO) are also introduced. Its efficient performance is showcased experimentally using state-space models for time-series data, a variational encoder for density estimation, and a conditional variational autoencoder as a deep Bayes classifier.
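The paper's own implementation is not reproduced here, but the core idea of "refining" variational samples with an embedded Markov chain can be illustrated with a minimal sketch. All concrete choices below are assumptions for illustration: a 1-D Gaussian toy posterior, a Gaussian initial approximation q0, and an unadjusted Langevin sampler as the embedded chain. Because every Langevin step is a differentiable function of the variational parameters and the injected noise, the whole chain could in principle be tuned by automatic differentiation, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized log-posterior: a Gaussian N(2, 0.5^2) (illustrative choice).
def grad_log_p(z):
    return -(z - 2.0) / 0.25

# Initial variational approximation q0 = N(mu, exp(log_sigma)^2).
mu, log_sigma = 0.0, 0.0
step, T, S = 0.05, 10, 2000  # Langevin step size, chain length, sample count

# Reparameterized draws from q0 (the "plain" variational samples) ...
eps = rng.standard_normal(S)
z0 = mu + np.exp(log_sigma) * eps

# ... each refined by T unadjusted Langevin steps toward the posterior.
z = z0.copy()
for _ in range(T):
    z = z + step * grad_log_p(z) + np.sqrt(2.0 * step) * rng.standard_normal(S)
zT = z

# The refined samples sit far closer to the posterior mean (2.0) than z0 does.
print("plain mean:", z0.mean(), "refined mean:", zT.mean())
```

Since the refinement reuses the reparameterization trick, gradients of any Monte Carlo objective (such as an ELBO estimate) with respect to `mu` and `log_sigma` flow through the chain, which is what allows the sampler parameters to be tuned automatically.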

