Variational Inference: A Review for Statisticians

Journal

JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
Volume 112, Issue 518, Pages 859-877

Publisher

TAYLOR & FRANCIS INC
DOI: 10.1080/01621459.2017.1285773

Keywords

Algorithms; Computationally intensive methods; Statistical computing

Funding

  1. Division of Information & Intelligent Systems
  2. Directorate for Computer & Information Science & Engineering [1502780], Funding Source: National Science Foundation

Abstract

One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this article, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family closest to the target density. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this article is to catalyze statistical research on this class of algorithms. Supplementary materials for this article are available online.
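To make the abstract's idea concrete (positing a family of densities and finding the member closest in Kullback-Leibler divergence to the target), here is a small self-contained sketch, not taken from the paper itself. It uses a toy 2-D correlated Gaussian as the "posterior" and a mean-field (diagonal-covariance) Gaussian family as the variational family; the KL-optimal mean-field member matches the mean and sets each variance to the reciprocal of the corresponding diagonal entry of the target's precision matrix, a standard closed-form result. The function and variable names are illustrative choices, not from the paper.

```python
import math

def kl_gauss_2d(mu_q, Sq, mu_p, Sp):
    """Closed-form KL(q || p) between two 2-D Gaussians.

    mu_q, mu_p are length-2 means; Sq, Sp are 2x2 covariance matrices
    given as nested lists.
    """
    det_p = Sp[0][0] * Sp[1][1] - Sp[0][1] * Sp[1][0]
    det_q = Sq[0][0] * Sq[1][1] - Sq[0][1] * Sq[1][0]
    # Inverse of the target covariance (the precision matrix).
    inv_p = [[ Sp[1][1] / det_p, -Sp[0][1] / det_p],
             [-Sp[1][0] / det_p,  Sp[0][0] / det_p]]
    # trace(inv_p @ Sq)
    tr = sum(inv_p[i][j] * Sq[j][i] for i in range(2) for j in range(2))
    d = [mu_p[i] - mu_q[i] for i in range(2)]
    quad = sum(d[i] * inv_p[i][j] * d[j] for i in range(2) for j in range(2))
    return 0.5 * (tr + quad - 2 + math.log(det_p / det_q))

# Target "posterior": a correlated 2-D Gaussian.
mu_p = [0.0, 0.0]
Sp = [[1.0, 0.8],
      [0.8, 1.0]]

# KL-optimal mean-field member: variance 1 / (Sp^{-1})_{ii} per coordinate.
det_p = Sp[0][0] * Sp[1][1] - Sp[0][1] * Sp[1][0]
opt_var = det_p / Sp[1][1]          # = 1 / (Sp^{-1})_{00}; symmetric here
q_opt = [[opt_var, 0.0], [0.0, opt_var]]

# A tempting but suboptimal alternative: match the marginal variances.
q_marg = [[Sp[0][0], 0.0], [0.0, Sp[1][1]]]

kl_opt = kl_gauss_2d(mu_p, q_opt, mu_p, Sp)
kl_marg = kl_gauss_2d(mu_p, q_marg, mu_p, Sp)
print(kl_opt, kl_marg)  # the optimal mean-field q attains strictly smaller KL
```

The comparison also illustrates a well-known property of KL(q||p)-based mean-field VI that the paper discusses: the optimal factorized approximation underestimates the target's marginal variances (here 0.36 versus the true marginal variance of 1).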

Authors

David M. Blei; Alp Kucukelbir; Jon D. McAuliffe
