Article

Assessment and Adjustment of Approximate Inference Algorithms Using the Law of Total Variance

Journal

Journal of Computational and Graphical Statistics
Volume 30, Issue 4, Pages 977-990

Publisher

Taylor & Francis Inc.
DOI: 10.1080/10618600.2021.1880921

Keywords

Deep neural network; Gaussian process; Law of total variance; Likelihood-free inference; Markov chain Monte Carlo; Variational approximation

Funding

  1. German Research Foundation (DFG) [KL 3037/1-1]

Summary

The article introduces a moment-based alternative for assessing and adjusting approximate inference methods by relating prior and posterior expectations and covariances; approximate inferences are adjusted so that the correct prior-to-posterior relationships hold. The method is illustrated with an auxiliary model in likelihood-free inference, corrections to variational Bayes approximations in a deep neural network GLMM, and a deep neural network surrogate for approximating Gaussian process regression predictive inference.

Abstract
A common method for assessing the validity of Bayesian sampling or approximate inference methods makes use of simulated data replicates for parameters drawn from the prior. Under continuity assumptions, quantiles of functions of the simulated parameter values under the corresponding posterior distributions are uniformly distributed. Checking for uniformity when a posterior density is approximated numerically provides a diagnostic for algorithm validity. Furthermore, adjustments to achieve uniformity can improve the quality of approximate inference methods. The present article develops a moment-based alternative to the conventional checking and adjustment methods based on quantiles. The new approach relates prior and posterior expectations and covariances through the tower property of conditional expectation and the law of total variance. For adjustment, approximate inferences are modified so that the correct prior-to-posterior relationships hold. We illustrate the method in three examples. The first uses an auxiliary model in a likelihood-free inference problem. The second considers corrections for variational Bayes approximations in a deep neural network generalized linear mixed model. Our final application considers a deep neural network surrogate for approximating Gaussian process regression predictive inference. Supplementary files for this article are available online.
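
The moment identities behind this check are straightforward to verify by simulation. The following is a minimal sketch, not the authors' implementation: parameters are drawn from the prior, data are simulated given each draw, and the approximate posterior means and variances are compared against the prior moments via the tower property, E[E(theta|y)] = E(theta), and the law of total variance, Var(theta) = E[Var(theta|y)] + Var[E(theta|y)]. The conjugate normal-normal model and all names below are illustrative assumptions, chosen so the exact posterior is available in closed form.

    # A minimal sketch of the moment-based diagnostic; the model and all
    # names are illustrative, not taken from the article's code.
    import numpy as np

    rng = np.random.default_rng(0)
    m0, v0 = 0.0, 4.0   # prior mean and variance of theta
    sigma2 = 1.0        # known observation variance
    R = 50_000          # number of simulated replicates

    # Simulate (theta, y) pairs from the joint model.
    theta = rng.normal(m0, np.sqrt(v0), size=R)
    y = rng.normal(theta, np.sqrt(sigma2))

    # Approximate posterior moments for each replicate. Here we use the
    # exact conjugate update; in practice these would come from VB, a
    # likelihood-free method, or a neural surrogate.
    v_post = 1.0 / (1.0 / v0 + 1.0 / sigma2)
    post_mean = v_post * (m0 / v0 + y / sigma2)
    post_var = np.full(R, v_post)

    # Tower property: the average posterior mean should match the prior mean.
    print("prior mean:", m0, "| mean of posterior means:", post_mean.mean())

    # Law of total variance: Var(theta) = E[Var(theta|y)] + Var[E(theta|y)].
    print("prior variance:", v0, "| E[post var] + Var[post mean]:",
          post_var.mean() + post_mean.var())

With an exact posterior, both printed comparisons agree up to Monte Carlo error. An overconfident approximation makes E[Var(theta|y)] + Var[E(theta|y)] fall below the prior variance, and the adjustment described in the article modifies the approximate inferences so that these identities hold; unlike the quantile-based check, this requires only the approximate posterior means and variances for each replicate.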
