Article

A unified performance analysis of likelihood-informed subspace methods

Journal

BERNOULLI
Volume 28, Issue 4, Pages 2788-2815

Publisher

International Statistical Institute
DOI: 10.3150/21-BEJ1437

Keywords

Dimension reduction; approximation error; likelihood-informed subspace; Monte Carlo estimation

Funding

  1. MOE Academic Research Funds [R-146-000-292-114]
  2. Australian Research Council [DP210103092]


The likelihood-informed subspace (LIS) method provides a way to reduce the dimensionality of high-dimensional probability distributions for Bayesian inference. This study establishes a unified framework for analyzing the accuracy of LIS-based dimension reduction techniques and their integration with sampling methods, establishing error bounds that clarify when and why the LIS method is effective.
The likelihood-informed subspace (LIS) method offers a viable route to reducing the dimensionality of high-dimensional probability distributions arising in Bayesian inference. LIS identifies an intrinsic low-dimensional linear subspace where the target distribution differs the most from some tractable reference distribution. Such a subspace can be identified using the leading eigenvectors of a Gram matrix of the gradient of the log-likelihood function. Then, the original high-dimensional target distribution is approximated through various forms of marginalization of the likelihood function, in which the approximated likelihood only has support on the intrinsic low-dimensional subspace. This approximation enables the design of inference algorithms that can scale sub-linearly with the apparent dimensionality of the problem. Intuitively, the accuracy of the approximation, and hence the performance of the inference algorithms, are influenced by three factors: the dimension truncation error in identifying the subspace, the Monte Carlo error in estimating the Gram matrices, and the Monte Carlo error in constructing marginalizations. This work establishes a unified framework to analyze each of these three factors and their interplay. Under mild technical assumptions, we establish error bounds for a range of existing dimension reduction techniques based on the principle of LIS. Our error bounds also provide useful insights into the accuracy of these methods. In addition, we analyze the integration of LIS with sampling methods such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). We also demonstrate the applicability of our analysis on a linear inverse problem with Gaussian prior, which shows that all the estimates can be dimension-independent if the prior covariance is a trace-class operator. Finally, we demonstrate various aspects of our theoretical claims on two nonlinear inverse problems.
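The construction described in the abstract — estimating the gradient Gram matrix by Monte Carlo and taking its leading eigenvectors — can be sketched as follows. This is an illustrative toy, not the paper's implementation: it assumes a hypothetical linear-Gaussian inverse problem (forward map `G`, standard Gaussian prior, Gaussian noise), for which the Gram matrix has exactly the rank of `G` and the eigenvalue spectrum exhibits the sharp decay that makes dimension truncation accurate.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # apparent (high) dimension
r = 3   # intrinsic dimension of the hypothetical problem

# Hypothetical linear-Gaussian setup: y = G x + noise, prior x ~ N(0, I).
# G has rank r, so the likelihood is informative in only r directions.
G = rng.standard_normal((r, d))
x_true = rng.standard_normal(d)
sigma = 0.1
y = G @ x_true + sigma * rng.standard_normal(r)

def grad_log_likelihood(x):
    # For Gaussian noise, grad log L(x) = G^T (y - G x) / sigma^2.
    return G.T @ (y - G @ x) / sigma**2

# Monte Carlo estimate of the Gram matrix H = E[ grad grad^T ],
# with the expectation taken over prior samples.
N = 2000
prior_samples = rng.standard_normal((N, d))
H = sum(np.outer(g, g) for g in map(grad_log_likelihood, prior_samples)) / N

# The leading eigenvectors of H span the estimated likelihood-informed
# subspace; eigh returns eigenvalues in ascending order, so reverse them.
eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(eigvals)[::-1]
U_r = eigvecs[:, order[:r]]  # orthonormal basis of the r-dimensional LIS

# The spectrum drops sharply after the first r eigenvalues, which is what
# justifies truncating the likelihood to the subspace spanned by U_r.
print(np.round(eigvals[order[: r + 2]], 2))
```

In this rank-deficient toy the truncation error vanishes at dimension `r`; in the nonlinear problems analyzed in the paper, the eigenvalue decay rate instead controls the dimension truncation error, and the sample size `N` controls the Monte Carlo error in estimating the Gram matrix.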

