Journal
SIAM JOURNAL ON SCIENTIFIC COMPUTING
Volume 36, Issue 2, Pages A588-A608
Publisher
SIAM PUBLICATIONS
DOI: 10.1137/130920587
Keywords
preconditioning; sampling; Gaussian processes; covariance matrix; matrix square root; sparse approximate inverse; Krylov subspace methods; Lanczos process
Funding
- NSF [DMS-1216366]
- Directorate for Computer & Information Science & Engineering, Office of Advanced Cyberinfrastructure (OAC) [1306573], Funding Source: National Science Foundation
- Directorate for Mathematical & Physical Sciences, Division of Mathematical Sciences [1216366], Funding Source: National Science Foundation
A common problem in statistics is to compute sample vectors from a multivariate Gaussian distribution with zero mean and a given covariance matrix A. A canonical approach to the problem is to compute vectors of the form y = Sz, where S is the Cholesky factor or square root of A, and z is a standard normal vector. When A is large, such an approach becomes computationally expensive. This paper considers preconditioned Krylov subspace methods to perform this task. The Lanczos process provides a means to approximate A^{1/2}z for any vector z from an m-dimensional Krylov subspace. The main contribution of this paper is to show how to enhance the convergence of the process via preconditioning. Both incomplete Cholesky preconditioners and approximate inverse preconditioners are discussed. It is argued that the latter class of preconditioners has an advantage in the context of sampling. Numerical tests, performed with stationary covariance matrices used to model Gaussian processes, illustrate the dramatic improvement in computation time that can result from preconditioning.
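The unpreconditioned core of the method described in the abstract can be sketched as follows: run the Lanczos process on A started from z to build an m-dimensional Krylov subspace, then approximate A^{1/2}z as ||z|| V_m T_m^{1/2} e_1, where V_m holds the Lanczos vectors and T_m is the tridiagonal projection of A. This is a minimal NumPy illustration of that idea, not the paper's implementation; the function name, the full reorthogonalization, and the dense test matrix are choices made here for clarity.

```python
import numpy as np

def lanczos_sqrt(A, z, m):
    """Approximate A^{1/2} z from an m-dimensional Krylov subspace
    via the Lanczos process (illustrative sketch, no preconditioning)."""
    n = len(z)
    beta0 = np.linalg.norm(z)
    V = np.zeros((n, m))          # Lanczos basis vectors
    alpha = np.zeros(m)           # diagonal of T_m
    beta = np.zeros(m - 1)        # off-diagonal of T_m
    V[:, 0] = z / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        # full reorthogonalization for numerical stability (a pragmatic
        # choice for this small example, not part of the basic recurrence)
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # T_m is symmetric tridiagonal; apply sqrt via its eigendecomposition
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, Q = np.linalg.eigh(T)
    # y_m = ||z|| * V_m * T_m^{1/2} * e_1
    return beta0 * V @ (Q @ (np.sqrt(evals) * Q[0, :]))

# Compare against the exact square root on a small SPD "covariance" matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)     # well-conditioned SPD test matrix
z = rng.standard_normal(50)
y = lanczos_sqrt(A, z, 20)
w, U = np.linalg.eigh(A)
y_exact = U @ (np.sqrt(w) * (U.T @ z))
print(np.linalg.norm(y - y_exact) / np.linalg.norm(y_exact))
```

Because this test matrix is well conditioned, a small m already gives a tiny relative error; the paper's point is that for large ill-conditioned covariance matrices, preconditioning is what makes m small enough to be practical.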