Article

Improving computational efficiency in large linear inverse problems: an example from carbon dioxide flux estimation

Journal

Geoscientific Model Development
Volume 6, Issue 3, Pages 583-590

Publisher

Copernicus Gesellschaft mbH
DOI: 10.5194/gmd-6-583-2013


Funding

  1. National Aeronautics and Space Administration [NNX12AB90G]
  2. National Science Foundation, Directorate for Computer & Information Science & Engineering [1047871]
  3. National Science Foundation, Office of Advanced Cyberinfrastructure (OAC) [1342076]


Addressing a variety of questions within Earth science disciplines entails the inference of the spatiotemporal distribution of parameters of interest based on observations of related quantities. Such estimation problems often represent inverse problems that are formulated as linear optimization problems. Computational limitations arise when the number of observations and/or the size of the discretized state space becomes large, especially if the inverse problem is formulated in a probabilistic framework and therefore aims to assess the uncertainty associated with the estimates. This work proposes two approaches to lower the computational costs and memory requirements for large linear space-time inverse problems, taking the Bayesian approach for estimating carbon dioxide (CO2) emissions and uptake (a.k.a. fluxes) as a prototypical example. The first algorithm can be used to efficiently multiply two matrices, as long as one can be expressed as a Kronecker product of two smaller matrices, a condition that is typical when multiplying a sensitivity matrix by a covariance matrix in the solution of inverse problems. The second algorithm can be used to compute a posteriori uncertainties directly at aggregated spatiotemporal scales, which are the scales of most interest in many inverse problems. Both algorithms have significantly lower memory requirements and computational complexity relative to direct computation of the same quantities (O(n^2.5) vs. O(n^3)). For an examined benchmark problem, the two algorithms yielded substantial savings in floating point operations relative to direct computation. Sample computer codes are provided for assessing the computational and memory efficiency of the proposed algorithms for matrices of different dimensions.
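As an illustration of the ideas summarized above, the sketch below implements the standard Kronecker-product identity that the first algorithm exploits, together with the standard Bayesian posterior-covariance identity for the variance of an aggregated quantity. This is a minimal NumPy sketch, not the sample code distributed with the paper; the function names, dimension conventions, and the specific posterior formula are assumptions made for illustration.

```python
import numpy as np

def mult_by_kron(H, D, E):
    """Return H @ np.kron(D, E) without forming the Kronecker product.

    H : (n, q*m) sensitivity matrix
    D : (q, q) factor (e.g. temporal covariance)
    E : (m, m) factor (e.g. spatial covariance)

    Uses the identity (D (x) E)^T h = vec(E^T X D), where X is the
    (m, q) matrix satisfying vec(X) = h (column-major stacking), so
    each row of H costs two small matrix products instead of one
    product with the full (q*m, q*m) covariance matrix.
    """
    q, m = D.shape[0], E.shape[0]
    out = np.empty((H.shape[0], q * m))
    for i, h in enumerate(H):
        X = h.reshape(m, q, order="F")                # "unvec" row i of H
        out[i] = (E.T @ X @ D).reshape(-1, order="F")  # re-vectorize result
    return out

def aggregated_posterior_variance(w, H, D, E, R):
    """Posterior variance of the aggregated quantity w^T s, without
    forming the full posterior covariance, via the standard identity
    w^T V w = w^T Q w - (H Q w)^T (H Q H^T + R)^{-1} (H Q w),
    where Q = D (x) E is the (symmetric) prior covariance and R the
    model-data mismatch covariance.
    """
    Qw = mult_by_kron(w[None, :], D, E).ravel()  # Q w (Q symmetric)
    HQ = mult_by_kron(H, D, E)                   # H Q, reusing the trick above
    HQw = H @ Qw
    return w @ Qw - HQw @ np.linalg.solve(HQ @ H.T + R, HQw)

# Sanity check on a small random problem
rng = np.random.default_rng(0)
D = rng.standard_normal((3, 3)); D = D @ D.T     # symmetric PSD factors
E = rng.standard_normal((4, 4)); E = E @ E.T
H = rng.standard_normal((5, 12))
R = np.eye(5)
w = np.ones(12)                                  # aggregate all state elements

Q = np.kron(D, E)
assert np.allclose(mult_by_kron(H, D, E), H @ Q)
V = Q - Q @ H.T @ np.linalg.solve(H @ Q @ H.T + R, H @ Q)
assert np.isclose(aggregated_posterior_variance(w, H, D, E, R), w @ V @ w)
```

The loop in mult_by_kron never materializes the (q*m, q*m) matrix, which is where the memory and floating-point savings described in the abstract come from; the paper's own supplementary code should be consulted for the authors' actual implementation.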

Authors

Vineet Yadav, Anna M. Michalak
