4.2 Article

Asymptotic Analysis of Multilevel Best Linear Unbiased Estimators

Journal

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/20M1321607

Keywords

uncertainty quantification; partial differential equation; Richardson extrapolation; Monte Carlo; multilevel Monte Carlo

Funding

  1. Deutsche Forschungsgemeinschaft (DFG) through the International Research Training Group [IGDK 1754, 188264188/GRK1754]

Abstract

The study focuses on the computational complexity and variance of best linear unbiased estimators for PDE-based models, with a particular interest in sample allocation optimal best linear unbiased estimators (SAOBs). Results show that SAOBs have optimal complexity within a certain class of linear unbiased estimators, and their complexity is not higher than that of multilevel Monte Carlo methods.
We study the computational complexity and variance of multilevel best linear unbiased estimators introduced in [D. Schaden and E. Ullmann, SIAM/ASA J. Uncertain. Quantif., 8 (2020), pp. 601-635]. We specialize the results in this work to PDE-based models that are parameterized by a discretization quantity, e.g., the finite element mesh size. In particular, we investigate the asymptotic complexity of the so-called sample allocation optimal best linear unbiased estimators (SAOBs). These estimators have the smallest variance given a fixed computational budget. However, SAOBs are defined implicitly by solving an optimization problem and are difficult to analyze. Alternatively, we study a class of auxiliary estimators based on the Richardson extrapolation of the parametric model family. This allows us to provide an upper bound for the complexity of the SAOBs, showing that their complexity is optimal within a certain class of linear unbiased estimators. Moreover, the complexity of the SAOBs is not larger than the complexity of multilevel Monte Carlo. The theoretical results are illustrated by numerical experiments with an elliptic PDE.
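To make the comparison with multilevel Monte Carlo concrete, the following is a minimal two-level Monte Carlo sketch in Python. It is not the paper's SAOB; the coarse_model and fine_model functions and the sample sizes are hypothetical stand-ins for a PDE-based model evaluated with a coarse and a fine mesh size.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a parametric PDE model evaluated on a
# coarse and a fine finite element mesh; both take the same random input.
def coarse_model(xi):
    return np.sin(xi) + 0.1 * xi**2

def fine_model(xi):
    return np.sin(xi) + 0.01 * xi**2

n0, n1 = 10_000, 500              # many cheap coarse samples, few expensive fine ones
xi0 = rng.standard_normal(n0)     # inputs for the coarse level
xi1 = rng.standard_normal(n1)     # independent inputs for the correction level

# Level 0: plain Monte Carlo estimate of the coarse-model mean.
level0 = coarse_model(xi0).mean()

# Level 1: Monte Carlo estimate of the fine-minus-coarse correction;
# evaluating both models on the same inputs keeps this term's variance small.
level1 = (fine_model(xi1) - coarse_model(xi1)).mean()

# Two-level estimate of the fine-model mean: unbiased, and one particular
# linear unbiased estimator within the class the SAOBs optimize over.
mlmc_estimate = level0 + level1
print(mlmc_estimate)

In an actual multilevel estimator the number of levels and the per-level sample sizes would be chosen from cost and variance estimates; the SAOBs analyzed in the paper instead optimize the sample allocation over a richer class of linear unbiased estimators, which is why their variance for a fixed budget is at least as small.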
