Article

Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems

Journal

JOURNAL OF COMPUTATIONAL PHYSICS
Volume 486

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jcp.2023.112104

Keywords

Uncertainty quantification; Partial differential equations; Machine learning; Neural networks; Operator learning; Error analysis


This study explores the use of neural operators to accelerate infinite-dimensional Bayesian inverse problems with models governed by nonlinear parametric partial differential equations. While neural operators can drastically reduce computational cost, controlling the error of the resulting inverse-problem solutions through training alone can be challenging and unreliable. To address this, a strategy is proposed that corrects the prediction of a trained neural operator by solving a linear variational problem based on the PDE residual, achieving a quadratic reduction of the approximation error without sacrificing the computational speedup.
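The quadratic error reduction quoted above is what one expects from a single Newton-type step on the residual equation, with the trained operator's prediction as the linearization point. Below is a minimal sketch of that idea on a 1D nonlinear reaction-diffusion model (finite differences, homogeneous Dirichlet conditions), echoing the paper's first numerical example. The grid size, coefficient values, and the perturbed-solution stand-in for a trained neural operator are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Model problem: -u'' + m * u**3 = f on (0, 1), u(0) = u(1) = 0,
# discretized with second-order finite differences.
n = 199                                   # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def residual(u, m, f):
    """PDE residual r(u) = -u'' + m u^3 - f at the grid points."""
    return A @ u + m * u**3 - f

def jacobian(u, m):
    """Linearization of the residual: A + diag(3 m u^2)."""
    return A + np.diag(3.0 * m * u**2)

def solve_pde(m, f, tol=1e-10, max_iter=50):
    """Reference Newton solve of the full nonlinear problem."""
    u = np.zeros_like(f)
    for _ in range(max_iter):
        r = residual(u, m, f)
        if np.linalg.norm(r) < tol:
            break
        u = u - np.linalg.solve(jacobian(u, m), r)
    return u

def correct(u_pred, m, f):
    """Residual-based correction: one linear solve with the PDE
    residual evaluated at the surrogate's prediction."""
    return u_pred - np.linalg.solve(jacobian(u_pred, m), residual(u_pred, m, f))

m_coef = 50.0
f = np.sin(np.pi * x)
u_true = solve_pde(m_coef, f)

# Hypothetical stand-in for a trained neural operator's output:
# the true solution plus a smooth O(1e-2) approximation error.
u_pred = u_true + 1e-2 * np.sin(3.0 * np.pi * x)

err0 = np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
err1 = np.linalg.norm(correct(u_pred, m_coef, f) - u_true) / np.linalg.norm(u_true)
print(f"relative error before correction: {err0:.1e}")
print(f"relative error after correction:  {err1:.1e}")
```

The printed errors show the roughly squared reduction: the correction costs a single linear solve, far cheaper than the full Newton iteration needed to solve the nonlinear PDE from scratch.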
We explore using neural operators, or neural network representations of nonlinear maps between function spaces, to accelerate infinite-dimensional Bayesian inverse problems (BIPs) with models governed by nonlinear parametric partial differential equations (PDEs). Neural operators have gained significant attention in recent years for their ability to approximate the parameter-to-solution maps defined by PDEs using as training data solutions of PDEs at a limited number of parameter samples. The computational cost of BIPs can be drastically reduced if the large number of PDE solves required for posterior characterization are replaced with evaluations of trained neural operators. However, reducing error in the resulting BIP solutions via reducing the approximation error of the neural operators in training can be challenging and unreliable. We provide an a priori error bound result that implies certain BIPs can be ill-conditioned to the approximation error of neural operators, thus leading to inaccessible accuracy requirements in training. To reliably deploy neural operators in BIPs, we consider a strategy for enhancing the performance of neural operators: correcting the prediction of a trained neural operator by solving a linear variational problem based on the PDE residual. We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error, all while retaining substantial computational speedups of posterior sampling when models are governed by highly nonlinear PDEs. The strategy is applied to two numerical examples of BIPs based on a nonlinear reaction-diffusion problem and deformation of hyperelastic materials. We demonstrate that posterior representations of the two BIPs produced using trained neural operators are greatly and consistently enhanced by error correction. © 2023 Elsevier Inc. All rights reserved.
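For the posterior-characterization step the abstract refers to, one common pattern in infinite-dimensional BIPs (an illustrative assumption here, not a method stated in the abstract) is to plug the corrected surrogate into a prior-preserving MCMC scheme such as preconditioned Crank-Nicolson, so each step costs one surrogate evaluation plus one linear residual solve rather than a full nonlinear PDE solve. A minimal self-contained sketch, with a toy `forward` map standing in for the corrected surrogate and a standard-normal (discretized Gaussian) prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the corrected neural-operator surrogate:
# a smooth nonlinear parameter-to-observable map on a 20-dim parameter.
G = rng.standard_normal((10, 20)) / np.sqrt(20)
def forward(m):
    return np.tanh(G @ m)

sigma = 0.05
m_star = rng.standard_normal(20)                  # synthetic "true" parameter
y_obs = forward(m_star) + sigma * rng.standard_normal(10)

def log_likelihood(m):
    misfit = forward(m) - y_obs
    return -0.5 * (misfit @ misfit) / sigma**2

def pcn(m0, n_steps=20000, beta=0.2):
    """Preconditioned Crank-Nicolson MCMC under a standard-normal prior.
    The proposal preserves the prior exactly, so the accept/reject step
    involves only the (surrogate) likelihood -- one cheap evaluation per
    step instead of a nonlinear PDE solve."""
    m, ll = m0, log_likelihood(m0)
    samples, accepted = [], 0
    for _ in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * m + beta * rng.standard_normal(m.shape)
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # accept on likelihood ratio only
            m, ll = prop, ll_prop
            accepted += 1
        samples.append(m.copy())
    return np.array(samples), accepted / n_steps

samples, acc = pcn(np.zeros(20))
print(f"acceptance rate: {acc:.2f}")
print(f"posterior-mean misfit norm: "
      f"{np.linalg.norm(forward(samples.mean(axis=0)) - y_obs):.3f}")
```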

