Article

Derivative-Informed Neural Operator: An efficient framework for high-dimensional parametric derivative learning

Journal

JOURNAL OF COMPUTATIONAL PHYSICS
Volume 496

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jcp.2023.112555

Keywords

Derivative learning; High-dimensional Jacobians; Neural operators; Parametric surrogates; Parametrized PDEs; Derivative-informed dimension reduction

Abstract

Derivative-informed neural operators (DINOs) are a class of neural networks that approximate operators and their derivatives with high accuracy. They can power derivative-based algorithms in various fields, such as Bayesian inverse problems and optimization under parameter uncertainty. By compressing derivative information and exploiting it efficiently during neural operator training, DINOs significantly reduce the costs of both data generation and training.
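
In symbols, the kind of training objective this summary points to can be sketched as follows (our notation, a sketch only; as the abstract below explains, the paper imposes this penalty on compressed, reduced-dimensional derivative information rather than on the full Jacobian):

```latex
% A generic derivative-informed objective (notation ours): the surrogate
% f_theta is fit both to the operator F and to its parametric Jacobian.
\[
  \min_\theta \;
  \mathbb{E}_{m \sim \nu}\!\left[
    \left\| f_\theta(m) - F(m) \right\|^2
    + \lambda \left\| \nabla_m f_\theta(m) - \nabla_m F(m) \right\|_F^2
  \right]
\]
```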
We propose derivative-informed neural operators (DINOs), a general family of neural networks for approximating operators as infinite-dimensional mappings from input function spaces to output function spaces or quantities of interest. After discretization, both the inputs and outputs are high-dimensional. We aim to approximate not only the operators with improved accuracy but also their derivatives (Jacobians) with respect to the input function-valued parameter, to empower derivative-based algorithms in many applications, e.g., Bayesian inverse problems, optimization under parameter uncertainty, and optimal experimental design. The major difficulties are the computational cost of generating derivative training data and the high dimensionality of the problem, which leads to large training costs. To address these challenges, we exploit the intrinsic low dimensionality of the derivatives and develop algorithms for compressing derivative information and efficiently imposing it in neural operator training, yielding derivative-informed neural operators. We demonstrate that these advances can significantly reduce the costs of both data generation and training for large classes of problems (e.g., nonlinear steady-state parametric PDE maps), making the costs marginal or comparable to the costs without using derivatives, and in particular independent of the discretization dimension of the input and output functions. Moreover, we show that the proposed DINO achieves significantly higher accuracy than neural operators trained without derivative information, for both function approximation and derivative approximation (e.g., the Gauss-Newton Hessian), especially when the training data are limited.
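
To make the recipe concrete, below is a minimal, self-contained JAX sketch of the two ingredients the abstract describes: compressing Jacobians onto derivative-informed reduced bases, and penalizing the reduced-Jacobian misfit during training. The synthetic forward map F, the basis construction, the ranks, and the network sizes are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Sketch of derivative-informed neural operator (DINO) training in JAX.
# Everything below the imports is an illustrative stand-in, not the
# authors' code: F is a toy forward map, and the reduced bases are one
# simple choice (eigenvectors of E[J^T J] for inputs, POD for outputs).
import jax
import jax.numpy as jnp

# Toy stand-in for a discretized parametric PDE solution map F: R^dM -> R^dU.
dM, dU = 200, 150
A = jax.random.normal(jax.random.PRNGKey(0), (dU, dM)) / jnp.sqrt(dM)
F = lambda m: jnp.tanh(A @ m)      # placeholder nonlinear map
jacF = jax.jacrev(F)               # full Jacobian, shape (dU, dM)

# Training data: parameter samples, outputs, and full Jacobians.
ms = jax.random.normal(jax.random.PRNGKey(1), (64, dM))
us = jax.vmap(F)(ms)
Js = jax.vmap(jacF)(ms)            # shape (64, dU, dM)

# Derivative-informed dimension reduction: dominant eigenvectors of the
# sample average of J^T J for the input basis, POD for the output basis.
rM, rU = 20, 20
_, V = jnp.linalg.eigh(jnp.mean(jnp.einsum('nij,nik->njk', Js, Js), axis=0))
Psi = V[:, -rM:]                   # input reduced basis, (dM, rM)
_, _, Vt = jnp.linalg.svd(us, full_matrices=False)
Phi = Vt[:rU].T                    # output reduced basis, (dU, rU)

# Reduced training data: only (rU x rM) reduced Jacobians are stored,
# independent of the discretization dimensions dM and dU.
xi = ms @ Psi                      # reduced inputs, (64, rM)
eta = us @ Phi                     # reduced outputs, (64, rU)
Jr = jnp.einsum('ui,nuv,vj->nij', Phi, Js, Psi)   # (64, rU, rM)

def init_mlp(key, sizes):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (a, b)) * jnp.sqrt(2.0 / a), jnp.zeros(b))
            for k, a, b in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

params = init_mlp(jax.random.PRNGKey(2), [rM, 128, 128, rU])

def dino_loss(params, xi, eta, Jr, lam=1.0):
    # L2 misfit on reduced outputs plus Frobenius misfit on reduced Jacobians.
    pred = jax.vmap(lambda x: mlp(params, x))(xi)
    predJ = jax.vmap(jax.jacrev(lambda x: mlp(params, x)))(xi)   # (64, rU, rM)
    return (jnp.mean(jnp.sum((pred - eta) ** 2, axis=-1))
            + lam * jnp.mean(jnp.sum((predJ - Jr) ** 2, axis=(-2, -1))))

# One plain gradient-descent step on the derivative-informed objective.
loss, grads = jax.value_and_grad(dino_loss)(params, xi, eta, Jr)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
print(loss)
```

Because the Jacobian penalty acts on rU x rM reduced coordinates rather than on the full dU x dM Jacobian, its per-sample storage and training cost do not grow with the discretization, which is the mechanism behind the abstract's claim that the added cost of using derivatives is marginal and independent of the discretization dimension.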

Authors

Thomas O'Leary-Roseberry, Peng Chen, Umberto Villa, Omar Ghattas

