Article

Learning high-dimensional parametric maps via reduced basis adaptive residual networks

Publisher

ELSEVIER SCIENCE SA
DOI: 10.1016/j.cma.2022.115730

Keywords

Deep learning; Neural networks; Parametrized PDEs; Control flows; Residual networks; Adaptive surrogate construction

Funding

  1. U.S. Department of Energy under ARPA-E award [DE-AR0001208]
  2. ASCR awards [DE-SC0019303, DE-SC0021239]
  3. U.S. Department of Defense under MURI award [FA9550-21-1-0084]

Abstract

We propose a scalable framework for learning high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs. When only limited training data are available, a compact parametrization is beneficial because it ameliorates the ill-posedness of the neural network training problem. By linearly restricting high-dimensional maps to informed reduced bases of the inputs, one can compress them constructively, with rigorous error estimates that guide the choice of basis rank. The scalable learning task is then to approximate the nonlinear mapping between the reduced bases with a neural network. Unlike the reduced basis construction, however, neural network constructions are not guaranteed to reduce errors when representation power is added, which makes good practical performance difficult to achieve. Inspired by recent approximation theory that connects ResNets to sequential minimizing flows, we present an adaptive ResNet construction algorithm. The algorithm enriches the neural network approximation depth-wise, first training a shallow network and then adapting, in a manner that achieves good practical performance. We prove universal approximation of the associated neural network class for L^2_ν functions on compact sets. The overall framework provides constructive means to detect appropriate breadth and depth, and the related compact parametrizations of neural networks, significantly reducing the need for architectural hyperparameter tuning. Numerical experiments on parametric PDE problems and on a 3D CFD wing design optimization parametric map demonstrate that the proposed methodology achieves remarkably high accuracy from limited training data and outperforms the other neural network strategies we compared against.
© 2022 Elsevier B.V. All rights reserved.
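
The abstract combines two algorithmic ingredients: a linear restriction of the high-dimensional inputs and outputs to reduced bases, and depth-wise enrichment of a ResNet trained on the reduced coordinates. As a loose illustration only, and not the authors' implementation, the following PyTorch sketch substitutes plain truncated SVD for the paper's informed reduced bases (which come with rigorous error estimates) and uses zero-initialized residual blocks to mimic depth-wise enrichment; all data, dimensions, and names are invented for this sketch.

```python
# Hypothetical sketch (not the authors' code): reduced-basis restriction via
# truncated SVD, plus depth-wise adaptive ResNet enrichment.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for a high-dimensional parametric map (invented data).
d_in, d_out, n_train = 512, 256, 200
X = torch.randn(n_train, d_in)
W_true = torch.randn(d_in, d_out) / d_in**0.5
Y = torch.tanh(X @ W_true)

# Reduced bases via truncated SVD; the paper instead uses informed bases
# whose error estimates guide the choice of the ranks r_in, r_out.
r_in, r_out = 16, 16
U_in = torch.linalg.svd(X, full_matrices=False).Vh[:r_in].T    # (d_in, r_in)
U_out = torch.linalg.svd(Y, full_matrices=False).Vh[:r_out].T  # (d_out, r_out)
Xr, Yr = X @ U_in, Y @ U_out                                   # reduced coords

class ResBlock(nn.Module):
    """Width-preserving residual block; zero init makes it the identity."""
    def __init__(self, width):
        super().__init__()
        self.lin = nn.Linear(width, width)
        nn.init.zeros_(self.lin.weight)
        nn.init.zeros_(self.lin.bias)
    def forward(self, z):
        return z + torch.tanh(self.lin(z))

class AdaptiveResNet(nn.Module):
    """ResNet between reduced coordinates; blocks are appended depth-wise."""
    def __init__(self, r_in, r_out, width=32):
        super().__init__()
        self.inp = nn.Linear(r_in, width)
        self.blocks = nn.ModuleList([ResBlock(width)])
        self.out = nn.Linear(width, r_out)
    def add_block(self):
        self.blocks.append(ResBlock(self.out.in_features))
    def forward(self, z):
        z = self.inp(z)
        for block in self.blocks:
            z = block(z)
        return self.out(z)

def train(model, steps=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(Xr), Yr)
        loss.backward()
        opt.step()
    return loss.item()

model = AdaptiveResNet(r_in, r_out)
print("depth 1 loss:", train(model))
for k in range(3):            # depth-wise enrichment: append, then retrain
    model.add_block()         # identity at insertion: loss cannot jump up
    print(f"depth {k + 2} loss:", train(model))

# Full-space prediction: lift reduced outputs back with the output basis.
Y_pred = model(Xr) @ U_out.T
```

Zero-initializing each appended block makes it the identity map at insertion, so enrichment cannot degrade the previously trained shallower network before further training refines it; the paper's actual adaptivity criteria and basis constructions are more involved than this sketch.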
