Article

Improved Architectures and Training Algorithms for Deep Operator Networks

Journal

JOURNAL OF SCIENTIFIC COMPUTING
Volume 92, Issue 2, Pages -

Publisher

SPRINGER/PLENUM PUBLISHERS
DOI: 10.1007/s10915-022-01881-0

Keywords

Deep learning; Partial differential equations; Computational physics; Physics-informed machine learning

Funding

  1. DOE [DE-SC0019116]
  2. AFOSR [FA9550-20-1-0060]
  3. DOE-ARPA grant [DE-AR0001201]

Abstract

This paper analyzes the training dynamics of deep operator networks (DeepONets) and reveals a bias favoring approximation of functions with larger magnitudes. To correct this bias, an adaptive re-weighting method is proposed, which effectively balances the magnitude of back-propagated gradients during training. A novel network architecture that is more resilient to vanishing gradient problems is also proposed. These developments provide new insights into the training of DeepONets and significantly improve their predictive accuracy, particularly in the challenging setting of learning PDE solution operators.
Operator learning techniques have recently emerged as a powerful tool for learning maps between infinite-dimensional Banach spaces. Trained under appropriate constraints, they can also be effective in learning the solution operator of partial differential equations (PDEs) in an entirely self-supervised manner. In this work we analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel theory, and reveal a bias that favors the approximation of functions with larger magnitudes. To correct this bias we propose to adaptively re-weight the importance of each training example, and demonstrate how this procedure can effectively balance the magnitude of back-propagated gradients during training via gradient descent. We also propose a novel network architecture that is more resilient to vanishing gradient pathologies. Taken together, our developments provide new insights into the training of DeepONets and consistently improve their predictive accuracy by a factor of 10-50x, demonstrated in the challenging setting of learning PDE solution operators in the absence of paired input-output observations. All code and data accompanying this manuscript will be made publicly available at https://github.com/PredictiveIntelligenceLab/ImprovedDeepONets.
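
The abstract describes two main ingredients: an adaptive re-weighting of training examples intended to balance the magnitude of back-propagated gradients, and a modified DeepONet architecture. As a rough illustration of the first idea only, the minimal JAX sketch below evaluates a weighted loss and its gradients for a vanilla DeepONet with one weight per training example. The network sizes, the inverse-magnitude weighting rule, and the dummy data are illustrative assumptions for this sketch, not the paper's NTK-derived scheme or reference implementation (see the linked repository for the latter).

import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # One (weights, bias) pair per layer, with a simple He-style initialization.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def deeponet(params, u_sensors, y):
    # DeepONet prediction: inner product of branch (input-function) and trunk (query-point) features.
    branch_params, trunk_params = params
    b = mlp(branch_params, u_sensors)
    t = mlp(trunk_params, y)
    return jnp.sum(b * t)

def weighted_loss(params, u_batch, y_batch, s_batch, lam):
    # Mean-squared error with one weight per training example.
    pred = jax.vmap(lambda u, y: deeponet(params, u, y))(u_batch, y_batch)
    return jnp.mean(lam * (pred - s_batch) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3, k4, k5 = jax.random.split(key, 5)
params = (init_mlp(k1, [100, 64, 64, 32]),  # branch net: 100 sensor values -> 32 features
          init_mlp(k2, [1, 64, 64, 32]))    # trunk net: 1-d query coordinate -> 32 features

# Dummy data: 8 input functions sampled at 100 sensors, one query point and target value each.
u_batch = jax.random.normal(k3, (8, 100))
y_batch = jax.random.uniform(k4, (8, 1))
s_batch = jax.random.normal(k5, (8,))

# Illustrative adaptive weights (an assumption, not the paper's NTK-derived rule):
# down-weight large-magnitude targets so back-propagated gradients have comparable scale.
lam = 1.0 / (jnp.abs(s_batch) + 1e-3)
lam = lam / jnp.mean(lam)  # normalize so the weights average to one

loss, grads = jax.value_and_grad(weighted_loss)(params, u_batch, y_batch, s_batch, lam)
print(float(loss))

In a full training loop, weights of this kind would be recomputed periodically and the gradients fed to an optimizer; the paper additionally replaces the plain branch/trunk networks with a modified architecture that mitigates vanishing gradients.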

