Article

DPM: A deep learning PDE augmentation method with application to large-eddy simulation

Journal

JOURNAL OF COMPUTATIONAL PHYSICS
Volume 423, Issue -, Pages -

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jcp.2020.109811

Keywords

Deep learning; Scientific machine learning; Large-eddy simulation; Sub-grid-scale modeling; Turbulence simulation

Funding

  1. Department of Energy, National Nuclear Security Administration [DE-NA0002374]
  2. National Science Foundation [OCI-0725070, ACI-1238993]
  3. State of Illinois

Abstract

A framework is introduced that leverages known physics to reduce overfitting in machine learning for scientific applications. The partial differential equation (PDE) that expresses the physics is augmented with a neural network that uses available data to learn a description of the corresponding unknown or unrepresented physics. Training within this combined system corrects for missing, unknown, or erroneously represented physics, including discretization errors associated with the PDE's numerical solution. For optimization of the network within the PDE, an adjoint PDE is solved to provide high-dimensional gradients, and a stochastic adjoint method (SAM) further accelerates training. The approach is demonstrated for large-eddy simulation (LES) of turbulence. High-fidelity direct numerical simulations (DNS) of decaying isotropic turbulence provide the training data used to learn sub-filter-scale closures for the filtered Navier-Stokes equations. Out-of-sample comparisons show that the deep learning PDE method outperforms widely used models, even for filter sizes so large that those models become qualitatively incorrect. It also significantly outperforms the same neural network when trained a priori on simple data mismatch, without accounting for the full PDE. Measures of discretization errors, which are well known to be consequential in LES, point to the importance of the unified training formulation's design, which corrects for them without modification. For comparable accuracy, simulation runtime is significantly reduced. A relaxation of the typical discrete enforcement of the divergence-free constraint in the solver is also successful, with the DPM instead allowed to approximately enforce incompressibility. Since the training loss function is not restricted to correspond directly to the closure to be learned, training can incorporate diverse data, including experimental data.
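To make the embedded-training idea concrete, the following is a minimal sketch (not the authors' code) of the abstract's core mechanism on a 1D viscous Burgers equation: a small neural-network closure is inserted into the discretized PDE, and its parameters are trained so that the *solved* coarse field matches reference data. Reverse-mode differentiation through the unrolled solver plays the role of the adjoint PDE solve that supplies the gradient. All names and design choices here (mlp_closure, rhs, rollout, grid sizes, the closure's local-feature inputs) are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Coarse periodic grid and explicit Euler time stepping (stable for these values).
n_x = 128
dx = 2.0 * jnp.pi / n_x
dt = 1e-3
n_steps = 200
nu = 5e-3  # resolved viscosity

def mlp_closure(params, u):
    # Pointwise closure built from local features (u, du/dx); the architecture
    # is an illustrative assumption, not the network used in the paper.
    dudx = (jnp.roll(u, -1) - jnp.roll(u, 1)) / (2.0 * dx)
    h = jnp.tanh(jnp.stack([u, dudx], axis=-1) @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).squeeze(-1)

def rhs(u, q):
    # Semi-discrete viscous Burgers right-hand side plus closure field q,
    # using second-order central differences on a periodic grid.
    dudx = (jnp.roll(u, -1) - jnp.roll(u, 1)) / (2.0 * dx)
    d2udx2 = (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2
    return -u * dudx + nu * d2udx2 + q

def rollout(params, u0):
    # Unrolled forward solve with the learned closure embedded in the PDE.
    def body(u, _):
        return u + dt * rhs(u, mlp_closure(params, u)), None
    u_final, _ = jax.lax.scan(body, u0, None, length=n_steps)
    return u_final

def loss(params, u0, u_ref):
    # Embedded (a posteriori) loss: mismatch of the solved field, not of the
    # closure term itself, so solver discretization errors are corrected too.
    return jnp.mean((rollout(params, u0) - u_ref) ** 2)

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
params = {"W1": 0.1 * jax.random.normal(key1, (2, 16)), "b1": jnp.zeros(16),
          "W2": 0.1 * jax.random.normal(key2, (16, 1)), "b2": jnp.zeros(1)}

u0 = jnp.sin(jnp.arange(n_x) * dx)
# Closure-free solve standing in for the filtered DNS data used in the paper.
zero_params = {"W1": jnp.zeros((2, 16)), "b1": jnp.zeros(16),
               "W2": jnp.zeros((16, 1)), "b2": jnp.zeros(1)}
u_ref = rollout(zero_params, u0)

# Reverse-mode differentiation through the unrolled solver: the discrete
# analogue of the adjoint PDE solve that provides the training gradient.
grads = jax.grad(loss)(params, u0, u_ref)
```

By contrast, the a priori training the abstract compares against would regress mlp_closure directly onto a precomputed closure target, never invoking rollout; the paper's finding is that the embedded formulation above generalizes substantially better. The stochastic adjoint method mentioned in the abstract, as described, accelerates this gradient computation; one plausible reading is that it works over sampled subintervals of the trajectory rather than the full unroll shown here.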
