Article

Simple computational strategies for more effective physics-informed neural networks modeling of turbulent natural convection

Journal

JOURNAL OF COMPUTATIONAL PHYSICS
Volume 456

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jcp.2022.111022

Keywords

Deep learning; Machine learning; PINNs; DNS; Turbulence; Convection

Abstract

PINNs show promise as candidates for full PDE modeling of fluid flows, but sustaining turbulence remains a challenge. By minimizing a composite loss function built from labels and PDE residuals, PINN surrogate modeling of turbulent natural convection flows can reduce the need for large training datasets.
The high expressivity and agility of physics-informed neural networks (PINNs) make them promising candidates for full PDE modeling of fluid flows. An important question is whether this new paradigm, exempt from the traditional notion of discretization of the underlying operators that is closely tied to the resolution of the flow scales, is capable of sustaining high levels of turbulence. Another is whether it can serve as a numerical substitute for full DNS data retrieval and storage, DNS remaining to date the standard tool for validation and inter-comparison with experimental results. We explore the use of PINN surrogate modeling for turbulent natural convection flows, relying mainly on DNS temperature data from the fluid bulk and velocity data at some fluid boundaries. The technique rests on the minimization of a composite loss function combining labels and PDE residuals. We first demonstrate the large computational requirements under which PINNs are capable of accurately recovering the hidden flow quantities. We then propose new techniques to mitigate the need for large training datasets. First, we propose a padding technique that better distributes some of the scattered coordinates at which the PDE residuals are minimized, in particular in zones where no labels are available. We show that it acts as a regularization close to the training boundaries and yields a noticeable global accuracy improvement at iso-budget. Second, we propose a relaxation of the incompressibility condition in the loss-function contribution related to the PDE residuals. This relaxation drastically benefits the optimization search and results in much improved convergence. The results obtained for Rayleigh-Bénard flow at Ra = 2×10⁹ are particularly impressive: with training data amounting to only 0.32% of the stored DNS dataset, the predictions of the surrogate over the entire half-billion DNS coordinates yield relative L2 errors ranging between 0.3% and 4% for all flow variables. © 2022 Elsevier Inc. All rights reserved.
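As an illustration of the kind of composite loss the abstract describes, the sketch below assembles a data (label) term on bulk temperature and PDE-residual terms for a two-dimensional Boussinesq natural-convection problem, with the incompressibility residual down-weighted by a relaxation factor. This is a minimal sketch, not the authors' implementation: the network layout, the non-dimensionalization, the loss weighting, and the relaxation factor `beta_div` are hypothetical choices made purely for illustration.

```python
# Minimal sketch (NOT the authors' code) of a composite PINN loss:
# data term on temperature labels + PDE residuals at collocation points,
# with a relaxed (down-weighted) incompressibility residual.
import torch
import torch.nn as nn


class PINN(nn.Module):
    """Fully connected network mapping (x, y, t) -> (u, v, p, T)."""
    def __init__(self, width: int = 64, depth: int = 4):
        super().__init__()
        layers, in_dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.Tanh()]
            in_dim = width
        layers += [nn.Linear(width, 4)]
        self.net = nn.Sequential(*layers)

    def forward(self, xyt: torch.Tensor) -> torch.Tensor:
        return self.net(xyt)


def composite_loss(model, xyt_data, labels_T, xyt_colloc,
                   Pr=0.71, Ra=2e9, beta_div=0.1):
    """Label loss on bulk temperature + Boussinesq PDE residuals.
    `beta_div` < 1 relaxes the incompressibility constraint (illustrative value)."""
    # --- data (label) term ---
    T_pred = model(xyt_data)[:, 3:4]
    loss_data = torch.mean((T_pred - labels_T) ** 2)

    # --- PDE residuals at scattered collocation points ---
    xyt = xyt_colloc.clone().requires_grad_(True)
    u, v, p, T = model(xyt).split(1, dim=1)

    def grad(f):
        return torch.autograd.grad(f, xyt, torch.ones_like(f), create_graph=True)[0]

    gu, gv, gT, gp = grad(u), grad(v), grad(T), grad(p)
    u_x, u_y, u_t = gu[:, 0:1], gu[:, 1:2], gu[:, 2:3]
    v_x, v_y, v_t = gv[:, 0:1], gv[:, 1:2], gv[:, 2:3]
    T_x, T_y, T_t = gT[:, 0:1], gT[:, 1:2], gT[:, 2:3]
    u_xx, u_yy = grad(u_x)[:, 0:1], grad(u_y)[:, 1:2]
    v_xx, v_yy = grad(v_x)[:, 0:1], grad(v_y)[:, 1:2]
    T_xx, T_yy = grad(T_x)[:, 0:1], grad(T_y)[:, 1:2]

    nu = (Pr / Ra) ** 0.5            # non-dimensional viscosity (free-fall scaling)
    kappa = 1.0 / (Pr * Ra) ** 0.5   # non-dimensional thermal diffusivity

    r_u = u_t + u * u_x + v * u_y + gp[:, 0:1] - nu * (u_xx + u_yy)
    r_v = v_t + u * v_x + v * v_y + gp[:, 1:2] - nu * (v_xx + v_yy) - T  # buoyancy
    r_T = T_t + u * T_x + v * T_y - kappa * (T_xx + T_yy)
    r_div = u_x + v_y  # incompressibility residual, relaxed below

    loss_pde = (r_u ** 2 + r_v ** 2 + r_T ** 2).mean() + beta_div * (r_div ** 2).mean()
    return loss_data + loss_pde
```

The padding technique mentioned in the abstract would enter through how `xyt_colloc` is sampled (e.g., adding residual points in label-free zones near the boundaries of the training domain); the sketch leaves that sampling to the caller.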

