Article

Recurrent Neural Networks (RNNs) with dimensionality reduction and break down in computational mechanics; application to multi-scale localization step

Journal

Computer Methods in Applied Mechanics and Engineering
Publisher

ELSEVIER SCIENCE SA
DOI: 10.1016/j.cma.2021.114476

Keywords

Recurrent neural networks; Multi-scale; Dimensionality reduction; Localization step; History-dependence; High dimensionality

Funding

  1. European Union [862015]


Abstract

Artificial Neural Networks (NNWs) are appealing surrogates for high-dimensional, non-linear, history-dependent problems in computational mechanics, since they can drastically reduce the computational time. This feature has recently been exploited in the context of multi-scale simulations, in which NNWs serve as surrogate models of micro-scale finite element resolutions. Nevertheless, the literature has mainly considered the macro-stress-macro-strain response of the meso-scale boundary value problem, so the micro-structure information could not be recovered in a so-called localization step. In this work, we develop Recurrent Neural Networks (RNNs) as surrogates of the RVE response that can also recover the evolution of the local micro-structure state variables under complex loading scenarios. The main difficulty is the high dimensionality of the RNN output, which consists of the internal state variable distribution in the micro-structure. We thus propose and compare several surrogate models based on a dimensionality reduction: (i) direct RNN modeling with implicit NNW dimensionality reduction, (ii) RNN with PCA dimensionality reduction, and (iii) RNN with PCA dimensionality reduction and dimensionality break down, i.e. the use of several RNNs instead of a single one. Besides, we optimize the sequential training strategy of the latter surrogate for GPU usage in order to speed up the process. Finally, through RNN modeling of the principal-component coefficients, the connection between the physical state variables and the hidden variables of the RNN is revealed and exploited to select the hyper-parameters of the RNN-based surrogate models at the design stage. (C) 2021 Elsevier B.V. All rights reserved.
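The reduced-order pipeline the abstract describes, PCA compression of the state-variable field followed by an RNN acting on the principal-component coefficients (variant (ii)), can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: all array sizes are hypothetical, the snapshot data are synthetic, and an untrained Elman cell stands in for the paper's trained RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for micro-structure state-variable snapshots:
# n_steps loading steps, each holding n_dof local state values
# (hypothetical sizes). We build them with a known low-rank structure
# so that the PCA truncation below is exact.
n_steps, n_dof, n_modes = 50, 200, 5
basis = rng.normal(size=(n_modes, n_dof))
coeffs_true = rng.normal(size=(n_steps, n_modes))
snapshots = coeffs_true @ basis

# PCA dimensionality reduction via SVD of the centred snapshot matrix.
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:n_modes]                    # principal directions (n_modes, n_dof)
coeffs = (snapshots - mean) @ modes.T   # reduced coordinates per loading step

# Reconstruction of the full state-variable field from n_modes components.
recon = coeffs @ modes + mean
err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)

# Minimal Elman RNN cell (untrained, illustrative): in the surrogate, such a
# network would map the macro-strain history to the reduced coefficients.
# The hidden size is the kind of hyper-parameter the paper relates to the
# number of physical state variables.
def rnn_step(h, x, W_h, W_x, W_o):
    h = np.tanh(h @ W_h + x @ W_x)
    return h, h @ W_o

n_strain, n_hidden = 6, 16              # 6 macro-strain components (3D)
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_x = rng.normal(scale=0.1, size=(n_strain, n_hidden))
W_o = rng.normal(scale=0.1, size=(n_hidden, n_modes))

h = np.zeros(n_hidden)
outputs = []
for x in rng.normal(size=(n_steps, n_strain)):   # a macro-strain history
    h, y = rnn_step(h, x, W_h, W_x, W_o)         # predict PC coefficients
    outputs.append(y)
outputs = np.asarray(outputs)
```

Variant (iii) would split the coefficient vector across several such RNNs instead of one, which is what enables the batched, GPU-friendly sequential training the abstract mentions.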

