Article

Recurrent Neural Networks (RNNs) with dimensionality reduction and break down in computational mechanics; application to multi-scale localization step

Publisher

ELSEVIER SCIENCE SA
DOI: 10.1016/j.cma.2021.114476

Keywords

Recurrent neural networks; Multi-scale; Dimensionality reduction; Localization step; History-dependence; High dimensionality

Funding

  1. European Union [862015]


This study develops Recurrent Neural Networks (RNNs) as surrogate models of the RVE response while preserving the evolution of the local micro-structure state variables. Several surrogate models based on dimensionality reduction are proposed and compared, and the training strategy is optimized for GPU usage. Additionally, the connection between the physical state variables and the hidden variables of the RNN is revealed and exploited to select the hyper-parameters of the RNN-based surrogate models at their design stage.
Artificial Neural Networks (NNWs) are appealing functions to substitute high-dimensional and non-linear history-dependent problems in computational mechanics, since they offer the possibility to drastically reduce the computational time. This feature has recently been exploited in the context of multi-scale simulations, in which the NNWs serve as surrogate models of micro-scale finite element resolutions. Nevertheless, in the literature, mainly the macro-stress–macro-strain response of the meso-scale boundary value problem has been considered, and the micro-structure information could not be recovered in a so-called localization step. In this work, we develop Recurrent Neural Networks (RNNs) as surrogates of the RVE response while being able to recover the evolution of the local micro-structure state variables for complex loading scenarios. The main difficulty is the high dimensionality of the RNN output, which consists of the internal state variable distribution in the micro-structure. We thus propose and compare several surrogate models based on a dimensionality reduction: (i) direct RNN modeling with implicit NNW dimensionality reduction, (ii) RNN with PCA dimensionality reduction, and (iii) RNN with PCA dimensionality reduction and dimensionality break down, i.e. the use of several RNNs instead of a single one. Besides, we optimize the sequential training strategy of the latter surrogate for GPU usage in order to speed up the process. Finally, through RNN modeling of the principal component coefficients, the connection between the physical state variables and the hidden variables of the RNN is revealed, and exploited in order to select the hyper-parameters of the RNN-based surrogate models in their design stage. (C) 2021 Elsevier B.V. All rights reserved.
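To make approach (ii) concrete, the following is a minimal numpy sketch of the general idea, not the authors' implementation: the high-dimensional internal-state-variable field is compressed by PCA (via SVD of centered snapshots), and an RNN maps the strain history to the principal component coefficients, from which the full field is reconstructed. All sizes, the synthetic data, and the untrained Elman-style cell are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "snapshots": n_samples state-variable fields of dimension n_dof
n_samples, n_dof, n_modes = 50, 400, 8
snapshots = rng.standard_normal((n_samples, n_dof))

# PCA via SVD of the centered snapshot matrix; keep the first n_modes modes
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:n_modes]                       # (n_modes, n_dof) orthonormal basis

def compress(field):
    """Project a full field onto the PCA modes -> coefficient vector."""
    return modes @ (field - mean)

def reconstruct(coeffs):
    """Map PCA coefficients back to an approximate full field."""
    return mean + coeffs @ modes

# Minimal Elman RNN cell, h_t = tanh(Wx x_t + Wh h_{t-1} + b),
# with random (untrained) weights, purely for shape illustration
n_in, n_hidden = 6, 16                     # e.g. 6 strain components as input
Wx = 0.1 * rng.standard_normal((n_hidden, n_in))
Wh = 0.1 * rng.standard_normal((n_hidden, n_hidden))
Wo = 0.1 * rng.standard_normal((n_modes, n_hidden))
b = np.zeros(n_hidden)

def rnn_predict(strain_history):
    """Forward pass: strain sequence -> predicted PCA coefficients."""
    h = np.zeros(n_hidden)
    for x in strain_history:               # one strain increment per step
        h = np.tanh(Wx @ x + Wh @ h + b)
    return Wo @ h                          # principal component coefficients

coeffs = rnn_predict(rng.standard_normal((10, n_in)))
field = reconstruct(coeffs)                # approximate state-variable field
```

In the paper's variant (iii), several such RNNs would each handle a subset of the principal components instead of a single network predicting all coefficients at once.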

