Article

Deep learning framework for material design space exploration using active transfer learning and data augmentation

Journal

NPJ COMPUTATIONAL MATERIALS
Volume 7, Issue 1, Pages -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41524-021-00609-2

Keywords

-

Funding

  1. National Research Foundation of Korea (NRF) - Ministry of Science and ICT (MSIT) of the Republic of Korea [2019R1A2C4070690, 2016M3D1A1900038]
  2. KAIST [N11190118]
  3. 3M, Inc.
  4. National Research Foundation of Korea [2016M3D1A1900035] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

In this study, a deep neural network-based forward design approach is proposed to efficiently search for superior materials beyond the domain of the initial training set by gradually updating the neural network with active transfer learning and data augmentation methods. This approach compensates for the weak predictive power of neural networks on unseen domains.
Neural network-based generative models have been actively investigated as an inverse design method for finding novel materials in a vast design space. However, the applicability of conventional generative models is limited because they cannot access data outside the range of training sets. Advanced generative models that were devised to overcome the limitation also suffer from the weak predictive power on the unseen domain. In this study, we propose a deep neural network-based forward design approach that enables an efficient search for superior materials far beyond the domain of the initial training set. This approach compensates for the weak predictive power of neural networks on an unseen domain through gradual updates of the neural network with active transfer learning and data augmentation methods. We demonstrate the potential of our framework with a grid composite optimization problem that has an astronomical number of possible design configurations. Results show that our proposed framework can provide excellent designs close to the global optima, even with the addition of a very small dataset corresponding to less than 0.5% of the initial training dataset size.
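The framework described in the abstract iterates three steps: fit a surrogate model on labeled designs, use it to rank a pool of unseen candidates, then label only the top picks and fold them (plus symmetry-augmented copies) back into the training set. The sketch below illustrates that loop on a toy binary-grid problem. It is not the paper's implementation: the `evaluate` objective, the linear least-squares surrogate standing in for the deep neural network, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # tiny 4x4 binary grid; stand-in for the paper's grid composite

def evaluate(grid):
    # Hypothetical cheap objective standing in for an expensive simulation:
    # rewards placing material near the grid center (illustrative only).
    yy, xx = np.mgrid[0:N, 0:N]
    weight = 1.0 / (1.0 + (yy - (N - 1) / 2) ** 2 + (xx - (N - 1) / 2) ** 2)
    return float((grid * weight).sum())

def augment(grids, scores):
    # Data augmentation: rotations/flips of a symmetric grid problem
    # share the same objective value, giving 8 samples per evaluation.
    out_g, out_s = [], []
    for g, s in zip(grids, scores):
        for k in range(4):
            r = np.rot90(g, k)
            out_g += [r, np.fliplr(r)]
            out_s += [s, s]
    return out_g, out_s

def fit(grids, scores):
    # Linear surrogate (least squares) in place of the neural network.
    X = np.array([g.ravel() for g in grids], dtype=float)
    X = np.hstack([X, np.ones((len(X), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, np.array(scores), rcond=None)
    return w

def predict(w, grid):
    x = np.append(grid.ravel().astype(float), 1.0)
    return float(x @ w)

# Initial training set: random designs (the "seen" domain).
train = [rng.integers(0, 2, (N, N)) for _ in range(30)]
labels = [evaluate(g) for g in train]
initial_best = max(labels)
best = initial_best

# Active learning loop: augment, refit, rank a candidate pool,
# label only a handful of top-ranked designs per round.
for _ in range(5):
    g_aug, s_aug = augment(train, labels)
    w = fit(g_aug, s_aug)
    pool = [rng.integers(0, 2, (N, N)) for _ in range(200)]
    ranked = sorted(pool, key=lambda g: predict(w, g), reverse=True)
    for g in ranked[:3]:  # small labeling budget per iteration
        s = evaluate(g)
        train.append(g)
        labels.append(s)
        best = max(best, s)

print(round(best, 3))
```

Each round adds only a few labeled designs, mirroring the paper's point that a very small amount of new data (well under 1% of the initial set) can push the search beyond the initial training domain when the surrogate is gradually retrained.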

