Article

Audacity of huge: overcoming challenges of data scarcity and data quality for machine learning in computational materials discovery

Journal

Current Opinion in Chemical Engineering

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.coche.2021.100778

Keywords

-

Funding

  1. National Science Foundation [CBET-1704266, CBET-1846426]
  2. United States Department of Energy [DE-SC0012702, DE-SC0018096, DE-SC0019112, DE-NA0003965]
  3. DARPA [D18AP00039]
  4. Office of Naval Research [N00014-17-1-2956, N00014-18-1-2434, N00014-20-1-2150]
  5. National Science Foundation Graduate Research Fellowship [1122374]
  6. AAAS Marion Milligan Mason Award
  7. Alfred P. Sloan Fellowship in Chemistry

Abstract

Machine learning (ML)-accelerated discovery requires large amounts of high-fidelity data to reveal predictive structure-property relationships. For many properties of interest in materials discovery, the challenging nature and high cost of data generation have resulted in a data landscape that is both scarcely populated and of dubious quality. Data-driven techniques starting to overcome these limitations include the use of consensus across functionals in density functional theory, the development of new functionals or accelerated electronic structure theories, and the detection of where computationally demanding methods are most necessary. When properties cannot be reliably simulated, large experimental data sets can be used to train ML models. In the absence of manual curation, increasingly sophisticated natural language processing and automated image analysis are making it possible to learn structure-property relationships from the literature. Models trained on these data sets will improve as they incorporate community feedback.
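The consensus-across-functionals idea mentioned in the abstract can be sketched in a few lines. The snippet below is not the authors' implementation: it uses placeholder property values and an arbitrary 1.5 kcal/mol agreement cutoff, and is meant only to show how the spread of predictions across density functionals could flag data points that warrant more expensive methods or exclusion from ML training.

```python
# Minimal sketch (not the authors' method): consensus across hypothetical
# DFT functionals as a cheap data-quality filter before ML training.
import numpy as np

# Hypothetical property values (kcal/mol) for 5 materials, each computed
# with 4 different exchange-correlation functionals.
predictions = {
    "PBE":   np.array([3.1, -10.2, 5.4, 12.8, -1.0]),
    "B3LYP": np.array([2.7, -11.0, 9.8, 13.1, -0.6]),
    "M06-L": np.array([3.4,  -9.7, 1.2, 12.5, -1.4]),
    "SCAN":  np.array([2.9, -10.5, 7.6, 12.9, -0.9]),
}

values = np.stack(list(predictions.values()))  # shape: (n_functionals, n_points)
consensus = values.mean(axis=0)                # consensus estimate per data point
spread = values.std(axis=0)                    # disagreement across functionals

# Keep points where functionals roughly agree; flag the rest for
# higher-level calculations or exclusion from the training set.
threshold = 1.5  # kcal/mol, illustrative cutoff
reliable = spread < threshold

print("consensus values:", np.round(consensus, 2))
print("functional spread:", np.round(spread, 2))
print("trustworthy for ML training:", reliable)
```

In practice the cutoff and the property of interest would be system-specific; the point is only that disagreement across functionals provides an inexpensive signal of where the data are least trustworthy.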
