Article

Audacity of huge: overcoming challenges of data scarcity and data quality for machine learning in computational materials discovery

Journal

CURRENT OPINION IN CHEMICAL ENGINEERING
Volume 36

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.coche.2021.100778

Funding

  1. National Science Foundation [CBET-1704266, CBET-1846426]
  2. United States Department of Energy [DE-SC0012702, DE-SC0018096, DE-SC0019112, DE-NA0003965]
  3. DARPA [D18AP00039]
  4. Office of Naval Research [N00014-17-1-2956, N00014-18-1-2434, N00014-20-1-2150]
  5. National Science Foundation Graduate Research Fellowship [1122374]
  6. AAAS Marion Milligan Mason Award
  7. Alfred P. Sloan Fellowship in Chemistry


Abstract

Machine learning (ML)-accelerated discovery requires large amounts of high-fidelity data to reveal predictive structure-property relationships. For many properties of interest in materials discovery, the challenging nature and high cost of data generation have resulted in a data landscape that is both scarcely populated and of dubious quality. Data-driven techniques starting to overcome these limitations include the use of consensus across functionals in density functional theory, the development of new functionals or accelerated electronic structure theories, and the detection of where computationally demanding methods are most necessary. When properties cannot be reliably simulated, large experimental data sets can be used to train ML models. In the absence of manual curation, increasingly sophisticated natural language processing and automated image analysis are making it possible to learn structure-property relationships from the literature. Models trained on these data sets will improve as they incorporate community feedback.

