Article

Quantifying the advantage of domain-specific pre-training on named entity recognition tasks in materials science

Journal

PATTERNS
Volume 3, Issue 4, Pages: -

Publisher

CELL PRESS
DOI: 10.1016/j.patter.2022.100488

Funding

  1. Toyota Research Institute through the Accelerated Materials Design and Discovery program
  2. U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division [DE-AC02-05CH11231]
  3. National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility [DE-AC02-05CH11231]

Abstract

Efficiently connecting new materials discoveries to established literature can be achieved by using named entity recognition (NER) to extract structured summary-level data from unstructured materials science text. In this study, the performance of four NER models on materials science datasets is compared, and it is found that domain-specific pre-training provides measurable advantages, with the bidirectional long short-term memory (BiLSTM) model consistently outperforming BERT.
A bottleneck in efficiently connecting new materials discoveries to established literature has arisen due to an increase in publications. This problem may be addressed by using named entity recognition (NER) to extract structured summary-level data from unstructured materials science text. We compare the performance of four NER models on three materials science datasets. The four models include a bidirectional long short-term memory (BiLSTM) model and three transformer models (BERT, SciBERT, and MatBERT) with increasing degrees of domain-specific materials science pre-training. MatBERT improves over the other two BERT-base models by 1% to 12%, implying that domain-specific pre-training provides measurable advantages. Despite its relative architectural simplicity, the BiLSTM model consistently outperforms BERT, perhaps due to its domain-specific pre-trained word embeddings. Furthermore, MatBERT and SciBERT outperform the original BERT model to a greater extent in the small-data limit. MatBERT's higher-quality predictions should accelerate the extraction of structured data from materials science literature.
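NER models like those compared here typically emit a per-token label sequence in a BIO-style scheme (e.g., `B-MAT` opens a material mention, `I-MAT` continues it, `O` marks non-entity tokens); the structured records are then recovered by grouping contiguous tags into entity spans. A minimal, model-agnostic sketch of that decoding step follows. The tag names (`MAT`, `SMT`) are illustrative placeholders, not necessarily the label set used in the paper's datasets:

```python
def bio_decode(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    entities = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new span, closing any open one.
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag of the same type extends the open span.
            current_tokens.append(token)
        else:
            # "O" (or an inconsistent I- tag) closes the open span.
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = ["LiFePO4", "cathodes", "were", "made", "by", "solid-state", "reaction"]
tags   = ["B-MAT",   "O",        "O",    "O",    "O",  "B-SMT",       "I-SMT"]
print(bio_decode(tokens, tags))
# [('MAT', 'LiFePO4'), ('SMT', 'solid-state reaction')]
```

Whichever model produces the tag sequence (BiLSTM or a BERT variant), this decoding step is what turns token-level predictions into the summary-level records the paper targets.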

