Article

Automated Chemical Reaction Extraction from Scientific Literature

Journal

JOURNAL OF CHEMICAL INFORMATION AND MODELING
Volume 62, Issue 9, Pages 2035-2045

Publisher

AMER CHEMICAL SOC
DOI: 10.1021/acs.jcim.1c00284

Keywords

-

Funding

  1. DARPA Accelerated Molecular Discovery (AMD) program [HR00111920025]
  2. Machine Learning for Pharmaceutical Discovery and Synthesis Consortium (MLPDS)
  3. Defense Threat Reduction Agency [HDTRA12110013]


Access to structured chemical reaction data is of key importance for chemists in performing bench experiments and in modern applications like computer-aided drug design. Existing reaction databases are generally populated by human curators through manual abstraction from published literature (e.g., patents and journals), which is time consuming and labor intensive, especially with the exponential growth of chemical literature in recent years. In this study, we focus on developing automated methods for extracting reactions from chemical literature. We consider journal publications as the target source of information; they are more comprehensive and better represent the latest developments in chemistry than patents, but they are less formulaic in their descriptions of reactions. To implement the reaction extraction system, we first devised a chemical reaction schema, primarily comprising a central product and a set of associated reaction roles such as reactants, catalyst, solvent, and so on. We formulate the task as a structure prediction problem and solve it with a two-stage deep learning framework consisting of product extraction and reaction role labeling. Both models are built upon Transformer-based encoders, which are adaptively pretrained using domain- and task-relevant unlabeled data. Our models are shown to be both effective and data efficient, achieving an F1 score of 76.2% in product extraction and 78.7% in role extraction, with only hundreds of annotated reactions.
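The two-stage framework described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the control flow only: the paper's actual system uses trained Transformer-based encoders for both stages, whereas here simple keyword rules stand in for the models so the pipeline is runnable end to end. All cue words, role names, and function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-stage reaction-extraction pipeline:
# stage 1 extracts product tokens, stage 2 labels reaction roles
# relative to each product. Keyword rules act as placeholders for
# the Transformer-based taggers used in the actual paper.

PRODUCT_CUES = {"yielded", "afforded", "gave"}          # assumed cue words
ROLE_CUES = {"with": "Reactants", "in": "Solvent", "using": "Catalyst"}

def extract_products(tokens):
    """Stage 1: return indices of tokens predicted to be products.
    Placeholder rule: the token following a product cue word."""
    return [i + 1 for i, t in enumerate(tokens[:-1]) if t.lower() in PRODUCT_CUES]

def label_roles(tokens, product_idx):
    """Stage 2: for one product, assign reaction roles to other tokens.
    Placeholder rule: the token following each role cue word."""
    roles = {"Product": tokens[product_idx]}
    for i, t in enumerate(tokens[:-1]):
        role = ROLE_CUES.get(t.lower())
        if role is not None and i + 1 != product_idx:
            roles[role] = tokens[i + 1]
    return roles

def extract_reactions(sentence):
    """Run both stages: one structured record per extracted product."""
    tokens = sentence.replace(",", "").split()
    return [label_roles(tokens, p) for p in extract_products(tokens)]

reactions = extract_reactions(
    "Treatment of benzaldehyde with NaBH4 in methanol gave benzyl alcohol"
)
# One record with Product, Reactants, and Solvent roles filled in.
```

The key design point mirrored here is that role labeling is conditioned on a specific product span, so a sentence describing multiple products yields one structured reaction record per product.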

