Article

MG-BERT: leveraging unsupervised atomic representation learning for molecular property prediction

Journal

BRIEFINGS IN BIOINFORMATICS
Volume 22, Issue 6

Publisher

Oxford University Press
DOI: 10.1093/bib/bbab152

Keywords

molecular property prediction; molecular graph BERT; atomic representation; deep learning; self-supervised learning

Funding

  1. Changsha Municipal Natural Science Foundation [kq2014144]
  2. Changsha Science and Technology Bureau project [kq2001034]
  3. National Key Research & Development project by the Ministry of Science and Technology of China [2018YFB1003203]
  4. State Key Laboratory of High-Performance Computing [201901-11]
  5. National Science Foundation of China [U1811462]


This study introduces molecular graph BERT (MG-BERT), a model that integrates the message-passing mechanism of graph neural networks into BERT and is pretrained with a self-supervised strategy, yielding context-sensitive atomic representations and strong performance in molecular property prediction.
Motivation: Accurate and efficient prediction of molecular properties is one of the fundamental issues in drug design and discovery pipelines. Traditional feature engineering-based approaches require extensive expertise in feature design and selection. With the development of artificial intelligence (AI) technologies, data-driven methods have shown clear advantages over feature engineering-based methods in many domains. Nevertheless, when applied to molecular property prediction, AI models usually suffer from the scarcity of labeled data and generalize poorly.

Results: In this study, we propose molecular graph BERT (MG-BERT), which integrates the local message-passing mechanism of graph neural networks (GNNs) into the powerful BERT model to facilitate learning from molecular graphs. Furthermore, we propose an effective self-supervised learning strategy, masked atoms prediction, to pretrain the MG-BERT model on a large amount of unlabeled data and mine contextual information in molecules. We found that, after pretraining, the MG-BERT model generates context-sensitive atomic representations and can transfer the learned knowledge to the prediction of a variety of molecular properties. The experimental results show that the pretrained MG-BERT model, with only a little extra fine-tuning, consistently outperforms state-of-the-art methods on all 11 ADMET datasets. Moreover, MG-BERT leverages attention mechanisms to focus on the atomic features most relevant to the target property, giving the trained model excellent interpretability. MG-BERT requires no hand-crafted features as input and is more reliable owing to this interpretability, providing a novel framework for developing state-of-the-art models for a wide range of drug discovery tasks.
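To make the two mechanisms in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: the layer sizes, the 15% mask rate, the atom vocabulary indexed by atomic number and the toy three-atom molecule are all illustrative assumptions. It shows (1) self-attention restricted by the bond adjacency matrix, one way to realize the local message passing the abstract describes, and (2) the masked atoms prediction pretraining objective.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphMaskedSelfAttention(nn.Module):
    """Self-attention restricted by the molecular graph: atom i may attend to
    atom j only if they are bonded (adj[i, j] == 1) or if i == j."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x, adj):
        # adj: (batch, n_atoms, n_atoms) 0/1 bond matrix. Self-loops ensure
        # every attention row has at least one allowed position.
        eye = torch.eye(adj.size(-1), device=adj.device).unsqueeze(0)
        allowed = (adj + eye).clamp(max=1.0).bool()
        # MultiheadAttention expects True where attention is FORBIDDEN, with
        # shape (batch * n_heads, n_atoms, n_atoms).
        forbid = (~allowed).repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(x, x, x, attn_mask=forbid)
        return out


class MGBertLayer(nn.Module):
    """One transformer encoder block whose attention is graph-masked."""

    def __init__(self, d_model=128, n_heads=4, d_ff=256):
        super().__init__()
        self.attn = GraphMaskedSelfAttention(d_model, n_heads)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, adj):
        x = self.norm1(x + self.attn(x, adj))
        return self.norm2(x + self.ffn(x))


class MGBertPretrainer(nn.Module):
    """Embeds atom-type tokens, runs graph-masked encoder layers and predicts
    an element type for every atom (scored only at masked positions)."""

    def __init__(self, n_atom_types=120, d_model=128, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(n_atom_types + 1, d_model)  # +1 for [MASK]
        self.mask_id = n_atom_types
        self.layers = nn.ModuleList(
            [MGBertLayer(d_model) for _ in range(n_layers)]
        )
        self.head = nn.Linear(d_model, n_atom_types)

    def forward(self, atom_ids, adj):
        h = self.embed(atom_ids)
        for layer in self.layers:
            h = layer(h, adj)
        return self.head(h)


def masked_atom_loss(model, atom_ids, adj, mask_rate=0.15):
    """Masked atoms prediction: replace ~15% of atom tokens with [MASK] and
    train the model to recover the original types from bonded context."""
    mask = torch.rand(atom_ids.shape, device=atom_ids.device) < mask_rate
    mask[:, 0] |= ~mask.any(dim=1)  # mask at least one atom per molecule
    corrupted = atom_ids.masked_fill(mask, model.mask_id)
    logits = model(corrupted, adj)
    return F.cross_entropy(logits[mask], atom_ids[mask])


# Toy usage: the heavy-atom graph of ethanol (C-C-O), batch of one molecule.
atom_ids = torch.tensor([[6, 6, 8]])          # atomic numbers as token ids
adj = torch.tensor([[[0.0, 1.0, 0.0],
                     [1.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0]]])
loss = masked_atom_loss(MGBertPretrainer(), atom_ids, adj)
loss.backward()

Restricting each atom's attention to its bonded neighbours makes one encoder layer behave like one round of message passing; stacking layers enlarges the receptive field, while the BERT-style masking forces the model to infer an atom's type from its chemical context.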

