Article

ProteinBERT: a universal deep-learning model of protein sequence and function

Journal

BIOINFORMATICS
Volume 38, Issue 8, Pages 2102-2110

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/bioinformatics/btac020

Keywords

-

Funding

  1. Israel Science Foundation (ISF) [2753/20]

Abstract

Self-supervised deep language modeling has shown unprecedented success across natural language tasks, and has recently been repurposed to biological sequences. However, existing models and pretraining methods are designed and optimized for text analysis. We introduce ProteinBERT, a deep language model specifically designed for proteins. Our pretraining scheme combines language modeling with a novel task of Gene Ontology (GO) annotation prediction. We introduce novel architectural elements that make the model highly efficient and flexible to long sequences. The architecture of ProteinBERT combines local (per-residue) and global (whole-protein) representations, allowing end-to-end processing of both types of inputs and outputs. ProteinBERT obtains near state-of-the-art performance, and sometimes exceeds it, on multiple benchmarks covering diverse protein properties (including protein structure, post-translational modifications and biophysical attributes), despite using a far smaller and faster model than competing deep-learning methods. Overall, ProteinBERT provides an efficient framework for rapidly training protein predictors, even with limited labeled data.
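
To make the dual-track design concrete, the following is a minimal conceptual sketch in Keras of the architecture the abstract describes: a local (per-residue) track pretrained on recovering corrupted sequence tokens, and a global (whole-protein) track pretrained on recovering GO annotations. This is not the authors' released implementation; all layer sizes, names (SEQ_LEN, D_LOCAL, D_GLOBAL, the cross-track wiring, etc.) are illustrative assumptions.

```python
# Minimal conceptual sketch (NOT the authors' code) of ProteinBERT's
# two-track idea. All dimensions and layer choices below are assumptions
# for illustration only.
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, VOCAB, N_GO, D_LOCAL, D_GLOBAL = 512, 26, 8943, 128, 512

seq_in = layers.Input(shape=(SEQ_LEN,), dtype='int32', name='sequence_tokens')
go_in = layers.Input(shape=(N_GO,), dtype='float32', name='go_annotations')

# Local track: embeddings + dilated convolutions instead of self-attention,
# so cost grows linearly (not quadratically) with sequence length.
local = layers.Embedding(VOCAB, D_LOCAL)(seq_in)
for dilation in (1, 2, 4):
    local = layers.Conv1D(D_LOCAL, 9, dilation_rate=dilation,
                          padding='same', activation='gelu')(local)

# Global track: dense layers over the binary GO annotation vector.
glob = layers.Dense(D_GLOBAL, activation='gelu')(go_in)

# Cross-track information exchange (the paper describes broadcast layers and
# global attention; a simple pooled shortcut stands in for them here).
glob = layers.Concatenate()([glob, layers.GlobalAveragePooling1D()(local)])
glob = layers.Dense(D_GLOBAL, activation='gelu')(glob)
local = layers.Add()([
    local,
    layers.Dense(D_LOCAL)(layers.RepeatVector(SEQ_LEN)(glob)),
])

# Two pretraining heads: recover corrupted residues + predict GO annotations.
lm_out = layers.Dense(VOCAB, activation='softmax', name='token_recovery')(local)
go_out = layers.Dense(N_GO, activation='sigmoid', name='go_recovery')(glob)

model = tf.keras.Model([seq_in, go_in], [lm_out, go_out])
model.compile(optimizer='adam',
              loss={'token_recovery': 'sparse_categorical_crossentropy',
                    'go_recovery': 'binary_crossentropy'})
```

The point this sketch illustrates is the design choice claimed in the abstract: replacing quadratic self-attention over residues with convolutions plus a separate global track is what allows the model to stay small and fast while flexibly handling long sequences.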

Authors

Nadav Brandes, Dan Ofer, Yam Peleg, Nadav Rappoport, Michal Linial
