4.5 Review

Pretraining model for biological sequence data

Journal

BRIEFINGS IN FUNCTIONAL GENOMICS
Volume 20, Issue 3, Pages 181-195

Publisher

Oxford University Press
DOI: 10.1093/bfgp/elab025

Keywords

biological sequence; pretraining model; deep learning

Funding

  1. National Natural Science Foundation of China [61872309, 61972138, 62002111]
  2. Fundamental Research Funds for the Central Universities [531118010355]
  3. China Postdoctoral Science Foundation [2019M662770]
  4. Hunan Provincial Natural Science Foundation of China [2020JJ4215]
  5. Key Research and Development Program of Changsha [kq2004016]
  6. Changsha Municipal Natural Science Foundation [kq2014058]


This article comprehensively reviews pretraining models for biological sequence data: it introduces biological sequences and their datasets, summarizes popular pretraining models in four categories, discusses the role of pretraining models in downstream tasks, proposes a novel pretraining scheme for protein sequences together with a multitask benchmark, and addresses challenges and future directions for pretraining models of biological sequences.
With the development of high-throughput sequencing technology, biological sequence data reflecting life information have become increasingly accessible. Particularly against the background of the COVID-19 pandemic, biological sequence data play an important role in detecting diseases, analyzing disease mechanisms and discovering specific drugs. In recent years, pretraining models that emerged in natural language processing have attracted widespread attention in many research fields, not only because they decrease training cost but also because they improve performance on downstream tasks. Pretraining models are used to embed biological sequences and extract features from large biological sequence corpora in order to comprehensively understand the biological sequence data. In this survey, we provide a broad review of pretraining models for biological sequence data. We first introduce biological sequences and the corresponding datasets, including brief descriptions and access links. Subsequently, we systematically summarize popular pretraining models for biological sequences in four categories: CNN, word2vec, LSTM and Transformer. Then, we present applications of the proposed pretraining models on downstream tasks to explain their role. Next, we provide a novel pretraining scheme for protein sequences and a multitask benchmark for protein pretraining models. Finally, we discuss the challenges and future directions of pretraining models for biological sequences.
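The word2vec category mentioned in the abstract treats overlapping k-mers as the "words" of a sequence corpus and learns a fixed vector for each from its context. The sketch below is a minimal illustration of that idea, not the paper's own pipeline: the toy DNA sequences, the choice k = 3, and all hyperparameters are assumptions for demonstration, and it relies on the gensim package.

    # Minimal word2vec-style pretraining sketch for DNA sequences.
    # Toy data and hyperparameters are illustrative assumptions, not the
    # paper's settings; requires `pip install gensim`.
    from gensim.models import Word2Vec

    def kmer_tokens(seq: str, k: int = 3) -> list[str]:
        """Split a sequence into overlapping k-mers, the 'words' of the corpus."""
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    # A real pretraining corpus would contain millions of sequences.
    sequences = ["ATGCGTACGTTAGC", "GGCATCGATCGTAA", "TTACGCGATCGGCA"]
    corpus = [kmer_tokens(s, k=3) for s in sequences]

    # Skip-gram (sg=1) learns one vector per k-mer from its context window;
    # the trained vectors can then be used to embed sequences for downstream tasks.
    model = Word2Vec(sentences=corpus, vector_size=64, window=5,
                     min_count=1, sg=1, epochs=50)

    print(model.wv["ATG"][:5])  # first 5 dimensions of the learned vector for "ATG"

Unlike this static-embedding approach, the LSTM- and Transformer-based categories the survey covers produce context-dependent representations, which is one reason the review separates the four model families.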


Reviews

Primary Rating

4.5 (not enough ratings)

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -
