Review

Pre-trained models for natural language processing: A survey

Journal

Science China Technological Sciences
Volume 63, Issue 10, Pages 1872-1897

Publisher

Science Press
DOI: 10.1007/s11431-020-1647-3

Keywords

deep learning; neural network; natural language processing; pre-trained model; distributed representation; word embedding; self-supervised learning; language modelling

Funding

  1. National Natural Science Foundation of China [61751201, 61672162]
  2. Shanghai Municipal Science and Technology Major Project [2018SHZDZX01]
  3. ZJLab

Abstract

Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy from four different perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions of PTMs for future research. This survey is intended to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.
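
To make the idea of adapting a PTM's knowledge to a downstream task concrete, the following is a minimal sketch, not taken from the paper, of fine-tuning a pre-trained encoder for sentence classification. It assumes the Hugging Face transformers library and PyTorch; the model name, example sentences, and labels are illustrative placeholders.

```python
# Minimal sketch: adapting a pre-trained model to a downstream task by fine-tuning.
# Assumes the Hugging Face `transformers` library and PyTorch; the model name,
# toy sentences, and labels are illustrative placeholders, not from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # any pre-trained encoder could be substituted
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy downstream data: binary sentiment classification.
texts = ["A readable and thorough survey.", "The taxonomy felt incomplete."]
labels = torch.tensor([1, 0])

# Tokenize and run one fine-tuning step: the pre-trained encoder supplies the
# representations, and the task head plus the encoder weights are updated.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # loss computed against the toy labels
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()

# Inference on a new sentence after adaptation.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Pre-training helps downstream tasks.",
                               return_tensors="pt")).logits
print(logits.argmax(dim=-1))
```

In practice, the survey's adaptation step would iterate over a full labeled dataset for several epochs; the single gradient step above only illustrates the mechanics of reusing pre-trained weights for a new task.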


