Journal
ENGINEERING
Volume 25, Pages 51-65
Publisher
ELSEVIER
DOI: 10.1016/j.eng.2022.04.024
Keywords
Pre-trained models; Natural language processing
Pre-trained language models have achieved striking success in natural language processing (NLP), leading to a paradigm shift from supervised learning to pre-training followed by fine-tuning. The NLP community has witnessed a surge of research interest in improving pre-trained models. This article presents a comprehensive review of representative work and recent progress in the NLP field and introduces the taxonomy of pre-trained models. We first give a brief introduction of pre-trained models, followed by characteristic methods and frameworks. We then introduce and analyze the impact and challenges of pre-trained models and their downstream applications. Finally, we briefly conclude and address future research directions in this field.
(c) 2022 THE AUTHORS. Published by Elsevier Ltd on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
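The two-stage paradigm the abstract describes can be illustrated with a deliberately tiny sketch: first build a representation from unlabeled text (standing in for pre-training), then adapt it to a downstream task using only a few labeled examples (standing in for fine-tuning). The corpus, labels, and centroid classifier below are invented for illustration only; real pre-trained models learn deep contextual representations with objectives such as masked language modeling, not bag-of-words counts.

```python
from collections import Counter

# Stage 1: "pre-training" on unlabeled text -- here, just fixing a
# vocabulary and a count-based feature space. (Toy stand-in for a
# learned representation.)
unlabeled_corpus = [
    "the movie was great and fun",
    "the film was boring and dull",
    "great fun great film",
]
vocab = sorted({w for sent in unlabeled_corpus for w in sent.split()})
word_to_idx = {w: i for i, w in enumerate(vocab)}

def featurize(sentence):
    """Map a sentence to a count vector over the pre-built vocabulary."""
    counts = Counter(w for w in sentence.split() if w in word_to_idx)
    return [counts[w] for w in vocab]

# Stage 2: "fine-tuning" on a small labeled set -- a nearest-centroid
# classifier over the pre-trained feature space. Only this stage sees
# task labels, mirroring how fine-tuning adapts a frozen-or-shared
# representation to a downstream task.
labeled = [("great fun", 1), ("boring dull", 0)]
centroids = {}
for label in (0, 1):
    vecs = [featurize(s) for s, y in labeled if y == label]
    centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]

def predict(sentence):
    """Assign the label whose centroid is closest in feature space."""
    vec = featurize(sentence)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(vec, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(predict("a great and fun movie"))  # -> 1 (positive)
```

The point of the sketch is the division of labor: the expensive, label-free stage produces a reusable representation, and the cheap, label-hungry stage only has to fit a small model on top of it.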