4.1 Article

Paradigm Shift in Natural Language Processing

Journal

Machine Intelligence Research
Volume 19, Issue 3, Pages 169-183

Publisher

Springer Nature
DOI: 10.1007/s11633-022-1331-6

Keywords

Natural language processing; pre-trained language models; deep learning; sequence-to-sequence; paradigm shift

Funding

  1. National Natural Science Foundation of China [62022027]

Abstract

In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm to solve a range of tasks such as POS-tagging, named entity recognition (NER), and chunking, and adopt the classification paradigm to solve tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift, which is solving one NLP task in a new paradigm by reformulating the task. The paradigm shift has achieved great success on many tasks and is becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model to handle diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.
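As a concrete illustration of the reformulation the abstract describes, the sketch below solves the same sentiment-analysis task in two paradigms: once in the classification paradigm with a fine-tuned classifier, and once in the cloze-style masked-LM (prompting) paradigm. This is a minimal sketch, not code from the paper; the Hugging Face `transformers` pipelines are real APIs, but the choice of `bert-base-uncased` and the verbalizer words `great`/`terrible` are illustrative assumptions.

```python
# Minimal sketch (not from the paper): one task, two paradigms.
from transformers import pipeline

review = "The movie was absolutely wonderful."

# Classification paradigm: a fine-tuned classifier maps text -> label
# via a task-specific classification head.
classifier = pipeline("sentiment-analysis")
print(classifier(review))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Masked-LM / prompting paradigm: reformulate the task as cloze filling,
# so the pre-trained masked language model answers by predicting a token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
prompt = f"{review} It was [MASK]."
# Restrict scoring to two verbalizer words standing in for the labels.
for pred in fill_mask(prompt, targets=["great", "terrible"]):
    print(pred["token_str"], round(pred["score"], 4))
```

The second variant reuses the pre-training objective itself instead of adding a task-specific head, which is the basic mechanism behind the prompt-based paradigm shift the survey discusses.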
