Article

Language Models: Past, Present, and Future

Journal

Communications of the ACM
Volume 65, Issue 7, Pages 56-63

Publisher

Association for Computing Machinery
DOI: 10.1145/3490443

Keywords

-


Pre-trained language models have shown remarkable advantages in improving NLP task accuracy and serving as universal language processing tools.
NATURAL LANGUAGE PROCESSING (NLP) has undergone revolutionary changes in recent years. Thanks to the development and use of pre-trained language models, remarkable achievements have been made in many applications. Pre-trained language models offer two major advantages. One advantage is that they can significantly boost the accuracy of many NLP tasks. For example, one can exploit the BERT model to achieve performance higher than that of humans in language understanding [8]. One can also leverage the GPT-3 model to generate texts that resemble human writing [3]. A second advantage of pre-trained language models is that they are universal language processing tools. To conduct a machine learning-based task in traditional NLP, one had to label a large amount of data to train a model.
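
The excerpt's point about universality is that a single pre-trained model can be applied to many tasks without labeling data yourself. As a minimal sketch (not from the article), the following assumes the Hugging Face `transformers` library; since GPT-3 is not openly downloadable, the openly available GPT-2 stands in for the generation example, and the library's default sentiment model stands in for BERT-style understanding.

```python
# Illustrative sketch: pre-trained models as "universal" NLP tools.
# Model choices here are stand-ins, not the article's own experiments.
from transformers import pipeline

# Language understanding: a BERT-family classifier used off the shelf,
# with no task-specific training or labeled data of our own.
classifier = pipeline("sentiment-analysis")
print(classifier("Pre-trained language models are remarkably useful."))

# Language generation: a GPT-family model (GPT-2 as a stand-in for
# GPT-3) producing a human-like text continuation.
generator = pipeline("text-generation", model="gpt2")
print(generator("Natural language processing has", max_length=20))
```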

