Article

Large Language Models are Not Models of Natural Language: They are Corpus Models

Journal

IEEE ACCESS
Volume 10, Pages 61970-61979

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3182505

Keywords

Grammar; Linguistics; Deep learning; Computational modeling; Syntactics; Production; Task analysis; Natural language processing; deep learning; syntax; linguistics; language model; automatic programming; neural networks

Funding

  1. News Angler Project through the Norwegian Research Council [275872]


Abstract
Natural Language Processing (NLP) has become one of the leading application areas in the current Artificial Intelligence boom. Transfer learning has enabled large deep learning neural networks trained on the language modeling task to vastly improve performance in almost all downstream language tasks. Interestingly, when the language models are trained with data that includes software code, they demonstrate remarkable abilities in generating functioning computer code from natural language specifications. We argue that this creates a conundrum for the claim that eliminative neural models are a radical restructuring in our understanding of cognition in that they eliminate the need for symbolic abstractions like generative phrase structure grammars. Because the syntax of programming languages is by design determined by phrase structure grammars, neural models that produce syntactic code are apparently uninformative about the theoretical foundations of programming languages. The demonstration that neural models perform well on tasks that involve clearly symbolic systems, proves that they cannot be used as an argument that language and other cognitive systems are not symbolic. Finally, we argue as a corollary that the term language model is misleading and propose the adoption of the working term corpus model instead, which better reflects the genesis and contents of the model.
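The abstract's key premise is that the syntax of programming languages is by design determined by phrase structure grammars. A minimal sketch of this idea (illustrative only, not from the paper): a toy expression language whose well-formed strings are exactly those derivable from a small context-free grammar, checked by a recursive-descent recognizer.

```python
# Toy phrase structure (context-free) grammar for arithmetic expressions:
#   Expr   -> Term (('+' | '-') Term)*
#   Term   -> Factor (('*' | '/') Factor)*
#   Factor -> NUMBER | '(' Expr ')'
# The grammar, not any corpus statistics, decides which strings are syntactic.

import re

TOKEN = re.compile(r"\s*(\d+|[+\-*/()])")

def tokenize(src):
    """Split the input into number and operator tokens."""
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"unexpected character at position {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def recognize(src):
    """Return True iff src is derivable from the start symbol Expr."""
    tokens = tokenize(src)
    i = 0  # index of the next unconsumed token

    def expr():
        nonlocal i
        term()
        while i < len(tokens) and tokens[i] in "+-":
            i += 1
            term()

    def term():
        nonlocal i
        factor()
        while i < len(tokens) and tokens[i] in "*/":
            i += 1
            factor()

    def factor():
        nonlocal i
        if i < len(tokens) and tokens[i].isdigit():
            i += 1
        elif i < len(tokens) and tokens[i] == "(":
            i += 1
            expr()
            if i >= len(tokens) or tokens[i] != ")":
                raise SyntaxError("expected ')'")
            i += 1
        else:
            raise SyntaxError("expected number or '('")

    try:
        expr()
        return i == len(tokens)  # every token must be consumed
    except SyntaxError:
        return False
```

Here `recognize("(1+2)*3")` returns True while `recognize("1+*2")` returns False: membership in the language is fully fixed by the grammar, which is the symbolic abstraction the abstract argues neural code-generation models cannot eliminate.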


