Article

Brains and algorithms partially converge in natural language processing

Journal

COMMUNICATIONS BIOLOGY
Volume 5, Issue 1, Pages: -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s42003-022-03036-1

Keywords

-

Funding

  1. Fyssen Foundation [ANR-17-EURE-0017]
  2. Bettencourt and Fyssen Foundations

This study examines the similarity between deep language models and human brain responses by comparing neural network models trained on word-prediction tasks. The researchers find that this similarity depends primarily on the models' ability to predict words from context, and that it reveals the formation and maintenance of perceptual, lexical, and compositional representations within each cortical region.

Charlotte Caucheteux and Jean-Remi King examine the ability of transformer neural networks trained on word-prediction tasks to fit representations in the human brain measured with fMRI and MEG. Their results provide further insight into the workings of transformer language models and their relevance to brain responses.

Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However, what drives this similarity currently remains unknown. Here, we systematically compare a variety of deep language models to identify the computational principles that lead them to generate brain-like representations of sentences. Specifically, we analyze the brain responses to 400 isolated sentences in a large cohort of 102 subjects, each recorded for two hours with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). We then test where and when each of these algorithms maps onto the brain responses. Finally, we estimate how the architecture, training, and performance of these models independently account for the generation of brain-like representations. Our analyses reveal two main findings. First, the similarity between the algorithms and the brain primarily depends on their ability to predict words from context. Second, this similarity reveals the rise and maintenance of perceptual, lexical, and compositional representations within each cortical region. Overall, this study shows that modern language algorithms partially converge towards brain-like solutions, thus delineating a promising path to unravel the foundations of natural language processing.
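The mapping procedure described in the abstract is commonly implemented as an encoding model: a regularized linear regression fitted from model activations to recorded brain responses, scored by the correlation between predicted and held-out responses. The following is a minimal, hypothetical sketch of that idea on simulated data; the array shapes, ridge penalties, and variable names are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical "brain score" sketch: fit a ridge regression from model
# activations to (simulated) brain responses, then score each voxel by the
# Pearson correlation between predicted and held-out responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated data: 400 "sentences", 128-dim model activations, 10 "voxels".
n_sentences, n_features, n_voxels = 400, 128, 10
X = rng.standard_normal((n_sentences, n_features))        # model activations
W = rng.standard_normal((n_features, n_voxels))           # unknown true map
Y = X @ W + rng.standard_normal((n_sentences, n_voxels))  # noisy "brain" data

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Fit one cross-validated ridge model mapping activations to all voxels.
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Brain score per voxel: correlation of predicted vs. held-out responses.
Y_hat_c = Y_hat - Y_hat.mean(axis=0)
Y_te_c = Y_te - Y_te.mean(axis=0)
scores = (Y_hat_c * Y_te_c).sum(axis=0) / (
    np.linalg.norm(Y_hat_c, axis=0) * np.linalg.norm(Y_te_c, axis=0)
)
print(scores.mean())  # average brain score across simulated voxels
```

In this framing, comparing architectures or training regimes amounts to swapping in different activation matrices `X` and asking which yields higher held-out correlations.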

Authors

Charlotte Caucheteux; Jean-Remi King
