Article

Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning

Journal

JOURNAL OF NEUROSCIENCE
Volume 41, Issue 18, Pages 4100-4119

Publisher

SOC NEUROSCIENCE
DOI: 10.1523/JNEUROSCI.1152-20.2021

Keywords

distributional semantics; fMRI; lexical semantics; sentence comprehension; voxelwise encoding; word embedding

Funding

  1. University of Rochester Medical Center Schmitt Program on Integrative Neuroscience award
  2. Intelligence Advanced Research Projects Activity (IARPA) via the Air Force Research Laboratory [FA8650-14-C-7357]
  3. NSF CAREER award [1652127]
  4. NSF Division of Behavioral and Cognitive Sciences, Directorate for Social, Behavioral & Economic Sciences [1652127] Funding Source: National Science Foundation

Abstract

Understanding how and where in the brain sentence-level meaning is constructed from words presents a major scientific challenge. Recent advances have begun to explain brain activation elicited by sentences using vector models of word meaning derived from patterns of word co-occurrence in text corpora. These studies have helped map out semantic representation across a distributed brain network spanning temporal, parietal, and frontal cortex. However, it remains unclear whether activation patterns within regions reflect unified representations of sentence-level meaning, as opposed to superpositions of context-independent component words, because models have typically represented sentences as bags of words that neglect sentence-level structure. To address this issue, we interrogated fMRI activation elicited as 240 sentences were read by 14 participants (9 female, 5 male), using sentences encoded by a recurrent deep artificial neural network trained on a sentence-inference task (InferSent). Recurrent connections and nonlinear filters enable InferSent to transform sequences of word vectors into unified propositional sentence representations suitable for evaluating intersentence entailment relations. Using voxelwise encoding models, we demonstrate that InferSent predicts elements of fMRI activation that cannot be predicted by bag-of-words models or by sentence models that use grammatical rules to assemble word vectors. This effect occurs throughout a distributed network, which suggests that propositional sentence-level meaning is represented within and across multiple cortical regions rather than at any single site. In follow-up analyses, we place these results in the context of other deep network approaches (ELMo and BERT) and estimate the degree of unpredicted neural signal using an experiential semantic model and cross-participant encoding.
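The voxelwise encoding approach described in the abstract can be sketched as follows: fit a regularized linear map from sentence representations to each voxel's response on training sentences, then score each voxel by how well the model predicts held-out responses. This is a minimal illustration on synthetic data; the array sizes, the ridge penalty, and all variable names are assumptions for the sketch, not the authors' actual pipeline or the InferSent encoder itself.

```python
import numpy as np

# Synthetic stand-ins: 240 "sentences" (as in the study), each represented by
# an embedding vector, and simulated responses for a set of voxels.
rng = np.random.default_rng(0)
n_sent, emb_dim, n_vox = 240, 64, 500
X = rng.standard_normal((n_sent, emb_dim))            # sentence embeddings
true_W = 0.1 * rng.standard_normal((emb_dim, n_vox))  # hidden ground-truth map
Y = X @ true_W + 0.5 * rng.standard_normal((n_sent, n_vox))  # voxel responses

# Hold out the last 40 sentences for evaluation.
train, test = slice(0, 200), slice(200, 240)

# Ridge regression fit jointly for all voxels (closed-form solution).
lam = 10.0
W_hat = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(emb_dim),
                        X[train].T @ Y[train])

# Score each voxel by the Pearson correlation between predicted and observed
# held-out responses; comparing these per-voxel scores across feature sets
# (e.g., bag-of-words vs. order-sensitive embeddings) is the model comparison.
pred = X[test] @ W_hat
r = np.array([np.corrcoef(pred[:, v], Y[test, v])[0, 1] for v in range(n_vox)])
print(f"mean held-out correlation across voxels: {r.mean():.3f}")
```

Swapping in a different feature matrix `X` (for instance, averaged word vectors versus a recurrent sentence encoding) while holding the rest of the pipeline fixed is what allows the kind of comparison the abstract reports.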

