Journal
COGNITION
Volume 125, Issue 2, Pages 288-308
Publisher
ELSEVIER
DOI: 10.1016/j.cognition.2012.06.006
Keywords
Sentence generation; Neural networks; Language acquisition; Embodied cognition
Abstract
In this article we present a neural network model of sentence generation. The network has both technical and conceptual innovations. Its main technical novelty is in its semantic representations: the messages which form the input to the network are structured as sequences, so that message elements are delivered to the network one at a time. Rather than learning to linearise a static semantic representation as a sequence of words, our network rehearses a sequence of semantic signals, and learns to generate words from selected signals. Conceptually, the network's use of rehearsed sequences of semantic signals is motivated by work in embodied cognition, which posits that the structure of semantic representations has its origin in the serial structure of sensorimotor processing. The rich sequential structure of the network's semantic inputs also allows it to incorporate certain Chomskyan ideas about innate syntactic knowledge and parameter-setting, as well as a more empiricist account of the acquisition of idiomatic syntactic constructions. (c) 2012 Elsevier B.V. All rights reserved.
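The abstract's central mechanism, that message elements are delivered to the network one at a time and words are generated from selected semantic signals rather than from a static semantic representation, can be illustrated with a toy recurrent generator. This is a minimal hypothetical sketch, not the authors' architecture: the signal names, vocabulary, and Elman-style recurrence are illustrative assumptions, and the network below is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and one-hot "semantic signals"; the paper's
# actual semantic representations are richer and sensorimotor-grounded.
VOCAB = ["dog", "chases", "cat"]
SIGNALS = dict(zip(["AGENT=dog", "ACTION=chase", "PATIENT=cat"], np.eye(3)))

class TinyRecurrentGenerator:
    """Minimal Elman-style sketch: semantic signals arrive one per
    time step, and a word is emitted from each rehearsed signal."""

    def __init__(self, n_sig, n_hidden, n_words):
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_sig))
        self.W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.5, size=(n_words, n_hidden))

    def generate(self, signal_seq):
        h = np.zeros(self.W_rec.shape[0])
        words = []
        for s in signal_seq:  # one semantic element per step, in sequence
            h = np.tanh(self.W_in @ s + self.W_rec @ h)
            words.append(VOCAB[int(np.argmax(self.W_out @ h))])
        return words

gen = TinyRecurrentGenerator(n_sig=3, n_hidden=8, n_words=len(VOCAB))
msg = [SIGNALS["AGENT=dog"], SIGNALS["ACTION=chase"], SIGNALS["PATIENT=cat"]]
print(gen.generate(msg))  # untrained weights, so the word choices are arbitrary
```

The point of the sketch is the control flow: the loop consumes the message as a sequence, mirroring the idea that generation rehearses serially structured semantic input instead of linearising a static representation.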