Journal
NEUROCOMPUTING
Volume 71, Issue 7-9, Pages 1172-1179
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2007.11.025
Keywords
recurrent neural network; self-organization; next-word prediction; systematicity
As potential candidates for explaining human cognition, connectionist models of sentence processing must demonstrate their ability to behave systematically, generalizing from a small training set. It has recently been shown that simple recurrent networks and, to a greater extent, echo-state networks possess some ability to generalize in artificial language learning tasks. We investigate this capacity for a recently introduced model that consists of separately trained modules: a recursive self-organizing module for learning temporal context representations and a feedforward two-layer perceptron module for next-word prediction. We show that the performance of this architecture is comparable with echo-state networks. Taken together, these results weaken the criticism of connectionist approaches, showing that various general recursive connectionist architectures share the potential of behaving systematically. (c) 2008 Elsevier B.V. All rights reserved.
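The abstract describes a two-module architecture: a recursive self-organizing map that builds temporal context representations, and a separately trained two-layer perceptron that predicts the next word from the map's activity. The sketch below is a minimal, hypothetical illustration of that idea on a toy random word sequence, not the authors' implementation: the map update uses a winner-only rule (a full RecSOM would use a neighborhood kernel), and the vocabulary, corpus, and all hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): a 4-word vocabulary and a random word sequence.
V = 4
seq = rng.integers(0, V, size=500)
onehot = np.eye(V)

# --- Module 1: recursive self-organizing map for temporal context ---
N = 25                                  # map units
alpha, beta = 2.0, 0.5                  # input vs. context weighting (assumed)
W = rng.normal(0, 0.1, (N, V))          # input weights
C = rng.normal(0, 0.1, (N, N))          # context weights

def recsom_step(x, y_prev, lr=0.1):
    """Activate the map on (input, previous activity), adapt the winner."""
    d = alpha * ((W - x) ** 2).sum(1) + beta * ((C - y_prev) ** 2).sum(1)
    y = np.exp(-d)
    win = d.argmin()
    W[win] += lr * (x - W[win])         # winner-only update; a real RecSOM
    C[win] += lr * (y_prev - C[win])    # also adapts neighboring units
    return y

# Phase 1: self-organize context representations over the sequence.
y_prev = np.zeros(N)
contexts, targets = [], []
for t in range(len(seq) - 1):
    y_prev = recsom_step(onehot[seq[t]], y_prev)
    contexts.append(y_prev / (y_prev.sum() + 1e-12))  # normalized activity
    targets.append(onehot[seq[t + 1]])
X, T = np.array(contexts), np.array(targets)

# --- Module 2: separately trained two-layer perceptron, next-word prediction ---
H = 20
W1 = rng.normal(0, 0.5, (N, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, V)); b2 = np.zeros(V)

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    p = np.exp(z - z.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)        # softmax over the vocabulary
    g = (p - T) / len(X)                # cross-entropy gradient at the output
    gh = (g @ W2.T) * (1 - h ** 2)      # backprop through tanh hidden layer
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum(0)
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(0)

print(p.shape)  # one probability distribution over the vocabulary per step
```

The key design point the abstract emphasizes is that the two modules are trained separately: the map learns its context representations unsupervised, and only the perceptron receives the next-word prediction signal.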