4.7 Article

A neural network for learning the meaning of objects and words from a featural representation

Journal

NEURAL NETWORKS
Volume 63, Pages 234-253

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2014.11.009

Keywords

Semantic memory; Lexical memory; Conceptual representation; Hebb rule; Dominant features; Category formation

Abstract

The present work investigates how complex semantics can be extracted from the statistics of input features using an attractor neural network. The study focuses on how feature dominance and feature distinctiveness can be coded naturally through Hebbian training, and on how similarity among objects can be managed. The model includes a lexical network (which represents word-forms) and a semantic network composed of several areas: each area codes for a different feature and is topologically organized by similarity. Synapses are created with a Hebb rule that uses different values for the pre-synaptic and post-synaptic thresholds, producing patterns of asymmetrical synapses. The work uses a simple taxonomy of schematic objects (each represented as a vector of features), with shared features (which define categories) and distinctive features (which identify individual members) occurring with different frequencies. The trained network solves simple object recognition and object naming tasks, maintaining a distinction between categories and their members and assigning different roles to dominant and marginal features: marginal features are not evoked in memory when thinking of an object, but they facilitate its reconstruction when provided as input. Finally, the topological organization of features allows objects to be recognized even when some of their features are modified. (C) 2014 Elsevier Ltd. All rights reserved.
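
To illustrate the kind of learning rule the abstract describes, below is a minimal sketch (in Python/NumPy, not the authors' code) of a Hebbian update with separate pre-synaptic and post-synaptic thresholds, applied to toy feature vectors that mix shared "category" features with distinctive "member" features. All parameter values, pattern choices, and names (theta_pre, theta_post, gamma) are illustrative assumptions, not taken from the paper.

import numpy as np

n_units = 8            # units in one toy feature area
theta_pre = 0.2        # pre-synaptic threshold (assumed value)
theta_post = 0.6       # post-synaptic threshold (assumed value)
gamma = 0.05           # learning rate (assumed value)

# Toy binary feature vectors: three shared "category" features plus one
# distinctive "member" feature each, loosely mirroring the schematic
# taxonomy described in the abstract; member_a occurs more often.
shared   = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)
member_a = shared + np.array([0, 0, 0, 1, 0, 0, 0, 0], dtype=float)
member_b = shared + np.array([0, 0, 0, 0, 1, 0, 0, 0], dtype=float)
patterns = [member_a, member_b, member_a]

W = np.zeros((n_units, n_units))   # W[i, j]: synapse from unit j to unit i

for epoch in range(50):
    for x in patterns:
        # Hebb rule with distinct thresholds:
        #   dW[i, j] = gamma * (x[i] - theta_post) * (x[j] - theta_pre)
        # Because theta_pre != theta_post, dW[i, j] != dW[j, i] in general,
        # so the learned synapse matrix is asymmetrical.
        W += gamma * np.outer(x - theta_post, x - theta_pre)
        np.fill_diagonal(W, 0.0)   # no self-connections

print("max |W - W.T| (asymmetry):", np.abs(W - W.T).max())
print("shared -> frequent member's feature:", W[3, 0])
print("shared -> rare member's feature    :", W[4, 0])

In this toy run the matrix comes out asymmetrical, and the distinctive feature that co-occurs more often with the shared features ends up with stronger incoming weights, giving a rough flavor of how frequency of occurrence could separate dominant from marginal features. The actual model in the paper (a lexical network coupled to several topologically organized semantic areas) is considerably richer than this sketch.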
