Article

A Bio-Inspired Incremental Learning Architecture for Applied Perceptual Problems

Journal

COGNITIVE COMPUTATION
Volume 8, Issue 5, Pages 924-934

Publisher

SPRINGER
DOI: 10.1007/s12559-016-9389-5

Keywords

Perceptual learning; Self-organization; Incremental learning; Biological modeling

Abstract

We present a biologically inspired architecture for incremental learning that remains resource-efficient even in the face of the very high data dimensionalities (> 1000) typically associated with perceptual problems. In particular, we investigate how a new perceptual (object) class can be added to a trained architecture without retraining, while avoiding the catastrophic forgetting effects typically associated with such scenarios. At the heart of the presented architecture lies a generative description of the perceptual space by a self-organized approach which at the same time approximates the neighborhood relations in this space on a two-dimensional plane. This approximation, which closely imitates the topographic organization of the visual cortex, allows an efficient local update rule for incremental learning even at very high dimensionalities, which we demonstrate by tests on the well-known MNIST benchmark. We complement the model with a biologically plausible short-term memory system, allowing it to retain excellent classification accuracy even while incremental learning is in progress. The short-term memory is additionally used to reinforce new data statistics by replaying previously stored samples during dedicated sleep phases.
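The abstract describes a self-organized, topographically arranged description of the perceptual space with a local update rule, plus a short-term memory that is replayed during dedicated sleep phases. The minimal Python sketch below illustrates what such a mechanism can look like under common assumptions: a standard 2-D self-organizing map, a Gaussian neighborhood restricted to a small lattice radius, and a simple replay buffer. The names (SOMLayer, sleep_replay) and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class SOMLayer:
    """Minimal 2-D self-organizing map with a locally restricted update rule.
    Illustrative sketch only; radius schedule, learning rate, and map size are
    assumptions, not values taken from the paper."""

    def __init__(self, grid=(10, 10), dim=784, lr=0.1, radius=2, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        self.w = rng.normal(0.0, 0.1, size=(grid[0], grid[1], dim))
        self.lr = lr
        self.radius = radius  # only units within this lattice radius are updated

    def bmu(self, x):
        """Best-matching unit: lattice coordinates of the closest prototype."""
        d = np.linalg.norm(self.w - x, axis=2)
        return np.unravel_index(np.argmin(d), self.grid)

    def update(self, x):
        """Local update: adapt only prototypes in a small lattice neighborhood
        of the BMU, so the per-sample cost stays bounded even for
        high-dimensional inputs."""
        bi, bj = self.bmu(x)
        r = self.radius
        for i in range(max(0, bi - r), min(self.grid[0], bi + r + 1)):
            for j in range(max(0, bj - r), min(self.grid[1], bj + r + 1)):
                dist2 = (i - bi) ** 2 + (j - bj) ** 2
                h = np.exp(-dist2 / (2.0 * max(r, 1) ** 2))  # Gaussian neighborhood
                self.w[i, j] += self.lr * h * (x - self.w[i, j])


def sleep_replay(som, buffer, passes=1):
    """Replay previously stored samples from a short-term buffer, analogous to
    the 'sleep phases' mentioned in the abstract."""
    for _ in range(passes):
        for x in buffer:
            som.update(x)


# Usage sketch: incrementally present samples of a new class, keep a few of
# them in a short-term buffer, then consolidate via replay.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    som = SOMLayer(grid=(10, 10), dim=784)   # 784 matches MNIST-sized inputs
    short_term_memory = []
    for x in rng.normal(size=(100, 784)):    # stand-in for new-class samples
        som.update(x)
        if len(short_term_memory) < 32:      # small replay buffer
            short_term_memory.append(x)
    sleep_replay(som, short_term_memory, passes=2)
```

Because only units within a small lattice radius of the best-matching unit are touched, the per-sample cost does not grow with the total number of map units, which is what makes a local update rule of this kind attractive at high input dimensionality.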

