Article

Learning in deep neural networks and brains with similarity-weighted interleaved learning

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.2115229119

Keywords

complementary learning systems; learning; memory; neural networks; memory consolidation

Funding

  1. Defense Advanced Research Projects Agency [HR0011-18-2-0021]
  2. NIH [R01 NS121764]


Significance

Understanding how the brain learns throughout a lifetime remains a long-standing challenge. This study explores a method of rapidly learning new information by interleaving new and old knowledge, demonstrates its feasibility in artificial neural networks, and proposes a theoretical model of how this learning approach could be implemented in the brain.

Abstract

Understanding how the brain learns throughout a lifetime remains a long-standing challenge. In artificial neural networks (ANNs), incorporating novel information too rapidly results in catastrophic interference, i.e., abrupt loss of previously acquired knowledge. Complementary Learning Systems Theory (CLST) suggests that new memories can be gradually integrated into the neocortex by interleaving new memories with existing knowledge. This approach, however, has been assumed to require interleaving all existing knowledge every time something new is learned, which is implausible because it is time-consuming and requires a large amount of data. We show that deep, nonlinear ANNs can learn new information by interleaving only a subset of old items that share substantial representational similarity with the new information. By using such similarity-weighted interleaved learning (SWIL), ANNs can learn new information rapidly with a similar accuracy level and minimal interference, while using a much smaller number of old items presented per epoch (fast and data-efficient). SWIL is shown to work with various standard classification datasets (Fashion-MNIST, CIFAR10, and CIFAR100), deep neural network architectures, and in sequential learning frameworks. We show that data efficiency and speedup in learning new items increase roughly proportionally to the number of nonoverlapping classes stored in the network, which implies an enormous potential speedup in the human brain, which encodes a large number of separate categories. Finally, we propose a theoretical model of how SWIL might be implemented in the brain.
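The core mechanism the abstract describes, selecting which old items to interleave by their representational similarity to the new information, can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes class representations are mean hidden-layer activations and uses cosine similarity as the similarity measure; the function name `swil_sample` and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def swil_sample(old_embeddings, old_labels, new_class_mean, n_old_per_epoch):
    """Pick a similarity-weighted subset of old items to interleave.

    old_embeddings : (N, D) hidden-layer representations of stored old items
    old_labels     : (N,) integer class label of each old item
    new_class_mean : (D,) mean representation of the new class
    """
    # Cosine similarity between each old class mean and the new class mean.
    classes = np.unique(old_labels)
    sims = []
    for c in classes:
        mu = old_embeddings[old_labels == c].mean(axis=0)
        sims.append(mu @ new_class_mean /
                    (np.linalg.norm(mu) * np.linalg.norm(new_class_mean) + 1e-12))
    sims = np.clip(np.array(sims), 0.0, None)   # ignore anti-correlated classes
    if sims.sum() == 0:                         # no similar class: fall back to uniform
        sims = np.ones_like(sims)
    class_weights = sims / sims.sum()           # per-class sampling probabilities

    # Give each item its class weight, then draw the interleaving subset.
    item_w = class_weights[np.searchsorted(classes, old_labels)]
    item_w = item_w / item_w.sum()
    return rng.choice(len(old_labels), size=n_old_per_epoch,
                      replace=False, p=item_w)
```

Each training epoch would then mix the new items with only these `n_old_per_epoch` sampled old items, rather than replaying the entire stored dataset, which is the source of the data efficiency the abstract reports.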

