4.8 Article

Overcoming catastrophic forgetting in neural networks

Publisher

NATL ACAD SCIENCES
DOI: 10.1073/pnas.1611835114

Keywords

synaptic consolidation; artificial intelligence; stability-plasticity; continual learning; deep learning

Funding

  1. Wellcome Trust
  2. Engineering and Physical Sciences Research Council
  3. Google Faculty Award
  4. Engineering and Physical Sciences Research Council [EP/M019780/1] Funding Source: researchfish
  5. EPSRC [EP/M019780/1] Funding Source: UKRI

Abstract

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
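The "selectively slowing down learning on the weights important for those tasks" described above is the paper's elastic weight consolidation (EWC) penalty: a quadratic constraint on each parameter, weighted by an estimate of its importance to the old task. The following is a minimal PyTorch-style sketch of that idea, not the authors' implementation; the helper names, the diagonal squared-gradient importance estimate, and the ewc_lambda value are illustrative assumptions.

import torch


def fisher_diagonal(model, old_task_loader, loss_fn):
    # Estimate per-parameter importance for the old task as the average
    # squared gradient of the old-task loss (a diagonal Fisher approximation).
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for inputs, targets in old_task_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(old_task_loader), 1) for n, f in fisher.items()}


def ewc_penalty(model, old_params, fisher, ewc_lambda=100.0):
    # Quadratic penalty that slows learning on weights important for the old
    # task: (lambda / 2) * sum_i F_i * (theta_i - theta_old_i)^2.
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty


# Usage sketch: after finishing the old task, snapshot its parameters with
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# then, while training on the new task, optimize
#   loss = new_task_loss + ewc_penalty(model, old_params, fisher)
# so parameters with large importance are held near their old values while
# unimportant parameters remain free to learn the new task.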
