Article

A reinforcement-based mechanism for discontinuous learning

Publisher

NATL ACAD SCIENCES
DOI: 10.1073/pnas.2215352119

Keywords

reinforcement learning; physics of behavior; foraging; navigation

Funding

  1. NSF-Simons Center for Mathematical & Statistical Analysis of Biology at Harvard [1764269]
  2. Harvard Quantitative Biology Initiative


Significance

Recent experiments with mice navigating a labyrinth have shown a sharp discontinuity in learning, seemingly contradicting the gradual nature of reinforcement learning. Combining biologically plausible reinforcement-learning rules with persistent exploration shows that discontinuous learning arises generically.

Abstract

Problem-solving and reasoning involve mental exploration and navigation in sparse relational spaces. A physical analogue is spatial navigation in structured environments such as a network of burrows. Recent experiments with mice navigating a labyrinth show a sharp discontinuity during learning, corresponding to a distinct moment of sudden insight when mice figure out long, direct paths to the goal. This discontinuity is seemingly at odds with reinforcement learning (RL), which involves a gradual build-up of a value signal during learning. Here, we show that biologically plausible RL rules combined with persistent exploration generically exhibit discontinuous learning. In tree-like structured environments, positive feedback from learning on behavior generates a reinforcement wave with a steep profile. The discontinuity occurs when the wave reaches the starting point. By examining the nonlinear dynamics of reinforcement propagation, we establish a quantitative relationship between the learning rule, the agent's exploration biases, and learning speed. These predictions explain existing data and motivate specific experiments to isolate the phenomenon. Additionally, we characterize the exact learning dynamics of various RL rules for a complex sequential task.
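To make the mechanism concrete, here is a minimal sketch, not the authors' code: a tabular TD(0) agent with non-backtracking (persistent) exploration on a depth-6 binary tree, with the goal node, learning parameters, and update rule all chosen as illustrative assumptions rather than taken from the paper.

```python
import random

DEPTH = 6                         # depth-6 binary tree (hypothetical maze size)
N = 2 ** (DEPTH + 1) - 1          # heap indexing: children of node i are 2i+1, 2i+2
GOAL = N - 1                      # an arbitrary leaf standing in for the goal port
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2 # illustrative values, not fitted to data

V = [0.0] * N                     # learned state values ("reinforcement")

def neighbors(s):
    """Nodes adjacent to s in the tree: parent plus any children."""
    nbrs = [(s - 1) // 2] if s > 0 else []
    nbrs += [c for c in (2 * s + 1, 2 * s + 2) if c < N]
    return nbrs

def choose(s, prev):
    """Persistent exploration: avoid immediate backtracking when possible,
    otherwise act epsilon-greedily on the learned values."""
    options = [n for n in neighbors(s) if n != prev] or neighbors(s)
    if random.random() < EPS:
        return random.choice(options)
    best = max(V[n] for n in options)
    return random.choice([n for n in options if V[n] == best])

def episode(max_steps=5000):
    """One run from the entrance (root) to the goal, learning by TD(0)."""
    s, prev, steps = 0, None, 0
    while s != GOAL and steps < max_steps:
        nxt = choose(s, prev)
        r = 1.0 if nxt == GOAL else 0.0
        # TD(0): value creeps one step backward per visit, so repeated runs
        # build a steep reinforcement front moving from the goal to the start
        V[s] += ALPHA * (r + GAMMA * V[nxt] - V[s])
        prev, s, steps = s, nxt, steps + 1
    return steps

# Nodes on the direct root-to-goal path, to watch the wave front pass by.
path, n = [], GOAL
while n:
    path.append(n)
    n = (n - 1) // 2
path = [0] + path[::-1]

for ep in range(1, 151):
    steps = episode()
    if ep % 15 == 0:
        profile = " ".join(f"{V[n]:.2f}" for n in path)
        print(f"ep {ep:3d}: {steps:4d} steps | V along path: {profile}")
```

Under these assumptions, the printed value profile should steepen and advance from the goal toward the start over episodes, with the episode length collapsing abruptly once the front reaches the entrance, the qualitative signature of discontinuous learning described in the abstract.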

