Article

Working Memory Load Strengthens Reward Prediction Errors

Journal

JOURNAL OF NEUROSCIENCE
Volume 37, Issue 16, Pages 4332-4342

Publisher

SOC NEUROSCIENCE
DOI: 10.1523/JNEUROSCI.2700-16.2017

Keywords

fMRI; reinforcement learning; reward prediction error; working memory

Funding

  1. National Institutes of Health [NS065046, MH099078, MH080066-01]
  2. James S. McDonnell Foundation
  3. Office of Naval Research [MURI N00014-16-1-2832]
  4. National Science Foundation, Directorate for Social, Behavioral & Economic Sciences, Division of Behavioral and Cognitive Sciences [1460604]

Abstract

Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of the multiple memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within the capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning.
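The two-mechanism account in the abstract can be illustrated with a minimal behavioral simulation: a delta-rule RL learner driven by RPEs, mixed with a capacity-limited WM module that holds the last rewarded action for each stimulus. This is a sketch, not the paper's exact model; the parameter names (`alpha`, `capacity`, `w`, `beta`) and the specific mixture rule are illustrative assumptions, and the neural RPE-attenuation finding is not modeled here.

```python
import math
import random

def softmax(values, beta):
    """Softmax over action values with inverse temperature beta."""
    m = max(values)
    exps = [math.exp(beta * (v - m)) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

def run_block(set_size, n_actions=3, trials_per_stim=10,
              alpha=0.15, capacity=3, w=0.8, beta=8.0, seed=0):
    """Simulate one learning block: each stimulus has one correct action.
    Choices mix a WM module (one-shot memory for the last rewarded
    action, reliable only within capacity) with incremental RL.
    Returns mean accuracy over the block. All parameters are
    illustrative, not fitted values from the paper."""
    rng = random.Random(seed)
    correct = [rng.randrange(n_actions) for _ in range(set_size)]
    q = [[1.0 / n_actions] * n_actions for _ in range(set_size)]
    wm = {}  # stimulus -> last rewarded action
    # WM influence shrinks once set size exceeds capacity
    w_eff = w * min(1.0, capacity / set_size)
    rewards = []
    for t in range(trials_per_stim * set_size):
        s = t % set_size
        rl_probs = softmax(q[s], beta)
        if s in wm:
            wm_probs = [1.0 if a == wm[s] else 0.0
                        for a in range(n_actions)]
        else:
            wm_probs = [1.0 / n_actions] * n_actions
        probs = [w_eff * pw + (1.0 - w_eff) * pr
                 for pw, pr in zip(wm_probs, rl_probs)]
        # sample an action from the mixture policy
        draw, cum, a = rng.random(), 0.0, n_actions - 1
        for i, p in enumerate(probs):
            cum += p
            if draw <= cum:
                a = i
                break
        r = 1.0 if a == correct[s] else 0.0
        rpe = r - q[s][a]        # reward prediction error
        q[s][a] += alpha * rpe   # incremental delta-rule RL update
        if r > 0:
            wm[s] = a            # WM stores the rewarded action
        rewards.append(r)
    return sum(rewards) / len(rewards)
```

Run with a small set size (within WM capacity) and a large one (exceeding capacity), and the small-set blocks are learned faster, mirroring the behavioral load effect described in the abstract.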

