Journal
JOURNAL OF NEUROSCIENCE
Volume 37, Issue 16, Pages 4332-4342
Publisher
SOC NEUROSCIENCE
DOI: 10.1523/JNEUROSCI.2700-16.2017
Keywords
fMRI; reinforcement learning; reward prediction error; working memory
Funding
- National Institutes of Health [NS065046, MH099078, MH080066-01]
- James S. McDonnell Foundation
- Office of Naval Research [MURI N00014-16-1-2832]
- National Science Foundation, Directorate for Social, Behavioral & Economic Sciences, Division of Behavioral and Cognitive Sciences [1460604]
Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update the expected values of choice options. This modeling ignores the distinct contributions of the multiple memory and decision-making systems thought to support even simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of the fMRI data showed that the striatum and lateral prefrontal cortex were sensitive to RPEs, as shown previously, but, critically, these signals were reduced when the learning problem was within the capacity of WM. The degree of this neural interaction was related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning.
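The two-mechanism mixture described in the abstract can be sketched in code. The toy model below is an illustrative reconstruction, not the authors' fitted model: it combines an incremental RL module driven by RPEs with a fast, one-shot, capacity-limited, and decaying WM module, mixing their choice policies with a weight that shrinks when the set size exceeds WM capacity. All parameter names and values (learning rate, softmax temperature, capacity, decay) are assumptions chosen for clarity.

```python
import math
import random

class RLWMLearner:
    """Illustrative mixture of a slow RL module (RPE-driven updates)
    and a fast, capacity-limited, delay-sensitive WM module."""

    def __init__(self, n_stimuli, n_actions, alpha=0.1, beta=8.0,
                 wm_capacity=3, wm_decay=0.95):
        self.n_actions = n_actions
        self.alpha = alpha          # RL learning rate (assumed value)
        self.beta = beta            # softmax inverse temperature (assumed)
        self.decay = wm_decay       # WM forgetting toward uniform (assumed)
        # WM influence is scaled down when set size exceeds capacity
        self.w = min(1.0, wm_capacity / n_stimuli)
        q0 = 1.0 / n_actions
        self.q = [[q0] * n_actions for _ in range(n_stimuli)]   # RL values
        self.wm = [[q0] * n_actions for _ in range(n_stimuli)]  # WM traces

    def _softmax(self, values):
        exps = [math.exp(self.beta * v) for v in values]
        z = sum(exps)
        return [e / z for e in exps]

    def action_probs(self, stim):
        # Policy is a weighted mixture of the WM and RL policies
        p_rl = self._softmax(self.q[stim])
        p_wm = self._softmax(self.wm[stim])
        return [self.w * pw + (1 - self.w) * pr
                for pw, pr in zip(p_wm, p_rl)]

    def choose(self, stim, rng=random):
        r = rng.random()
        cum = 0.0
        for a, p in enumerate(self.action_probs(stim)):
            cum += p
            if r < cum:
                return a
        return self.n_actions - 1

    def update(self, stim, action, reward):
        # RL: slow incremental update driven by the reward prediction error
        rpe = reward - self.q[stim][action]
        self.q[stim][action] += self.alpha * rpe
        # WM: fast one-shot storage of the most recent outcome
        self.wm[stim][action] = reward
        # WM traces decay toward uniform on every trial (delay sensitivity)
        u = 1.0 / self.n_actions
        for s in range(len(self.wm)):
            self.wm[s] = [u + self.decay * (v - u) for v in self.wm[s]]
```

With a small set size, `self.w` is high and behavior is dominated by the fast WM traces; with many stimuli, `self.w` shrinks and the slower RPE-driven RL values carry the policy, mirroring the load manipulation in the experiment.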