Journal
Journal of Mathematical Psychology, Volume 66, Pages 59-69
Publisher
Academic Press Inc. (Elsevier Science)
DOI: 10.1016/j.jmp.2015.03.006
Keywords
Reinforcement learning; History dependence; Regression model; Model-based analysis
Funding
- Grants-in-Aid for Scientific Research (KAKENHI) [24700238, 26118506] (Funding Source: KAKEN)
Abstract
Reinforcement learning (RL) models have been widely used to analyze the choice behavior of humans and other animals in a broad range of fields, including psychology and neuroscience. Linear regression-based models that explicitly represent how reward and choice history influence future choices have also been used to model choice behavior. While both approaches have been used independently, the relation between the two models has not been explicitly described. The aim of the present study is to describe this relation and to investigate how the parameters of the RL model mediate the effects of reward and choice history on future choices. To achieve these aims, we performed analytical calculations and numerical simulations. First, we describe a special case in which the RL and regression models provide equivalent predictions of future choices. We then discuss the general properties of the RL model as departures from this special case. We clarify the role of the RL-model parameters, specifically the learning rate, inverse temperature, and outcome value (also referred to as the reward value, reward sensitivity, or motivational value), in the formation of history dependence. (C) 2015 The Author. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
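The following minimal sketch illustrates one form the special case mentioned in the abstract can take: a two-armed bandit in which a Q-learning agent decays the unchosen option's value at the learning rate, so that its softmax logit equals a regression on signed reward history with weights beta * alpha * rho * (1 - alpha)^(lag - 1). The update variant, parameter values, and variable names are illustrative assumptions, not the paper's exact specification.

import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not taken from the paper)
alpha, beta, rho = 0.3, 2.0, 1.0      # learning rate, inverse temperature, outcome value
n_trials = 2000
p_reward = np.array([0.7, 0.3])       # reward probabilities for options A (0) and B (1)

q = np.zeros(2)                       # action values, initialized at zero
choices, rewards, logits_rl = [], [], []

for _ in range(n_trials):
    logit = beta * (q[0] - q[1])      # softmax logit for choosing A over B
    logits_rl.append(logit)
    p_a = 1.0 / (1.0 + np.exp(-logit))
    c = 0 if rng.random() < p_a else 1
    r = float(rng.random() < p_reward[c])

    # Assumed update rule: the chosen value moves toward rho * r, the unchosen
    # value decays at the same rate, so Q_A - Q_B follows a simple recursion.
    q[c] = (1 - alpha) * q[c] + alpha * rho * r
    q[1 - c] = (1 - alpha) * q[1 - c]

    choices.append(c)
    rewards.append(r)

# Regression-model view: logit_t = sum over lags of w_lag * x_(t - lag), where
# x is +r for an A choice and -r for a B choice, and
# w_lag = beta * alpha * rho * (1 - alpha)^(lag - 1).
signed_reward = np.array(rewards) * np.where(np.array(choices) == 0, 1.0, -1.0)
max_lag = 15
w = beta * alpha * rho * (1 - alpha) ** np.arange(max_lag)   # weights for lags 1..max_lag

logits_reg = np.zeros(n_trials)
for t in range(n_trials):
    for lag in range(1, min(max_lag, t) + 1):
        logits_reg[t] += w[lag - 1] * signed_reward[t - lag]

# The two formulations agree up to the error from truncating the lag window.
print("max |RL logit - regression logit|:", np.max(np.abs(np.array(logits_rl) - logits_reg)))

In this sketch the equivalence holds because the value difference Q_A - Q_B obeys a single exponential-smoothing recursion; in the standard RL model, where only the chosen option's value is updated, the history dependence departs from this simple regression form, which is the kind of departure the abstract refers to.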