Article

Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices

Journal

SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE
Volume 15, Issue 6, Pages 695-707

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/scan/nsaa089

Keywords

social decision-making; computational modeling; reinforcement learning; learning rate; prediction error; model-based fMRI

Funding

  1. International Research Training Groups 'CINACS' [DFG GRK 1247]
  2. Research Promotion Fund (FFM) for young scientists of the University Medical Center Hamburg-Eppendorf
  3. National Natural Science Foundation of China [NSFC 71801110]
  4. Ministry of Education in China Project of Humanities and Social Sciences [MOE 18YJC630268]
  5. China Postdoctoral Science Foundation [2018M633270]
  6. Bernstein Award for Computational Neuroscience [BMBF 01GQ1006]
  7. Collaborative Research Center 'Cross-modal learning' [DFG TRR 169]
  8. Collaborative Research in Computational Neuroscience (CRCNS) [BMBF 01GQ1603]
  9. Vienna Science and Technology Fund [WWTF VRG13-007]
  10. Austrian Science Fund [FWF P 32686]

Abstract

Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, the increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla-Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, using simulation, we unpack the functional role of the learning rate and pinpoint what could easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and provide suggestions on how to justify whether observed neural activation is related to the prediction error rather than to outcome valence. Finally, we suggest that posterior predictive checks are a crucial step after model comparison, and we articulate the value of hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models to assist both beginners and advanced users in better implementing and interpreting their model-based analyses.
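The core quantities the abstract refers to (learning rate, prediction error, value update) can be illustrated with a minimal simulation of the Rescorla-Wagner rule. This is an illustrative sketch, not the authors' analysis code; the function name and parameter defaults are assumptions made for the example.

```python
import numpy as np

def simulate_rescorla_wagner(rewards, alpha=0.5, v0=0.0):
    """Simulate value learning with the Rescorla-Wagner rule:

        V(t+1) = V(t) + alpha * (R(t) - V(t)),

    where alpha is the learning rate and R(t) - V(t) is the
    prediction error on trial t.
    """
    values, prediction_errors = [], []
    v = v0
    for r in rewards:
        pe = r - v              # prediction error: outcome minus current expectation
        values.append(v)
        prediction_errors.append(pe)
        v = v + alpha * pe      # learning-rate-weighted update
    return np.array(values), np.array(prediction_errors)

# Example: ten trials with a constant reward of 1.0.
# The value estimate converges toward 1 and the prediction
# error shrinks toward 0, faster for larger alpha.
vals, pes = simulate_rescorla_wagner([1.0] * 10, alpha=0.5)
```

Note that on every trial the prediction error is a deterministic function of the outcome and the current value, which is the source of the outcome/prediction-error collinearity the abstract discusses.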

