Article

The Outcome-Representation Learning Model: A Novel Reinforcement Learning Model of the Iowa Gambling Task

Journal

COGNITIVE SCIENCE
Volume 42, Issue 8, Pages 2534-2561

Publisher

WILEY
DOI: 10.1111/cogs.12688

Keywords

Computational modeling; Reinforcement learning; Substance use; Iowa Gambling Task; Bayesian data analysis; Amphetamine; Heroin; Cannabis

Funding

  1. National Institute on Drug Abuse and Fogarty International Center [R01DA021421]

The Iowa Gambling Task (IGT) is widely used to study decision-making within healthy and psychiatric populations. However, the complexity of the IGT makes it difficult to attribute variation in performance to specific cognitive processes. Several cognitive models have been proposed for the IGT in an effort to address this problem, but currently no single model shows optimal performance for both short- and long-term prediction accuracy and parameter recovery. Here, we propose the Outcome-Representation Learning (ORL) model, a novel model that provides the best compromise between competing models. We test the performance of the ORL model on 393 subjects' data collected across multiple research sites, and we show that the ORL reveals distinct patterns of decision-making in substance-using populations. Our work highlights the importance of using multiple model comparison metrics to make valid inference with cognitive models and sheds light on learning mechanisms that play a role in underweighting of rare events.
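To make the abstract's framing concrete, the sketch below simulates a generic delta-rule reinforcement learner on a simplified four-deck task. This is an illustrative toy model only, not the authors' ORL model: the payoff scheme, parameter names (`lr_pos`, `lr_neg`, `beta`), and the use of separate learning rates for positive and negative prediction errors are assumptions chosen to show the style of model the paper compares, not its actual equations.

```python
import math
import random

def softmax(values, beta=1.0):
    """Convert deck values into choice probabilities via a softmax rule."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def simulate_igt(n_trials=100, lr_pos=0.3, lr_neg=0.1, beta=2.0, seed=0):
    """Simulate a delta-rule learner on a simplified four-deck task.

    Decks 0-1 ("bad" decks) pay large wins but larger occasional losses;
    decks 2-3 ("good" decks) pay small wins with small losses. These
    payoffs are a hypothetical simplification of the real IGT schedule.
    """
    rng = random.Random(seed)
    ev = [0.0] * 4            # expected value tracked per deck
    choices = []
    for _ in range(n_trials):
        probs = softmax(ev, beta)
        deck = rng.choices(range(4), weights=probs)[0]
        # Simplified net outcomes: bad decks lose on average, good decks win
        if deck < 2:
            outcome = 1.0 if rng.random() < 0.5 else -1.5
        else:
            outcome = 0.5 if rng.random() < 0.5 else -0.25
        # Separate learning rates for positive vs. negative prediction
        # errors -- one way a model can produce underweighting of rare events
        pe = outcome - ev[deck]
        lr = lr_pos if pe >= 0 else lr_neg
        ev[deck] += lr * pe
        choices.append(deck)
    return ev, choices
```

Fitting such a model to trial-by-trial choices is what lets studies like this one attribute group differences (e.g., in substance-using populations) to specific parameters rather than to raw task performance.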
