Article

Attainability of boundary points under reinforcement learning

Journal

Games and Economic Behavior
Volume 53, Issue 1, Pages 110-125

Publisher

Academic Press Inc. (Elsevier Science)
DOI: 10.1016/j.geb.2004.08.002

Keywords

learning in games; reinforcement learning; stochastic approximation; replicator dynamics

Abstract

This paper investigates the properties of the most common form of reinforcement learning, the basic model of Erev and Roth [Amer. Econ. Rev. 88 (1998) 848-881]. Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points lie on the boundary of the state space (for example, pure strategy equilibria), standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points and provide a new, more general result: this model of learning converges with probability zero to fixed points that are unstable under the Maynard Smith, or adjusted, version of the evolutionary replicator dynamics. For two-player games these are the fixed points that are linearly unstable under the standard replicator dynamics. (c) 2004 Elsevier Inc. All rights reserved.
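To make the learning process concrete, below is a minimal simulation sketch of the basic Erev-Roth reinforcement rule in a 2x2 game. The payoff matrix, horizon, and seed are illustrative choices, not taken from the paper; the essential features of the basic model are that payoffs are positive, each realised payoff is added to the propensity of the strategy just played, and choice probabilities are the normalised propensities.

```python
import numpy as np

# Basic Erev-Roth reinforcement learning in a 2x2 game (illustrative sketch).
# Payoffs are kept strictly positive so propensities remain positive.
rng = np.random.default_rng(0)

A = np.array([[2.0, 0.5],   # row player's payoffs (hypothetical game)
              [0.5, 1.0]])
B = A.T                     # column player's payoffs (symmetric for illustration)

q_row = np.ones(2)          # initial propensities over the two pure strategies
q_col = np.ones(2)

for t in range(50_000):
    p_row = q_row / q_row.sum()   # choice probabilities = normalised propensities
    p_col = q_col / q_col.sum()
    i = rng.choice(2, p=p_row)
    j = rng.choice(2, p=p_col)
    # Basic model: the realised payoff reinforces only the strategy played;
    # all other propensities are left unchanged.
    q_row[i] += A[i, j]
    q_col[j] += B[i, j]

print("empirical mixed strategies:", q_row / q_row.sum(), q_col / q_col.sum())
```

Because total propensities grow over time, the effective step size of the induced process on the simplex shrinks roughly like 1/t, which is what connects the model to stochastic approximation and to the replicator dynamics named in the abstract. For reference, the standard replicator dynamics and the Maynard Smith (adjusted) version are standard objects in evolutionary game theory and differ by a rescaling by average payoff:

\dot{x}_i = x_i\left[(Ax)_i - x^{\top}Ax\right]
\qquad\text{vs.}\qquad
\dot{x}_i = \frac{x_i\left[(Ax)_i - x^{\top}Ax\right]}{x^{\top}Ax}.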

Authors

I am an author on this paper
Click your name to claim this paper and add it to your profile.

Reviews

Primary Rating

4.1
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-
Rate this paper

Recommended

No Data Available
No Data Available