Article; Proceedings Paper

A two-layered multi-agent reinforcement learning model and algorithm

Journal

JOURNAL OF NETWORK AND COMPUTER APPLICATIONS
Volume 30, Issue 4, Pages 1366-1376

Publisher

ACADEMIC PRESS LTD - ELSEVIER SCIENCE LTD
DOI: 10.1016/j.jnca.2006.09.004

Keywords

reinforcement learning; multi-agent; layered model


Multi-agent reinforcement learning has mainly been investigated from two perspectives: concurrent learning and game theory. The former chiefly applies to cooperative multi-agent systems, while the latter usually applies to coordinated multi-agent systems. Both approaches, however, face problems such as credit assignment and multiple Nash equilibria. In this paper, we propose a new multi-agent reinforcement learning model and algorithm, LMRL, from a layered perspective. The LMRL model consists of an off-line training layer, which uses single-agent reinforcement learning to acquire stationary strategy knowledge, and an on-line interaction layer, which uses multi-agent reinforcement learning together with that strategy knowledge, revised dynamically, to interact with the environment. An agent equipped with LMRL improves its generalization capability, adaptability and coordination ability. Experiments show that LMRL can outperform both single-agent reinforcement learning and Nash-Q. (c) 2006 Published by Elsevier Ltd.
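
The abstract gives no pseudocode, so the following is only a minimal illustrative sketch of the two-layer idea it describes: an off-line layer that learns a stationary Q-table with single-agent Q-learning, and an on-line layer that initializes from that table and keeps revising it during interaction. All names here (ChainEnv, offline_train, OnlineAgent) are hypothetical and not taken from the paper, and the on-line layer is shown as plain Q-learning rather than the authors' multi-agent update rule.

    import random
    from collections import defaultdict

    class ChainEnv:
        """Toy 5-state chain, included only to make the sketch runnable."""
        actions = [0, 1]          # 0 = move left, 1 = move right

        def reset(self):
            self.s = 0
            return self.s

        def step(self, a):
            self.s = max(0, self.s - 1) if a == 0 else min(4, self.s + 1)
            reward = 1.0 if self.s == 4 else 0.0
            return self.s, reward, self.s == 4

    def offline_train(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
        """Off-line layer: single-agent tabular Q-learning against a
        stationary environment; the result plays the role of the
        'stationary strategy knowledge'."""
        q = defaultdict(float)    # (state, action) -> value
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                a = (random.choice(env.actions) if random.random() < eps
                     else max(env.actions, key=lambda x: q[(s, x)]))
                s2, r, done = env.step(a)
                best_next = max(q[(s2, x)] for x in env.actions)
                q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
                s = s2
        return dict(q)

    class OnlineAgent:
        """On-line layer: starts from the off-line Q-table and keeps
        revising it while interacting with the environment."""

        def __init__(self, q_prior, actions, alpha=0.05, gamma=0.9, eps=0.1):
            self.q = defaultdict(float, q_prior)   # reuse stationary knowledge
            self.actions, self.alpha = actions, alpha
            self.gamma, self.eps = gamma, eps

        def act(self, state):
            if random.random() < self.eps:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            self.q[(state, action)] += self.alpha * (
                reward + self.gamma * best_next - self.q[(state, action)])

    if __name__ == "__main__":
        env = ChainEnv()
        q_prior = offline_train(env)                     # layer 1: off-line training
        agent = OnlineAgent(q_prior, ChainEnv.actions)   # layer 2: on-line interaction
        s, done = env.reset(), False
        while not done:
            a = agent.act(s)
            s2, r, done = env.step(a)
            agent.update(s, a, r, s2)
            s = s2

In LMRL's terms, q_prior stands in for the strategy knowledge handed from the off-line training layer to the on-line interaction layer, where it continues to be revised dynamically.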
