Article; Proceedings Paper

A two-layered multi-agent reinforcement learning model and algorithm

Journal

JOURNAL OF NETWORK AND COMPUTER APPLICATIONS
Volume 30, Issue 4, Pages 1366-1376

Publisher

ACADEMIC PRESS LTD - ELSEVIER SCIENCE LTD
DOI: 10.1016/j.jnca.2006.09.004

Keywords

reinforcement learning; multi-agent; layered model

Abstract

Multi-agent reinforcement learning technologies are mainly investigated from two perspectives: concurrency and game theory. The former chiefly applies to cooperative multi-agent systems, while the latter usually applies to coordinated multi-agent systems. However, problems such as credit assignment and multiple Nash equilibria remain for agents under both approaches. In this paper, we propose a new multi-agent reinforcement learning model and algorithm, LMRL, from a layered perspective. The LMRL model is composed of an off-line training layer, which employs single-agent reinforcement learning to acquire stationary strategy knowledge, and an online interaction layer, which employs multi-agent reinforcement learning and dynamically revised strategy knowledge to interact with the environment. An agent with LMRL can improve its generalization capability, adaptability, and coordination ability. Experiments show that LMRL can outperform both single-agent reinforcement learning and Nash-Q. (c) 2006 Published by Elsevier Ltd.
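
This record contains only the abstract, so the paper's actual LMRL algorithm is not reproduced here. As a rough illustration of the two-layer idea it describes, the sketch below (the class names, the toy chain environment, and the hyperparameters are illustrative assumptions, not from the paper) acquires a Q-table off-line with single-agent Q-learning, then reuses and keeps revising that strategy knowledge during an on-line interaction phase.

```python
import random
from collections import defaultdict

# Minimal sketch of a two-layer agent, assuming a simple tabular setting:
# Layer 1 (off-line training) learns stationary strategy knowledge with
# single-agent Q-learning; Layer 2 (on-line interaction) starts from that
# knowledge and keeps revising it while interacting with the environment.

class ChainEnv:
    """Toy 5-state chain: move right to reach the goal state for reward 1."""
    def __init__(self, length=5):
        self.length = length
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):  # action: 0 = left, 1 = right
        self.state = max(0, self.state - 1) if action == 0 else self.state + 1
        done = self.state >= self.length - 1
        return self.state, (1.0 if done else 0.0), done

class TwoLayerAgent:
    def __init__(self, actions=(0, 1), alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions, self.alpha = actions, alpha
        self.gamma, self.epsilon = gamma, epsilon
        self.q = defaultdict(float)  # strategy knowledge: (state, action) -> value

    def choose(self, state):
        # epsilon-greedy action selection over the current strategy knowledge
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn_episode(self, env):
        # one episode of standard Q-learning backups
        state, done = env.reset(), False
        while not done:
            action = self.choose(state)
            nxt, reward, done = env.step(action)
            best_next = max(self.q[(nxt, a)] for a in self.actions)
            self.q[(state, action)] += self.alpha * (
                reward + self.gamma * best_next - self.q[(state, action)])
            state = nxt

    def train_offline(self, env, episodes=200):
        # Layer 1: acquire stationary strategy knowledge via single-agent learning.
        for _ in range(episodes):
            self.learn_episode(env)

    def interact_online(self, env, episodes=50):
        # Layer 2: keep the learned Q-table and continue revising it on-line.
        for _ in range(episodes):
            self.learn_episode(env)

agent = TwoLayerAgent()
agent.train_offline(ChainEnv())    # off-line training layer
agent.interact_online(ChainEnv())  # on-line interaction layer, reusing the same knowledge
```

In the paper, the on-line layer would face other agents and use a multi-agent learning rule; here both layers share plain Q-learning only to keep the sketch self-contained.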
