Article

SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multiagent Reinforcement Learning

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2021.3089493

Keywords

Training; Optimization; Reinforcement learning; Nash equilibrium; Task analysis; History; Learning systems; Deep reinforcement learning (DRL); multiagent reinforcement learning (MARL); multiagent systems; StarCraft Multiagent Challenge (SMAC)

Abstract

This article proposes a method named SMIX(λ) to learn a stable and generalizable centralized value function (CVF) through off-policy training. Using the λ-return as a proxy for computing the temporal difference error, a modified QMIX network structure is adopted to train the model. Experiments demonstrate the significant advantages of the proposed SMIX(λ) method in multiagent reinforcement learning.
Learning a stable and generalizable centralized value function (CVF) is a crucial but challenging task in multiagent reinforcement learning (MARL), as it has to deal with a joint action space that grows exponentially with the number of agents. This article proposes an approach, named SMIX(λ), that uses off-policy training to achieve this while avoiding the greedy assumption commonly made in CVF learning. As importance sampling for such off-policy training is both computationally costly and numerically unstable, we propose to use the λ-return as a proxy for computing the temporal difference (TD) error. With this new loss objective, we adopt a modified QMIX network structure as the base to train our model. By further connecting it with the Q(λ) approach from a unified expectation-correction viewpoint, we show that the proposed SMIX(λ) is equivalent to Q(λ) and hence shares its convergence properties, without suffering from the curse-of-dimensionality problem inherent in MARL. Experiments on the StarCraft Multiagent Challenge (SMAC) benchmark demonstrate that our approach not only outperforms several state-of-the-art MARL methods by a large margin but can also be used as a general tool to improve the overall performance of other centralized training with decentralized execution (CTDE)-type algorithms by enhancing their CVFs.
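
The key computational idea in the abstract is to replace costly and unstable importance-sampling corrections with a λ-return target for the centralized value function. The sketch below shows how a λ-return can be computed by the standard backward recursion over an episode; it is a minimal illustration under assumed names and conventions (lambda_returns, gamma, lam), not code from the paper.

import numpy as np

def lambda_returns(rewards, values, dones, gamma=0.99, lam=0.8):
    """Compute lambda-returns G_t^lambda by backward recursion:

        G_t^lambda = r_t + gamma * ((1 - lam) * V_{t+1} + lam * G_{t+1}^lambda)

    rewards: shape (T,), per-step team rewards
    values:  shape (T+1,), centralized value estimates, where values[T]
             is the bootstrap value after the last step (0 if terminal)
    dones:   shape (T,), 1.0 where the episode terminates at step t
    """
    T = len(rewards)
    returns = np.zeros(T)
    g = values[T]  # bootstrap from the value after the final transition
    for t in reversed(range(T)):
        g = rewards[t] + gamma * (1.0 - dones[t]) * (
            (1.0 - lam) * values[t + 1] + lam * g
        )
        returns[t] = g
    return returns

The resulting G_t^λ values would then serve as regression targets for the centralized value, i.e., minimizing (G_t^λ - Q_tot(s_t, u_t))^2 over sampled episodes, in place of an importance-weighted off-policy correction.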
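The abstract also states that a modified QMIX network structure is adopted as the base model. For orientation, here is a sketch of the standard QMIX monotonic mixing network, which combines per-agent utilities into Q_tot using non-negative, state-conditioned mixing weights; the SMIX(λ)-specific modifications are not detailed in the abstract, so this layout is an assumption based on the original QMIX design.

import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """QMIX-style mixing network: combines per-agent Q-values into Q_tot,
    with mixing weights generated from the global state and constrained
    to be non-negative, enforcing dQ_tot/dQ_i >= 0 (monotonicity)."""

    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks produce the mixing weights/biases from the state.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2
        return q_tot.view(b, 1)

The torch.abs on the hypernetwork outputs keeps the mixing weights non-negative, so an argmax over each agent's individual Q-values is consistent with an argmax over Q_tot; this is what makes decentralized execution compatible with the centralized training target described above.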
