3.8 Proceedings Paper

Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation

Journal

Publisher

JMLR (Journal of Machine Learning Research)

Keywords

-

Funding

  1. Swiss National Science Foundation [SNSF 200021 172781]
  2. NCCR Automation grant [51NF40 180545]
  3. European Union's ERC [815943]

Abstract

We consider model-based multi-agent reinforcement learning, where the environment transition model is unknown and can only be learned via expensive interactions with the environment. We propose H-MARL (Hallucinated Multi-Agent Reinforcement Learning), a novel sample-efficient algorithm that balances exploration, i.e., learning about the environment, and exploitation, i.e., achieving good equilibrium performance in the underlying general-sum Markov game. H-MARL builds high-probability confidence intervals around the unknown transition model and sequentially updates them based on newly observed data. Using these, it constructs an optimistic hallucinated game for the agents, for which equilibrium policies are computed at each round. We consider general statistical models (e.g., Gaussian processes, deep ensembles) and policy classes (e.g., deep neural networks), and theoretically analyze our approach by bounding the agents' dynamic regret. Moreover, we provide a convergence rate to the equilibria of the underlying Markov game. We demonstrate our approach experimentally on an autonomous driving simulation benchmark. H-MARL learns successful equilibrium policies after a few interactions with the environment and significantly improves performance compared to non-optimistic exploration methods.
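
The abstract describes a round-based loop: fit a statistical model of the unknown transitions, build high-probability confidence intervals around it, form an optimistic "hallucinated" game, compute equilibrium policies in that game, and then act with them to gather more data. The sketch below illustrates that loop under heavy simplifying assumptions; it is not the authors' implementation. The toy environment (`true_step`, `rewards`), the two-agent constant-action policy class, the small grid over the hallucinated control `eta`, and the best-response loop standing in for a proper equilibrium solver are all hypothetical choices made for illustration, with Gaussian-process confidence intervals standing in for the general statistical models mentioned in the paper.

```python
# Minimal, illustrative sketch of an H-MARL-style loop (not the authors' code).
# Assumptions: a 1-D state, two agents with a few discrete actions each, a GP
# model of the unknown transition, and a crude best-response loop in place of
# a proper general-sum equilibrium solver.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
ACTIONS = np.array([-1.0, 0.0, 1.0])        # per-agent action set (assumed)
HORIZON, EPISODES, BETA = 10, 5, 2.0        # episode length, rounds, optimism scale

def true_step(s, a1, a2):                   # unknown environment (toy dynamics)
    return 0.8 * s + 0.3 * a1 - 0.2 * a2 + 0.05 * rng.standard_normal()

def rewards(s, a1, a2):                     # general-sum per-agent rewards (toy)
    return -(s - 1.0) ** 2 - 0.1 * a1 ** 2, -(s + 1.0) ** 2 - 0.1 * a2 ** 2

def hallucinated_step(gp, s, a1, a2, eta):
    # Optimistic transition: the mean plus a hallucinated control eta in [-1, 1]
    # that steers the next state within the confidence interval mu +/- BETA * sigma.
    mu, sigma = gp.predict(np.array([[s, a1, a2]]), return_std=True)
    return float(mu[0] + BETA * eta * sigma[0])

def rollout_value(gp, a1, a2):
    # Evaluate a constant-action joint policy inside the hallucinated game,
    # greedily picking eta from a small grid at every step (a simplification:
    # eta is chosen to maximize the agents' total reward).
    s, r1, r2 = 0.0, 0.0, 0.0
    for _ in range(HORIZON):
        best = max(((sum(rewards(hallucinated_step(gp, s, a1, a2, e), a1, a2)), e)
                    for e in (-1.0, 0.0, 1.0)))
        s = hallucinated_step(gp, s, a1, a2, best[1])
        u1, u2 = rewards(s, a1, a2)
        r1, r2 = r1 + u1, r2 + u2
    return r1, r2

X, Y = [], []                                # dataset of (s, a1, a2) -> s'
a1, a2 = 0.0, 0.0                            # current joint policy (constant actions)
for episode in range(EPISODES):
    if X:                                    # refit the statistical model on all data
        gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), np.array(Y))
        for _ in range(5):                   # crude best-response "equilibrium" loop
            a1 = max(ACTIONS, key=lambda a: rollout_value(gp, a, a2)[0])
            a2 = max(ACTIONS, key=lambda a: rollout_value(gp, a1, a)[1])
    s = 0.0                                  # execute the joint policy in the real game
    for _ in range(HORIZON):
        s_next = true_step(s, a1, a2)
        X.append([s, a1, a2]); Y.append(s_next)
        s = s_next
    print(f"episode {episode}: joint policy (a1={a1:+.0f}, a2={a2:+.0f})")
```

The hallucinated control `eta` is what makes the planner optimistic: it may choose any transition inside the model's confidence interval, so poorly explored regions look attractive and the agents are driven to gather the data that shrinks those intervals.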

Authors


Reviews

Primary rating

3.8
Not enough ratings

Secondary ratings

Novelty
-
Significance
-
Scientific rigor
-
