Article

Policy Evaluation and Seeking for Multiagent Reinforcement Learning via Best Response

Journal

IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Volume 67, Issue 4, Pages 1898-1913

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAC.2021.3085171

Keywords

Best response; multiagent reinforcement learning; policy evaluation and seeking; sink equilibrium; stochastic stability

Funding

  1. National Natural Science Foundation of China [61374034]
  2. China Scholarship Council
  3. U.S. Air Force Office of Scientific Research [FA9550-15-1-0138]


Abstract

This article introduces a metric based on a game-theoretic solution concept for the evaluation, ranking, and computation of policies in multiagent learning. The method can handle dynamical behaviors in multiagent reinforcement learning and is also compatible with single-agent reinforcement learning.
Multiagent policy evaluation and seeking are long-standing challenges in developing theories for multiagent reinforcement learning (MARL), due to multidimensional learning goals, nonstationary environments, and scalability issues in the joint policy space. This article introduces two metrics grounded in a game-theoretic solution concept called the sink equilibrium, for the evaluation, ranking, and computation of policies in multiagent learning. We adopt strict best response dynamics (SBRDs) to model selfish behaviors at a meta-level for MARL. Our approach can deal with dynamical cyclical behaviors (unlike approaches based on Nash equilibria and Elo ratings), and is more compatible with single-agent reinforcement learning than α-rank, which relies on weakly better responses. We first consider settings where the difference between the largest and second largest equilibrium metric has a known lower bound. With this knowledge, we propose a class of perturbed SBRDs with the following property: only policies with the maximum metric are observed with nonzero probability, for a broad class of stochastic games with finite memory. We then consider settings where the lower bound for the difference is unknown. For this setting, we propose a class of perturbed SBRDs such that the metrics of the policies observed with nonzero probability differ from the optimum by no more than a given tolerance. The proposed perturbed SBRD addresses the scalability issue and opponent-induced nonstationarity by fixing the strategies of the other agents for the learning agent, and uses empirical game-theoretic analysis to estimate payoffs for each strategy profile obtained due to the perturbation.
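As a toy illustration of the idea behind SBRDs and sink equilibria (a sketch for intuition, not the authors' algorithm): under strict best response dynamics, one agent at a time switches to a strictly better action while the others' strategies stay fixed, and the dynamics settle into a sink of the best-response graph, which may be a cycle rather than a single profile. Matching pennies, which has no pure Nash equilibrium, is the standard example of a cyclic sink. All names and payoff tables below are illustrative.

```python
def sbrd_trajectory(payoffs, n_actions, start, steps):
    """Asynchronous strict best response dynamics.

    Agents take turns; the active agent switches to its best
    response only if it is a STRICT improvement, with all other
    agents' actions held fixed.

    payoffs   : list of dicts, payoffs[i][joint_action] = payoff to agent i
    n_actions : list with the number of actions per agent
    start     : initial joint action (tuple)
    steps     : number of single-agent update steps
    """
    profile = list(start)
    traj = [tuple(profile)]
    for t in range(steps):
        agent = t % len(payoffs)

        def u(a):
            joint = list(profile)
            joint[agent] = a
            return payoffs[agent][tuple(joint)]

        best = max(range(n_actions[agent]), key=u)
        if u(best) > u(profile[agent]):  # move only on strict improvement
            profile[agent] = best
        traj.append(tuple(profile))
    return traj

# Matching pennies: player 0 wants to match, player 1 to mismatch.
# No pure Nash equilibrium exists, so SBRD never settles on one
# profile; the 4-cycle over all joint actions is the sink equilibrium.
mp0 = {(0, 0): 1, (0, 1): -1, (1, 0): -1, (1, 1): 1}
mp1 = {k: -v for k, v in mp0.items()}
traj = sbrd_trajectory([mp0, mp1], [2, 2], (0, 0), 8)
print(traj)  # visits all four joint actions and repeats
```

In a coordination game, by contrast, the same dynamics reach a pure Nash equilibrium, which is a singleton sink. The article's perturbed SBRD can be thought of as adding occasional random deviations to dynamics like these, so that (under the stated conditions) only the sink with the largest metric is observed in the long run.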

