Article

Multi-Agent Deep Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks With Imperfect Channels

Journal

IEEE TRANSACTIONS ON MOBILE COMPUTING
Volume 21, Issue 10, Pages 3718-3730

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TMC.2021.3057826

Keywords

MAC protocol; heterogeneous wireless networks; imperfect channel; multi-agent; deep reinforcement learning

Funding

  1. University Grants Committee of the Hong Kong Special Administrative Region, China [14200417]

Abstract

This paper investigates a distributed deep reinforcement learning (DRL) based MAC protocol design for heterogeneous wireless networks with imperfect channels. The proposed feedback recovery mechanism and two-stage action selection mechanism effectively tackle the challenges of noisy channels and coherent decision making among multiple agents.
This paper investigates a futuristic spectrum-sharing paradigm for heterogeneous wireless networks with imperfect channels. In these heterogeneous networks, multiple wireless networks adopt different medium access control (MAC) protocols to share a common wireless spectrum, and each network is unaware of the MACs of the others. This paper aims to design a distributed deep reinforcement learning (DRL) based MAC protocol for a particular network, whose objective is to achieve a global α-fairness objective. In the conventional DRL framework, the feedback/reward given to the agent is always correctly received, so that the agent can optimize its strategy based on the received reward. In our wireless application, where the channels are noisy, the feedback/reward (i.e., the ACK packet) may be lost due to channel noise and interference. Without correct feedback, the agent (i.e., the network user) may fail to find a good solution. Moreover, in the distributed protocol, each agent makes decisions on its own. It is a challenge to guarantee that the multiple agents will make coherent decisions and work together to achieve the same objective, particularly in the face of imperfect feedback channels. To tackle this challenge, we put forth (i) a feedback recovery mechanism to recover missing feedback information, and (ii) a two-stage action selection mechanism to aid coherent decision making and reduce transmission collisions among the agents. Extensive simulation results demonstrate the effectiveness of these two mechanisms. Last but not least, we believe that the feedback recovery mechanism and the two-stage action selection mechanism can also be used in general distributed multi-agent reinforcement learning problems in which feedback information on rewards can be corrupted.
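
The abstract invokes a global α-fairness objective without restating it. As a reference point only (the paper may use a variant), the standard α-fair utility of Mo and Walrand over per-user throughputs x_i is:

% Standard alpha-fair utility (Mo & Walrand, 2000); the paper's exact objective may differ.
% x_i: long-term throughput of user i; alpha >= 0 selects the fairness notion
% (alpha = 0: sum throughput; alpha = 1: proportional fairness; alpha -> infinity: max-min).
\[
  U_\alpha(x) = \sum_i f_\alpha(x_i),
  \qquad
  f_\alpha(x) =
  \begin{cases}
    \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1, \\
    \log x, & \alpha = 1.
  \end{cases}
\]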
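
The record gives no detail on how the feedback recovery mechanism works internally. The sketch below is a hypothetical illustration of the general idea in plain Python, not the paper's design (the class name, the binary ACK-to-reward mapping, and the timeout policy are all assumptions): a transition whose ACK is lost is held as "reward unknown" and only committed to the replay buffer once its feedback is recovered, rather than being silently scored as a zero reward.

import collections
import random

# One experience tuple for DRL training.
Transition = collections.namedtuple(
    "Transition", ["state", "action", "reward", "next_state"])

class FeedbackAwareBuffer:
    """Replay buffer that tolerates lost reward feedback (e.g., a missing ACK).

    Transitions whose reward is still unknown are parked in `pending`;
    they are committed to the replay buffer only once feedback arrives,
    or dropped after `timeout` slots. Purely illustrative: the paper's
    actual feedback recovery mechanism may work differently.
    """

    def __init__(self, capacity=10_000, timeout=50):
        self.buffer = collections.deque(maxlen=capacity)
        self.pending = {}  # time slot -> (state, action, next_state)
        self.timeout = timeout

    def observe(self, t, state, action, next_state, ack):
        """Record the transition of slot t; ack is True/False, or None if lost."""
        if ack is not None:  # feedback received on time
            reward = 1.0 if ack else 0.0
            self.buffer.append(Transition(state, action, reward, next_state))
        else:  # ACK lost: park the transition instead of assuming reward 0
            self.pending[t] = (state, action, next_state)

    def recover(self, t, ack):
        """Commit late feedback for slot t (e.g., recovered from a later packet)."""
        if t in self.pending:
            state, action, next_state = self.pending.pop(t)
            reward = 1.0 if ack else 0.0
            self.buffer.append(Transition(state, action, reward, next_state))

    def expire(self, now):
        """Drop pending transitions whose feedback never arrived."""
        for t in [k for k in self.pending if now - k > self.timeout]:
            del self.pending[t]

    def sample(self, batch_size):
        """Uniformly sample committed transitions for a training step."""
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

The point the sketch tries to capture is that a lost ACK is treated as "reward unknown" rather than "no reward", so channel noise does not bias the learned values toward pessimism while feedback is being recovered.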
