Article

Multi-Agent Deep Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks With Imperfect Channels

Journal

IEEE TRANSACTIONS ON MOBILE COMPUTING
Volume 21, Issue 10, Pages 3718-3730

Publisher

IEEE Computer Society
DOI: 10.1109/TMC.2021.3057826

Keywords

MAC protocol; heterogeneous wireless networks; imperfect channel; multi-agent; deep reinforcement learning

Funding

  1. University Grants Committee of the Hong Kong Special Administrative Region, China [14200417]

Abstract

This paper investigates a distributed deep reinforcement learning (DRL) based MAC protocol design for heterogeneous wireless networks with imperfect channels. The proposed feedback recovery mechanism and two-stage action selection mechanism effectively tackle the challenges of noisy feedback channels and of coherent decision making among multiple agents.
This paper investigates a futuristic spectrum sharing paradigm for heterogeneous wireless networks with imperfect channels. In the heterogeneous networks, multiple wireless networks adopt different medium access control (MAC) protocols to share a common wireless spectrum, and each network is unaware of the MACs of the others. This paper aims to design a distributed deep reinforcement learning (DRL) based MAC protocol for a particular network, whose objective is to achieve a global α-fairness objective. In the conventional DRL framework, the feedback/reward given to the agent is assumed to be always correctly received, so that the agent can optimize its strategy based on the received reward. In our wireless application, where the channels are noisy, the feedback/reward (i.e., the ACK packet) may be lost due to channel noise and interference. Without correct feedback, the agent (i.e., the network user) may fail to find a good solution. Moreover, in the distributed protocol, each agent makes decisions on its own. It is a challenge to guarantee that the multiple agents make coherent decisions and work together toward the same objective, particularly in the face of imperfect feedback channels. To tackle the challenge, we put forth (i) a feedback recovery mechanism to recover missing feedback information, and (ii) a two-stage action selection mechanism to aid coherent decision making and reduce transmission collisions among the agents. Extensive simulation results demonstrate the effectiveness of these two mechanisms. Last but not least, we believe that the feedback recovery mechanism and the two-stage action selection mechanism can also be used in general distributed multi-agent reinforcement learning problems in which feedback information on rewards can be corrupted.
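The abstract describes the two mechanisms only at a high level. The sketch below is a minimal, hypothetical illustration of how they might slot into a user's per-slot decision loop; it is not the authors' implementation. The class ToyAgent, the (slot, agent_id) tie-breaking rule in stage two, and the ACK-piggybacking recovery rule are all our own illustrative assumptions, and a tabular Q-function stands in for the paper's deep Q-network.

```python
import math
from collections import deque

def alpha_fair_utility(throughputs, alpha=1.0, eps=1e-9):
    """Global alpha-fairness objective over per-user throughputs x_i:
    sum_i log(x_i) for alpha = 1, else sum_i x_i^(1-alpha) / (1-alpha)."""
    if abs(alpha - 1.0) < 1e-12:
        return sum(math.log(x + eps) for x in throughputs)
    return sum((x + eps) ** (1 - alpha) / (1 - alpha) for x in throughputs)

class ToyAgent:
    """One DRL network user; a tabular Q-function stands in for the DQN."""

    WAIT, TRANSMIT = 0, 1

    def __init__(self, agent_id, n_agents, lr=0.1, gamma=0.9):
        self.agent_id, self.n_agents = agent_id, n_agents
        self.lr, self.gamma = lr, gamma
        self.q = {}              # state -> [Q(WAIT), Q(TRANSMIT)]
        self.pending = deque()   # (state, action, slot) awaiting feedback

    def q_values(self, state):
        return self.q.setdefault(state, [0.0, 0.0])

    def act(self, state, slot):
        """Two-stage action selection (illustrative): stage 1 ranks actions
        by local Q-values; stage 2 resolves near-ties with a shared
        deterministic rule on (slot, agent_id), so agents that observe the
        same history do not all decide to transmit and collide."""
        qs = self.q_values(state)
        if abs(qs[self.WAIT] - qs[self.TRANSMIT]) < 1e-6:        # stage 2
            action = (self.TRANSMIT if slot % self.n_agents == self.agent_id
                      else self.WAIT)
        else:                                                    # stage 1
            action = max((self.WAIT, self.TRANSMIT), key=lambda a: qs[a])
        self.pending.append((state, action, slot))
        return action

    def on_feedback(self, rewards_by_slot, next_state):
        """Feedback recovery (illustrative): each ACK that does get through
        piggybacks the rewards of the last few slots, so a pending
        experience whose own ACK was lost to channel noise can still be
        resolved from a later, successfully received ACK."""
        while self.pending and self.pending[0][2] in rewards_by_slot:
            state, action, slot = self.pending.popleft()
            self._update(state, action, rewards_by_slot[slot], next_state)

    def _update(self, state, action, reward, next_state):
        # One-step Q-learning update; the paper trains a deep Q-network
        # toward the alpha-fairness objective rather than this toy target.
        qs, nxt = self.q_values(state), self.q_values(next_state)
        qs[action] += self.lr * (reward + self.gamma * max(nxt) - qs[action])
```

The alpha_fair_utility helper spells out the α-fairness objective the abstract refers to: α = 1 reduces to proportional fairness (sum of log-throughputs), while α → ∞ approaches max-min fairness.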
