4.7 Article

Deep Reinforcement Learning for Dynamic Multichannel Access in Wireless Networks

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCCN.2018.2809722

Keywords

Multichannel access; cognitive sensing; POMDP; DQN; reinforcement learning; online learning

Funding

  1. Defense Advanced Research Projects Agency (DARPA) [HR001117C0053]
  2. National Science Foundation (NSF), Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1423624]

Abstract

We consider a dynamic multichannel access problem, where multiple correlated channels follow an unknown joint Markov model and users select the channel on which to transmit data. The objective is to find a policy that maximizes the expected long-term number of successful transmissions. The problem is formulated as a partially observable Markov decision process (POMDP) with unknown system dynamics. To overcome the challenges of unknown dynamics and prohibitive computation, we apply the concept of reinforcement learning and implement a deep Q-network (DQN). We first study the optimal policy for fixed-pattern channel switching with known system dynamics and show through simulations that the DQN achieves the same optimal performance without knowing the system statistics. We then compare the performance of the DQN with a myopic policy and a Whittle-index-based heuristic through both more general simulations and a real data trace, and show that the DQN achieves near-optimal performance in more complex situations. Finally, we propose an adaptive DQN approach that can adapt its learning in time-varying scenarios.
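
To make the approach concrete, below is a minimal sketch, not the authors' implementation, of a DQN agent for channel selection: the agent's state is a short history of (chosen channel, transmission outcome) pairs, a small PyTorch network outputs one Q-value per channel, actions are chosen epsilon-greedily, and the network is trained from an experience-replay buffer. All names, network sizes, and hyperparameters (N_CHANNELS, HISTORY, the learning rate, and so on) are illustrative assumptions, the independent two-state channel simulator is a toy stand-in for the paper's correlated joint Markov model, and standard DQN refinements such as a separate target network are omitted for brevity.

import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Illustrative sizes (not taken from the paper).
N_CHANNELS = 16        # number of channels the user can sense/transmit on
HISTORY = 4            # past (action, observation) pairs that form the state
STATE_DIM = HISTORY * (N_CHANNELS + 1)   # one-hot action + binary outcome per step


class QNetwork(nn.Module):
    """Maps the observation history to one Q-value per channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_CHANNELS),
        )

    def forward(self, x):
        return self.net(x)


def step_channels(good, p_stay_good=0.9, p_stay_bad=0.8):
    """Evolve each channel as an independent two-state Markov chain
    (a toy stand-in for the paper's correlated joint Markov model)."""
    stay = np.where(good, p_stay_good, p_stay_bad)
    flip = np.random.rand(N_CHANNELS) > stay
    return np.logical_xor(good, flip)


def encode(history):
    """Flatten the last HISTORY (one-hot action, outcome) pairs into the DQN input."""
    return torch.tensor(np.concatenate(list(history)), dtype=torch.float32)


q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon = 0.9, 0.1

good = np.random.rand(N_CHANNELS) > 0.5   # true channel states (hidden from the agent)
history = deque([np.zeros(N_CHANNELS + 1)] * HISTORY, maxlen=HISTORY)

for t in range(5000):
    state = encode(history)
    # Epsilon-greedy channel selection over the Q-network's output.
    with torch.no_grad():
        if random.random() < epsilon:
            action = random.randrange(N_CHANNELS)
        else:
            action = int(q_net(state).argmax())

    good = step_channels(good)
    reward = 1.0 if good[action] else 0.0   # +1 for a successful transmission

    # The agent observes only its own action and the transmission outcome.
    obs = np.zeros(N_CHANNELS + 1)
    obs[action] = 1.0
    obs[-1] = reward
    history.append(obs)
    next_state = encode(history)
    replay.append((state, action, reward, next_state))

    # One gradient step on a random minibatch (no target network, for brevity).
    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        with torch.no_grad():
            target = r + gamma * q_net(s2).max(dim=1).values
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Reproducing the paper's comparisons against the myopic and Whittle-index policies, or the adaptive DQN for time-varying scenarios, would additionally require the correlated channel models, the real data trace, and the evaluation setup described in the paper.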


