Article

Decentralized Federated Reinforcement Learning for User-Centric Dynamic TFDD Control

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JSTSP.2022.3221671

Keywords

Heuristic algorithms; Resource management; Quality of service; Time-frequency analysis; Interference; Fading channels; Dynamic scheduling; Dynamic TFDD; decentralized partially observable Markov decision process; federated learning; multi-agent reinforcement learning; resource allocation

Abstract
The explosive growth of dynamic and heterogeneous data traffic poses great challenges for 5G and beyond mobile networks. To enhance network capacity and reliability, we propose a learning-based dynamic time-frequency division duplexing (D-TFDD) scheme that adaptively allocates the uplink and downlink time-frequency resources of base stations (BSs) to meet asymmetric and heterogeneous traffic demands while alleviating inter-cell interference. We formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP) that maximizes the long-term expected sum rate under per-user packet-dropping-ratio constraints. To jointly optimize the global resources in a decentralized manner, we propose a federated reinforcement learning (RL) algorithm named the federated Wolpertinger deep deterministic policy gradient (FWDDPG) algorithm. Each BS decides its local time-frequency configuration through an RL algorithm, and global training is achieved by exchanging local RL models with neighboring BSs under a decentralized federated learning framework. Specifically, to deal with the large-scale discrete action space of each BS, we adopt a DDPG-based algorithm to generate actions in a continuous space, and then apply the Wolpertinger policy to reduce the error incurred when mapping from the continuous action space back to the discrete action space. Simulation results demonstrate the superiority of the proposed algorithm over benchmark algorithms with respect to system sum rate.
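The two mechanisms the abstract describes can be sketched in miniature: Wolpertinger-style action selection (a continuous proto-action from the actor is mapped to its k nearest discrete actions, then refined by the critic), and a decentralized federated step in which a BS averages its model parameters with those of its neighbors. This is a minimal illustrative sketch, not the paper's implementation; all names, the toy critic, and the example numbers are assumptions.

```python
import numpy as np

def wolpertinger_select(proto_action, discrete_actions, q_value, k=3):
    """Wolpertinger-style mapping: continuous proto-action -> discrete action.

    Finds the k nearest discrete actions (Euclidean distance), then refines
    the choice with the critic: the candidate with the highest Q-value wins.
    """
    dists = np.linalg.norm(discrete_actions - proto_action, axis=1)
    candidates = discrete_actions[np.argsort(dists)[:k]]
    return max(candidates, key=q_value)

def neighbor_average(local_params, neighbor_params):
    """Decentralized federated step: average a BS's model parameters with
    those received from its neighboring BSs (no central server needed)."""
    return np.mean(np.stack([local_params] + list(neighbor_params)), axis=0)

# --- toy demonstration (hypothetical numbers) ---
actions = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # stand-in discrete UL/DL configs
proto = np.array([0.9, 0.2])                                   # actor's continuous output
toy_critic = lambda a: float(a @ np.array([1.0, -0.5]))        # stand-in Q-function

chosen = wolpertinger_select(proto, actions, toy_critic, k=3)   # -> [1., 0.]
averaged = neighbor_average(np.array([1., 1.]),
                            [np.array([3., 3.]), np.array([5., 5.])])  # -> [3., 3.]
```

In the actual FWDDPG algorithm each BS would hold neural actor and critic networks and exchange their weights with neighbors; the averaging step above only illustrates the aggregation rule on flat parameter vectors.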

Authors

