Article

alpha-Fairness-Maximizing User Association in Energy-Constrained Small Cell Networks

Journal

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
Volume 21, Issue 9, Pages 7443-7459

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TWC.2022.3158694

Keywords

Deep reinforcement learning; alpha-fairness; user association; power control; renewable energy source

Funding

  1. Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) [2018-0-00958]
  2. Ministry of Science and ICT (MSIT), South Korea, under the Information Technology Research Center (ITRC) Support Program [IITP-2021-0-02048]
  3. Samsung Research Funding and Incubation Center of Samsung Electronics [SRFC-TD2003-01]

This paper proposes a novel user association, resource allocation, and dynamic power control scheme that maximizes alpha-fairness in renewable energy source-assisted small cell networks. By combining a Lagrangian duality-based algorithm with a deep reinforcement learning-based dynamic power control scheme, the proposed approach achieves a substantial reduction in computation time together with improved fairness metrics.
Renewable energy source (RES)-powered base stations have attracted tremendous research interest in recent years because they can expand network coverage without building a power grid. This paper proposes a novel user association (UA), resource allocation (RA), and dynamic power control (PC) scheme to maximize alpha-fairness in RES-assisted small cell networks. Alpha-fairness is a general notion that flexibly adjusts the balance among throughput, proportional fairness, and max-min fairness according to the parameter alpha. Nevertheless, no existing study has designed UA, RA, and PC to maximize alpha-fairness, owing to the NP-hardness of the problem. Furthermore, fixed-policy PC designs cannot adapt to the time-varying environment (e.g., energy harvesting processes and wireless channels) of RES-assisted networks. We first provide a Lagrangian duality-based algorithm that solves the UA and RA problem for a fixed PC. Next, we propose a dynamic PC scheme based on deep reinforcement learning (DRL) that selects the best PC under the time-varying environment. However, because the UA and RA algorithm executed at each step of the dynamic PC requires a long computation time, we accelerate the UA and RA computation with DRL as well. Inspired by the Lagrangian duality, we design a DRL-based UA and RA scheme that operates on a low-dimensional continuous variable obtained by relaxing the UA variable, whose cardinality grows exponentially with the numbers of base stations and users. Simulation results show that the proposed scheme achieves a computation time roughly 100 times shorter than that of the optimization-based schemes, requiring the evaluation of only two neural networks. In particular, although proportional fairness maximization has been studied extensively, the proposed scheme outperforms the optimization-based schemes in the throughput, proportional fairness, and max-min fairness metrics.
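For reference, the alpha-fair utility family underlying this objective is the standard textbook definition (written here in generic notation, which need not match the paper's):

    U_\alpha(x) =
    \begin{cases}
    \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1, \\
    \log x, & \alpha = 1,
    \end{cases}

so that alpha = 0 recovers sum-throughput maximization, alpha = 1 yields proportional fairness, and alpha -> infinity approaches max-min fairness.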
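To illustrate the Lagrangian duality idea in its simplest form, the sketch below solves a generic alpha-fair network utility maximization problem with linear capacity constraints by subgradient ascent on the dual. It is a minimal illustration of the general technique, not the paper's UA/RA algorithm; the constraint matrix A, the capacities c, and the constant step size are assumptions made for this example.

    import numpy as np

    def alpha_fair_dual_subgradient(A, c, alpha=1.0, step=0.05, iters=5000):
        """Maximize sum_u U_alpha(r_u) subject to A @ r <= c, r >= 0,
        via subgradient ascent on the Lagrangian dual (generic NUM sketch)."""
        m, n = A.shape          # m capacity constraints, n user rates
        mu = np.ones(m)         # nonnegative dual prices, one per constraint
        r = np.zeros(n)
        for _ in range(iters):
            p = np.maximum(A.T @ mu, 1e-9)   # aggregate price seen by each user
            # Stationarity of the Lagrangian: U_alpha'(r) = r**(-alpha) = p,
            # so each user's maximizer is r = p**(-1/alpha).
            r = p ** (-1.0 / alpha)
            # Dual subgradient = constraint violation; project prices onto mu >= 0.
            mu = np.maximum(mu + step * (A @ r - c), 0.0)
        return r, mu

    # Toy example: users 0 and 1 share one unit of capacity, user 2 has its own.
    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    c = np.array([1.0, 1.0])
    r, mu = alpha_fair_dual_subgradient(A, c, alpha=1.0)
    print(np.round(r, 2))   # approx [0.5, 0.5, 1.0]: the proportional-fair split

A constant step size only reaches a neighborhood of the optimum; a diminishing step (e.g., step / sqrt(t)) would give exact convergence.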
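To make the DRL-based dynamic power control concrete, here is a deliberately tiny sketch: a REINFORCE-style Gaussian policy choosing transmit power for a single energy-harvesting link with a random channel and a finite battery. Everything here (the battery dynamics, the reward log(1 + gain * power), the linear-sigmoid policy) is a hypothetical toy, far simpler than the paper's DRL design; clipping the sampled power to the battery level is a crude feasibility fix that slightly biases the gradient.

    import numpy as np

    rng = np.random.default_rng(0)
    B_MAX, P_MAX, SIGMA, LR = 5.0, 2.0, 0.3, 0.01
    w = np.zeros(3)                        # weights of a linear-sigmoid policy

    def features(battery, gain):
        return np.array([battery / B_MAX, gain, 1.0])

    def mean_power(s):
        z = 1.0 / (1.0 + np.exp(-w @ s))   # sigmoid keeps the mean in (0, P_MAX)
        return P_MAX * z

    for episode in range(2000):
        battery, grads, rewards = B_MAX / 2, [], []
        for t in range(20):
            gain = rng.exponential(1.0)            # Rayleigh-fading power gain
            s = features(battery, gain)
            mu_p = mean_power(s)
            p = np.clip(rng.normal(mu_p, SIGMA), 0.0, min(P_MAX, battery))
            rewards.append(np.log1p(gain * p))     # achievable-rate reward
            # grad of log N(p; mu_p, SIGMA) wrt w, through the sigmoid mean
            sig = mu_p / P_MAX
            grads.append((p - mu_p) / SIGMA**2 * P_MAX * sig * (1 - sig) * s)
            harvest = rng.uniform(0.0, 1.0)        # random energy arrival
            battery = min(battery - p + harvest, B_MAX)
        G = 0.0
        for g_t, r_t in zip(reversed(grads), reversed(rewards)):
            G = r_t + 0.95 * G                     # discounted return-to-go
            w += LR * G * g_t                      # REINFORCE update

The point of the sketch is the structure, not the numbers: the state (battery, channel) is time-varying, and the policy gradient adapts the transmit power to it, which is exactly what a fixed PC policy cannot do.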
