4.4 Article

Distributed Deep Reinforcement Learning for Functional Split Control in Energy Harvesting Virtualized Small Cells

Journal

IEEE Transactions on Sustainable Computing
Volume 6, Issue 4, Pages 626-640

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TSUSC.2020.3025139

Keywords

Batteries; Heuristic algorithms; Learning (artificial intelligence); Switches; Energy consumption; Energy harvesting; Power demand; Deep reinforcement learning; edge computing; flexible functional splits; MEC; multi-agent reinforcement learning; virtualized small cells

Funding

  1. European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant [675891]
  2. Spanish MINECO Grant [TEC2017-88373-R]


Abstract
To meet the growing demand for enhanced network capacity, mobile network operators (MNOs) are deploying dense infrastructures of small cells. This, in turn, increases the power consumption of mobile networks, thus impacting the environment. As a result, we have seen a recent trend of powering mobile networks with harvested ambient energy to achieve both environmental and cost benefits. In this paper, we consider a network of virtualized small cells (vSCs) powered by energy harvesters and equipped with rechargeable batteries, which can opportunistically offload baseband (BB) functions to a grid-connected edge server depending on their energy availability. We formulate the corresponding grid energy and traffic drop rate minimization problem, and propose a distributed deep reinforcement learning (DDRL) solution. Coordination among vSCs is enabled via the exchange of battery state information. The evaluation of the network performance in terms of grid energy consumption and traffic drop rate confirms that enabling coordination among the vSCs via knowledge exchange achieves performance close to the optimum. Numerical results also confirm that the proposed DDRL solution provides higher network performance, better adaptation to the changing environment, and higher cost savings with respect to a tabular multi-agent reinforcement learning (MRL) solution used as a benchmark.
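The abstract describes per-cell agents that choose between executing baseband functions locally (draining the battery) and offloading them to a grid-connected edge server, with coordination achieved by exchanging battery state information. The following is a minimal illustrative sketch of such a per-vSC deep Q-learning agent in NumPy; the state layout, action set, network size, and training target are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: all dimensions, names, and targets below are
# assumptions, not the paper's actual formulation. Coordination via
# battery-state exchange is modeled by feeding neighbors' battery levels
# into the agent's own state vector.

rng = np.random.default_rng(0)

N_NEIGHBORS = 2                # vSCs that share their battery state
STATE_DIM = 2 + N_NEIGHBORS    # [own battery, traffic load, neighbor batteries]
N_ACTIONS = 2                  # 0: run baseband locally, 1: offload to edge server
HIDDEN = 16


class TinyQNet:
    """Single-hidden-layer Q-network updated by gradient steps on the TD error."""

    def __init__(self):
        self.w1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
        self.b1 = np.zeros(HIDDEN)
        self.w2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))
        self.b2 = np.zeros(N_ACTIONS)

    def forward(self, s):
        h = np.maximum(0.0, s @ self.w1 + self.b1)  # ReLU hidden activations
        return h, h @ self.w2 + self.b2             # Q-value per action

    def act(self, s, eps=0.1):
        """Epsilon-greedy choice over the two functional-split options."""
        if rng.random() < eps:
            return int(rng.integers(N_ACTIONS))
        _, q = self.forward(s)
        return int(np.argmax(q))

    def update(self, s, a, target, lr=0.01):
        """One gradient step on 0.5 * (Q(s, a) - target)^2."""
        h, q = self.forward(s)
        err = q[a] - target
        onehot = np.eye(N_ACTIONS)[a]
        gh = err * self.w2[:, a] * (h > 0)          # backprop through ReLU
        self.w2 -= lr * err * np.outer(h, onehot)
        self.b2 -= lr * err * onehot
        self.w1 -= lr * np.outer(s, gh)
        self.b1 -= lr * gh


# Demo: repeated updates pull Q(s, a) toward a (toy) target value.
agent = TinyQNet()
state = np.array([0.2, 0.7, 0.5, 0.9])  # low own battery, moderate traffic
_, q_before = agent.forward(state)
for _ in range(50):
    agent.update(state, a=1, target=-0.2)  # toy target: small grid-energy cost
_, q_after = agent.forward(state)
print(q_before[1], "->", q_after[1])
```

In the distributed setting of the paper, each vSC would run one such agent; the tabular MRL benchmark mentioned in the abstract would correspond to replacing the network with a per-state Q-table, which adapts more slowly to a changing energy and traffic environment.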

