Journal
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
Volume 13, Issue 4, Pages 2000-2011
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TWC.2014.022014.130840
Keywords
Radio access networks; base stations; sleeping mode; green communications; energy saving; reinforcement learning; transfer learning; actor-critic algorithm
Funding
- National Basic Research Program of China (973 Program) [2012CB316000]
- Key Grant Project of Chinese Ministry of Education [313053]
- Key Technologies R&D Program of China [2012BAH75F01]
- Investing for the Future Program of the French ANR [ANR-10-LABX-07-01]
Recent work has validated the possibility of improving energy efficiency in radio access networks (RANs) by dynamically switching some base stations (BSs) on and off. In this paper, we extend this line of research on BS switching operations, which should match traffic load variations. Instead of relying on dynamic traffic loads, which remain difficult to forecast precisely, we first formulate the traffic variations as a Markov decision process. Then, to minimize the energy consumption of RANs in a foresighted manner, we design a BS switching scheme based on a reinforcement learning framework. Furthermore, to speed up the ongoing learning process, we propose a transfer actor-critic algorithm (TACT), which exploits learning expertise transferred from historical periods or neighboring regions, and prove its convergence. Finally, we evaluate the proposed scheme through extensive simulations under various practical configurations and show that the TACT algorithm yields a performance jumpstart and achieves significant energy efficiency improvement at the expense of a tolerable degradation in delay performance.
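The idea of an actor-critic learner whose action selection blends a native policy with an "exotic" policy transferred from a neighboring region, with the transfer weight decaying over time, can be illustrated on a toy model. The sketch below is not the paper's TACT algorithm: the two-state traffic chain, the cost numbers, the decay schedule, and the hand-crafted expert policy are all illustrative assumptions.

```python
import numpy as np

def transfer_actor_critic(episodes=200, steps=100, seed=0):
    """Toy transfer actor-critic sketch: one BS chooses sleep/active
    under a two-state traffic Markov chain (all parameters illustrative)."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = 2, 2  # traffic: 0 = low, 1 = high; action: 0 = sleep, 1 = active

    def cost(s, a):
        # Energy cost plus a delay penalty for sleeping under high load
        # (numbers chosen for illustration only).
        energy = 1.0 if a == 1 else 0.2
        delay = 5.0 if (s == 1 and a == 0) else 0.0
        return energy + delay

    # Illustrative traffic-state transition matrix.
    P = np.array([[0.8, 0.2],
                  [0.3, 0.7]])

    theta = np.zeros((n_states, n_actions))  # actor preferences (softmax policy)
    V = np.zeros(n_states)                   # critic state values

    # Hypothetical "exotic" policy transferred from a neighboring region:
    # mostly sleep under low load, mostly stay active under high load.
    expert = np.array([[0.9, 0.1],
                       [0.1, 0.9]])

    alpha, beta, gamma = 0.1, 0.05, 0.9      # critic step, actor step, discount
    for ep in range(episodes):
        s = int(rng.integers(n_states))
        w = max(0.0, 1.0 - ep / 50)          # transfer weight decays to zero
        for _ in range(steps):
            native = np.exp(theta[s] - theta[s].max())
            native /= native.sum()
            pi = (1 - w) * native + w * expert[s]  # blended overall policy
            a = int(rng.choice(n_actions, p=pi))
            c = cost(s, a)
            s2 = int(rng.choice(n_states, p=P[s]))
            delta = -c + gamma * V[s2] - V[s]      # TD error on reward = -cost
            V[s] += alpha * delta
            # Softmax policy-gradient update on the native preferences.
            theta[s] += beta * delta * ((np.arange(n_actions) == a) - native)
            s = s2
    return theta, V
```

Run to the end of training, the greedy native policy should sleep under low load and stay active under high load; the decaying weight `w` gives the jumpstart early on while letting the native policy take over asymptotically.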