Proceedings Paper

A Reinforcement Learning Aided Decoupled RAN Slicing Framework for Cellular V2X

Publisher

IEEE
DOI: 10.1109/GLOBECOM42002.2020.9348084

Keywords

RAN Slicing; Decoupled Access; Reinforcement Learning; V2X

Funding

  1. National Natural Science Foundation of China [61871221]
  2. Natural Science Foundation of Jiangsu Province Youth Project [BK20180329]
  3. Innovation and Entrepreneurship of Jiangsu Province High-level Talent Program
  4. Summit of the Six Top Talents Program of Jiangsu Province
  5. Natural Sciences and Engineering Research Council of Canada (NSERC)


Uplink (UL) and Downlink (DL) decoupled cellular access through flexible cell association has attracted considerable attention due to benefits such as higher network throughput, better load balancing, and lower energy consumption. In this paper, we introduce a novel reinforcement learning aided decoupled RAN access framework for Cellular Vehicle-to-Everything (V2X) communications and propose a two-step RAN slicing approach that dynamically allocates radio resources to V2X services at different time granularities. We derive an innovative QoS metric for the V2V cellular mode that accounts for the bidirectional nature of V2V cellular communications. Moreover, we maximize the sum utility under the proposed QoS metric by leveraging a Deep Deterministic Policy Gradient (DDPG) enabled RAN slicing method. Simulation results demonstrate the advantages of the proposed reinforcement learning aided decoupled RAN slicing framework in achieving load balancing, maximizing total network utility, and satisfying the QoS metric of Cellular V2X communications.
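The abstract names a DDPG-enabled RAN slicing method but does not spell out its implementation. The following is a minimal, illustrative DDPG-style actor-critic sketch that maps an observed network state to per-slice resource fractions; the state dimension, number of slices, network sizes, and update routine are assumptions made for illustration and are not the authors' code.

```python
# Minimal DDPG-style sketch for allocating resource fractions to V2X slices.
# All dimensions, the reward definition, and network sizes are illustrative
# assumptions, not the implementation described in the paper.
import torch
import torch.nn as nn

STATE_DIM = 8      # e.g., per-slice queue lengths and channel qualities (assumed)
NUM_SLICES = 3     # e.g., V2N uplink, V2N downlink, V2V cellular mode (assumed)

class Actor(nn.Module):
    """Maps the observed network state to per-slice resource fractions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_SLICES),
        )
    def forward(self, state):
        # Softmax keeps the allocation non-negative and summing to one.
        return torch.softmax(self.net(state), dim=-1)

class Critic(nn.Module):
    """Estimates the expected sum utility (Q-value) of a state-action pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_SLICES, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def update(batch):
    """One DDPG update from a batch of (state, action, reward, next_state)."""
    s, a, r, s_next = batch
    with torch.no_grad():
        q_target = r + GAMMA * critic_tgt(s_next, actor_tgt(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

In a setup like this, the reward fed into `update` could correspond to the per-step sum utility of the slices under the paper's QoS metric, with the softmax output interpreted as the fraction of radio resources granted to each slice at the coarser slicing step; this is a sketch of the general technique rather than the authors' specific formulation.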

