Article

Deep Reinforcement Learning Based Resource Management for Multi-Access Edge Computing in Vehicular Networks

Journal

IEEE Transactions on Network Science and Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TNSE.2020.2978856

Keywords

Servers; Resource management; Wireless fidelity; Task analysis; Computational modeling; Quality of service; Radio spectrum management; Vehicular networks; multi-access edge computing; multi-dimensional resource management; deep reinforcement learning; DDPG

Funding

  1. Natural Sciences and Engineering Research Council (NSERC) of Canada


In this paper, we study the joint allocation of spectrum, computing, and storage resources in a multi-access edge computing (MEC)-based vehicular network. To support different vehicular applications, we consider two typical MEC architectures and formulate the corresponding multi-dimensional resource optimization problems, which typically incur high computational complexity and prohibitively long solution times. We therefore exploit reinforcement learning (RL) to transform the two formulated problems and solve them with the deep deterministic policy gradient (DDPG) algorithm and hierarchical learning architectures. Through offline training, the network dynamics are learned automatically, and appropriate resource allocation decisions can be obtained rapidly to satisfy the quality-of-service (QoS) requirements of vehicular applications. Simulation results show that the proposed resource management schemes achieve high delay/QoS satisfaction ratios.
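The core idea of a DDPG-style approach to resource allocation can be illustrated with a minimal sketch. The following toy example is not the paper's method: it assumes a hypothetical one-dimensional setting where the state `s` is a normalized load level in [0, 1], the action `a` is the fraction of a single resource (e.g. spectrum) allocated, and the reward `-(a - s)**2` peaks when the allocation tracks the load. Episodes are single-step, so the critic regresses to the immediate reward rather than a bootstrapped TD target; the full DDPG algorithm additionally uses neural networks, replay buffers, and target networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(s, a):
    # Hypothetical reward: best when allocation a matches load s.
    return -(a - s) ** 2

def features(s, a):
    # Quadratic critic features, rich enough to represent the true reward.
    return np.array([1.0, s, a, s * s, a * a, s * a])

w_c = np.zeros(6)      # critic weights: Q(s, a) = w_c @ features(s, a)
w_a, b_a = 0.0, 0.0    # actor parameters: a = sigmoid(w_a * s + b_a)

def actor(s):
    return 1.0 / (1.0 + np.exp(-(w_a * s + b_a)))

alpha_c, alpha_a, sigma = 0.05, 0.02, 0.3

for _ in range(30000):
    s = rng.uniform(0.0, 1.0)
    # Deterministic policy plus exploration noise, clipped to valid actions.
    a = np.clip(actor(s) + sigma * rng.normal(), 0.0, 1.0)
    r = reward(s, a)

    # Critic: one SGD step toward the observed reward (single-step episodes,
    # so the target is just r; full DDPG bootstraps with target networks).
    phi = features(s, a)
    td_err = r - w_c @ phi
    w_c += alpha_c * td_err * phi

    # Actor: deterministic policy gradient, chain rule dQ/da * da/dtheta.
    a_pi = actor(s)
    dq_da = w_c[2] + 2.0 * w_c[4] * a_pi + w_c[5] * s
    da_dz = a_pi * (1.0 - a_pi)          # sigmoid derivative
    w_a += alpha_a * dq_da * da_dz * s
    b_a += alpha_a * dq_da * da_dz
```

After training, the actor allocates roughly in proportion to the load (e.g. `actor(0.5)` is close to 0.5), mirroring the paper's point that, once trained offline, the learned policy produces allocation decisions with a single fast forward pass rather than by re-solving an optimization problem online.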

Authors

