Journal
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING
Volume 7, Issue 4, Pages 2416-2428
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TNSE.2020.2978856
Keywords
Servers; Resource management; Wireless fidelity; Task analysis; Computational modeling; Quality of service; Radio spectrum management; Vehicular networks; multi-access edge computing; multi-dimensional resource management; deep reinforcement learning; DDPG
Funding
- Natural Sciences and Engineering Research Council (NSERC) of Canada
In this paper, we study joint allocation of the spectrum, computing, and storing resources in a multi-access edge computing (MEC)-based vehicular network. To support different vehicular applications, we consider two typical MEC architectures and formulate corresponding multi-dimensional resource optimization problems, which usually incur high computational complexity and long solution times. Thus, we exploit reinforcement learning (RL) to transform the two formulated problems and solve them by leveraging the deep deterministic policy gradient (DDPG) and hierarchical learning architectures. Via offline training, the network dynamics can be learned automatically and appropriate resource allocation decisions can be obtained rapidly to satisfy the quality-of-service (QoS) requirements of vehicular applications. Simulation results show that the proposed resource management schemes achieve high delay/QoS satisfaction ratios.
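The abstract describes learning a continuous multi-dimensional resource allocation (spectrum, computing, storing) with DDPG. As a rough illustration of the DDPG update cycle only — not the authors' implementation — the following is a minimal sketch using linear function approximators in place of deep networks; the state features, dimensions, and learning rates are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4    # hypothetical features: channel gain, task size, queue length, vehicle speed
ACTION_DIM = 3   # fractions of spectrum, computing, and storing resources allocated

# Linear approximators stand in for the deep actor/critic networks of DDPG.
actor_W = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))
critic_w = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM,))
target_actor_W = actor_W.copy()      # target networks stabilize training
target_critic_w = critic_w.copy()

TAU = 0.01     # soft target-update rate (assumed value)
GAMMA = 0.95   # discount factor (assumed value)
LR = 1e-3      # learning rate (assumed value)

def act(W, s):
    """Deterministic policy: map state to resource-allocation fractions in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-W @ s))  # sigmoid keeps allocations bounded

def q_value(w, s, a):
    """Linear critic Q(s, a) on the concatenated state-action vector."""
    return w @ np.concatenate([s, a])

def ddpg_step(s, a, r, s_next):
    """One DDPG update from a single transition (s, a, r, s_next)."""
    global actor_W, critic_w, target_actor_W, target_critic_w
    # Critic update: TD target computed with the target networks.
    a_next = act(target_actor_W, s_next)
    td_target = r + GAMMA * q_value(target_critic_w, s_next, a_next)
    td_error = td_target - q_value(critic_w, s, a)
    critic_w += LR * td_error * np.concatenate([s, a])
    # Actor update: deterministic policy gradient, dQ/da * da/dW.
    a_pred = act(actor_W, s)
    dq_da = critic_w[STATE_DIM:]        # for a linear critic, dQ/da is just its action weights
    da_dz = a_pred * (1.0 - a_pred)     # sigmoid derivative
    actor_W += LR * np.outer(dq_da * da_dz, s)
    # Soft updates: targets slowly track the online parameters.
    target_actor_W += TAU * (actor_W - target_actor_W)
    target_critic_w += TAU * (critic_w - target_critic_w)
```

In a full DDPG implementation the transitions would come from a replay buffer with exploration noise added to the actions; this sketch only shows the per-transition parameter updates that the offline training loop would repeat.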