Journal
IEEE ACCESS
Volume 8, Pages 85204-85215
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2020.2991773
Keywords
Edge computing; computation offload; collaborative computing; reinforcement learning; DDPG
Funding
- National Key Research and Development Program of China [2019YFB2102302]
- Beijing Natural Science Foundation [4194085]
- Construction of Industrial Internet Platform Test Bed (New Mode)
As a paradigm for processing task requests, edge computing can reduce task delay and effectively alleviate the network congestion caused by the proliferation of Internet of Things (IoT) devices, compared with cloud computing. However, in practical network deployments, adjacent areas contain various edge autonomous subnets, so server load may become imbalanced across subnets during peak periods of task requests. In this paper, a deep reinforcement learning algorithm is proposed to solve the complex computation offloading problem for heterogeneous Edge Computing Server (ECS) collaborative computing. The problem is solved based on the real-time state of the network and the attributes of the task, using the Deep Deterministic Policy Gradient (DDPG) method, which combines Actor-Critic and Policy Gradient techniques, to make optimized computation offloading decisions. Considering multiple tasks, the heterogeneity of edge subnets, and the mobility of edge devices, the proposed algorithm learns the network environment and generates computation offloading decisions that minimize task delay. Simulation results show that the proposed DDPG-based algorithm is competitive with the Deep Q Network (DQN) and Asynchronous Advantage Actor-Critic (A3C) algorithms. Moreover, the optimal solutions are used to analyze the influence of edge network parameters on task delay.
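To make the offloading objective concrete, the sketch below models per-task delay as transmission plus queuing plus computation time and picks the ECS that minimizes it. This is a minimal greedy baseline under assumed parameters (the server names, uplink rates, CPU frequencies, and queuing delays are hypothetical); the paper itself learns this server-selection mapping with DDPG from the network state rather than computing it in closed form.

```python
from dataclasses import dataclass

@dataclass
class Server:
    """A candidate Edge Computing Server (all field values are assumptions)."""
    name: str
    uplink_mbps: float  # wireless uplink rate from the device to this ECS
    cpu_ghz: float      # effective CPU frequency allocated to the task
    queue_s: float      # current queuing delay at this ECS, in seconds

def task_delay(data_mbit: float, cycles_gcycles: float, s: Server) -> float:
    """Simplified delay model: transmission + queuing + computation."""
    transmission = data_mbit / s.uplink_mbps
    computation = cycles_gcycles / s.cpu_ghz
    return transmission + s.queue_s + computation

def offload_decision(data_mbit: float, cycles_gcycles: float,
                     servers: list[Server]) -> Server:
    """Greedy baseline: offload to the ECS with minimum predicted delay."""
    return min(servers, key=lambda s: task_delay(data_mbit, cycles_gcycles, s))

# Example: a 5 Mbit task needing 4 Gcycles, choosing between two ECSs.
ecs_a = Server("A", uplink_mbps=10.0, cpu_ghz=2.0, queue_s=0.5)
ecs_b = Server("B", uplink_mbps=20.0, cpu_ghz=4.0, queue_s=0.1)
best = offload_decision(5.0, 4.0, [ecs_a, ecs_b])  # picks "B" (1.35 s vs 3.0 s)
```

A greedy rule like this ignores how one device's choice changes queuing delay for others; that coupling across heterogeneous subnets is what motivates learning the policy with DDPG instead.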