Proceedings Paper

Deep Reinforcement Learning Based Task Scheduling in Edge Computing Networks

Publisher

IEEE
DOI: 10.1109/iccc49849.2020.9238937

Keywords

Edge computing; Task scheduling; Deep reinforcement learning; Latency optimization

Funding

  1. National Natural Science Foundation of China [61872044, 61502040]
  2. Beijing Municipal Program for Top Talent
  3. Beijing Municipal Program for Top Talent Cultivation [CITTCD201804055]
  4. Qinxin Talent Program of Beijing Information Science and Technology University

Abstract

With the rapid development of 5G mobile network services, massive amounts of data are generated at the network edge. Cloud computing services suffer from long latency and large bandwidth requirements. Edge computing has become a key technology for reducing service delay and traffic load in 5G mobile networks. However, how to intelligently schedule tasks in the edge computing environment remains a critical challenge. In this paper, we define the optimization problem of minimizing the delay of task scheduling in a cloud-edge network architecture. The problem is proved to be NP-hard and modeled as a Markov decision process. We design a cloud-edge collaborative scheduling algorithm based on asynchronous advantage actor-critic (CECS-A3C). Simulation results show that the proposed algorithm converges quickly and reduces task processing time by an average of 28.3% and 46.1% compared with the existing DQN and RL-G algorithms, respectively, while remaining scalable.
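The core idea in the abstract, an actor-critic agent that decides whether each task runs at the edge or is offloaded to the cloud so as to minimize delay, can be sketched as a toy one-step actor-critic with linear function approximation. This is not the paper's CECS-A3C algorithm (which is asynchronous with multiple parallel workers); the processing rates, network delay, state features, and learning rates below are invented purely for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical environment constants (not from the paper): processing
# rates of the edge server and the cloud, and the cloud's transfer delay.
EDGE_RATE, CLOUD_RATE = 2.0, 8.0
NET_DELAY = 1.5

def latency(task, queue, action):
    """Completion delay for one task: action 0 = edge, 1 = cloud."""
    if action == 0:
        return task / EDGE_RATE + 0.5 * queue  # edge: compute + queueing delay
    return task / CLOUD_RATE + NET_DELAY       # cloud: fast compute + transfer delay

def features(task, queue):
    return [1.0, task, queue]                  # bias, task size, edge backlog

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

# Linear actor (one preference vector per action) and linear critic.
theta = [[0.0] * 3 for _ in range(2)]
w = [0.0] * 3
ALPHA_ACTOR, ALPHA_CRITIC = 0.01, 0.01

def policy(x):
    return softmax([sum(t * xi for t, xi in zip(theta[a], x)) for a in range(2)])

def value(x):
    return sum(wi * xi for wi, xi in zip(w, x))

for episode in range(5000):
    task = random.uniform(0.5, 4.0)    # random task size
    queue = random.uniform(0.0, 5.0)   # random edge queue backlog
    x = features(task, queue)
    probs = policy(x)
    a = 0 if random.random() < probs[0] else 1
    reward = -latency(task, queue, a)  # minimizing delay = maximizing -delay
    delta = reward - value(x)          # advantage against the critic baseline
    for i in range(3):
        w[i] += ALPHA_CRITIC * delta * x[i]
        for b in range(2):
            grad = (1.0 if b == a else 0.0) - probs[b]  # softmax log-prob gradient
            theta[b][i] += ALPHA_ACTOR * delta * grad * x[i]

# A large task stuck behind a long edge queue: the learned policy shifts
# probability toward offloading to the cloud in this region of the state space.
probs = policy(features(3.5, 4.0))
print(f"P(edge)={probs[0]:.2f}  P(cloud)={probs[1]:.2f}")
```

The advantage signal `reward - value(x)` is the same quantity A3C's parallel workers compute with a neural critic; here a single synchronous worker and linear features keep the sketch self-contained.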

