Article

Task offloading method of edge computing in internet of vehicles based on deep reinforcement learning

Publisher

SPRINGER
DOI: 10.1007/s10586-021-03532-9

Keywords

Edge computing; Task offloading; Deep reinforcement learning; Internet of vehicles

Funding

  1. Graduate Students' Innovative Plan Program [2020YJSB079]
  2. National Natural Science Foundation of China [61571328]
  3. Tianjin Key Natural Science Foundation [13JCZDJC34600, 18JCZDJC96800, 18JCYBJC19300]
  4. Major projects of science and technology in Tianjin [15ZXDSGX00050, 16ZXFWGX00010, 17YFZ CGX00360]
  5. Training plan of Tianjin University Innovation Team [TD12-5016, TD13-5025]
  6. Key Subject Foundation of Tianjin [15JCYBJC46500]
  7. Training plan of Tianjin 131 Innovation Talent Team [TD2015-23]

Abstract
Compared with traditional network tasks, the emerging Internet of Vehicles (IoV) places higher demands on network bandwidth and delay. However, the limited computing resources and battery capacity of existing mobile devices make these requirements hard to meet, so completing task offloading and computation with low task delay and low energy consumption is the central issue. Targeting the task offloading system of the IoV, this paper models a scenario with multiple MEC servers and proposes a dynamic task offloading scheme based on deep reinforcement learning. The scheme improves on the traditional Q-Learning algorithm by combining deep learning with reinforcement learning, avoiding the curse of dimensionality that Q-Learning suffers from. Simulation results show that the proposed algorithm achieves better delay, energy consumption, and total system overhead under different numbers of tasks and wireless channel bandwidths.
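The core idea the abstract describes, replacing the Q-table with a neural network so that large state spaces remain tractable, can be illustrated with a minimal sketch. This is not the paper's actual implementation; the state features (task size, required CPU cycles, channel gain, server load), the action set (compute locally or offload to one of two MEC servers), and all dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration (not from the paper):
# state = [task size, CPU cycles required, channel gain, server load]
STATE_DIM, HIDDEN, N_ACTIONS = 4, 16, 3   # actions: local, MEC 1, MEC 2

# Two-layer network approximating Q(s, a) -- this replaces the Q-table,
# which would grow exponentially with the number of state variables.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(s):
    """Forward pass: returns hidden activations and one Q-value per action."""
    h = np.maximum(0.0, s @ W1)           # ReLU hidden layer
    return h, h @ W2

def act(s, eps=0.1):
    """Epsilon-greedy choice of offloading target."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(s)[1]))

def td_update(s, a, r, s_next, gamma=0.9, lr=0.01):
    """One temporal-difference step on the network weights
    (reward r would encode negative delay/energy cost)."""
    global W1, W2
    h, q = q_values(s)
    target = r + gamma * np.max(q_values(s_next)[1])
    err = q[a] - target                   # TD error for the taken action
    # Manual gradients of 0.5 * err^2 w.r.t. W2 and W1.
    gW2 = np.outer(h, np.eye(N_ACTIONS)[a]) * err
    gh = err * W2[:, a] * (h > 0)
    gW1 = np.outer(s, gh)
    W2 -= lr * gW2
    W1 -= lr * gW1
    return err

# One illustrative interaction: observe a state, pick an offloading action.
s = rng.random(STATE_DIM)
a = act(s)
```

A full DQN additionally uses experience replay and a separate target network for stability; the sketch keeps only the function-approximation idea that distinguishes the deep approach from tabular Q-Learning.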
