Article

Task offloading for vehicular edge computing with edge-cloud cooperation

Journal

World Wide Web

Publisher

SPRINGER
DOI: 10.1007/s11280-022-01011-8

Keywords

Task offloading; Vehicular edge computing; Edge-cloud computing cooperation; Deep reinforcement learning; Deep Q-network

Funding

  1. Project of Key Science Foundation of Yunnan Province [202101AS070007]
  2. Expert Workstation of Yunnan Province [202105AF150013]
  3. National Natural Science Foundation of China [61862065, 12163004]
  4. Major Project of Science and Technology of Yunnan Province [202002AD080002]
  5. Project of Science and Technology of Yunnan Province [202001AT070135]

Abstract

This paper proposes an efficient offloading scheme based on deep reinforcement learning for VEC with edge-cloud computing cooperation, aiming to meet the low-latency demands of computation-intensive vehicular applications. The scheme integrates the computation resources of vehicles, edge servers, and the cloud server to minimize the average processing delay of tasks.
Vehicular edge computing (VEC) is emerging as a novel computing paradigm to meet the low-latency demands of computation-intensive vehicular applications. However, most existing offloading schemes do not take the dynamic edge-cloud computing environment into account, resulting in high processing delay. In this paper, we propose an efficient offloading scheme based on deep reinforcement learning for VEC with edge-cloud computing cooperation, where computation-intensive tasks can be executed locally or offloaded to an edge server or a cloud server. By jointly considering i) the dynamic edge-cloud computing environment and ii) the need for fast offloading decisions, we leverage deep reinforcement learning to minimize the average processing delay of tasks by effectively integrating the computation resources of vehicles, edge servers, and the cloud server. Specifically, a deep Q-network (DQN) is used to adaptively learn optimal offloading schemes in the dynamic environment by balancing exploration and exploitation. Furthermore, the offloading scheme can be learned quickly by speeding up the convergence of the DQN training process, which enables fast offloading decisions. We conduct extensive simulation experiments, and the results show that the proposed offloading scheme achieves good performance.
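The abstract does not spell out the DQN formulation, so the sketch below is a minimal, hypothetical illustration of the kind of epsilon-greedy DQN decision loop it describes. All specifics are assumptions, not the authors' design: a 4-feature state (e.g., task size, local queue, edge queue, uplink rate), three offloading actions (local / edge / cloud), and a reward equal to the negative processing delay, so that maximizing return minimizes average delay. The delay model and state transitions are placeholders, and experience replay and a target network (standard DQN components) are omitted for brevity.

```python
import random
import torch
import torch.nn as nn

# Assumed problem sizes: 4 state features (e.g., task size, local queue,
# edge queue, uplink rate) and 3 offloading actions (local / edge / cloud).
STATE_DIM, N_ACTIONS = 4, 3

class QNetwork(nn.Module):
    """Small MLP approximating Q(s, a) for each offloading action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.9, 0.1  # assumed discount factor and exploration rate

def select_action(state):
    """Epsilon-greedy: explore with probability epsilon, else exploit argmax Q."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state):
    """One-step TD update toward r + gamma * max_a' Q(s', a')."""
    q_pred = q_net(state)[action]
    with torch.no_grad():
        q_target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy interaction: reward is the negative task processing delay, so maximizing
# return minimizes average delay (a stand-in for the paper's objective).
state = torch.rand(STATE_DIM)
for _ in range(100):
    action = select_action(state)
    delay = random.uniform(0.1, 1.0)    # placeholder delay model
    next_state = torch.rand(STATE_DIM)  # placeholder environment transition
    td_update(state, action, torch.tensor(-delay), next_state)
    state = next_state
```

With this reward choice, the greedy action at convergence is the offloading target with the lowest expected delay, and the epsilon parameter controls the exploration/exploitation balance the abstract mentions.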
