Article

Knowledge-Driven Service Offloading Decision for Vehicular Edge Computing: A Deep Reinforcement Learning Approach

Journal

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
卷 68, 期 5, 页码 4192-4203

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TVT.2019.2894437

Keywords

Internet of Vehicle; service offloading decision; multi-task; knowledge driven; deep reinforcement learning

Funding

  1. National Natural Science Foundation of China [61471063, 61671079, 61771068, 61773071]
  2. Beijing Municipal Natural Science Foundation [4182041]

Abstract

Smart vehicles form the Internet of Vehicle (IoV), which can execute various intelligent services. Although the computation capability of a single vehicle is limited, multiple types of edge computing nodes provide heterogeneous resources for intelligent vehicular services. When offloading a complex service to a vehicular edge computing node, the choice of destination must weigh numerous factors. Existing works mostly formulate the offloading decision as a resource scheduling problem with single or multiple objective functions and constraints, for which customized heuristic algorithms are explored. However, offloading the multiple data-dependent tasks of a complex service is a difficult decision, as an optimal solution must account for the resource requirements, the access network, user mobility, and, importantly, the data dependence. Inspired by recent advances in machine learning, we propose a knowledge-driven (KD) service offloading decision framework for IoV, which derives the optimal policy directly from the environment. We formulate the offloading decision for the multiple tasks as a long-term planning problem and apply recent deep reinforcement learning to obtain the optimal solution. Using the learned offloading knowledge, the framework can account for the future data dependence of subsequent tasks when making the decision for the current task. Moreover, the framework supports pre-training at a powerful edge computing node and continual online learning while the vehicular service is executed, so that it can adapt to environment changes and learn policies that are sensible in foresight. Simulation results show that the KD service offloading decision converges quickly, adapts to different conditions, and outperforms a greedy offloading decision algorithm.
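The long-term planning formulation described in the abstract can be illustrated with a minimal tabular Q-learning sketch (the paper itself uses deep reinforcement learning; the two-location model, all cost values, and every name below are illustrative assumptions, not the authors' method):

```python
import random

# Toy model (assumed, not from the paper): a service is a chain of
# data-dependent tasks; each task runs locally on the vehicle (action 0)
# or on an edge node (action 1).
N_TASKS = 4
COMPUTE_COST = {0: 5.0, 1: 1.0}   # assumed: local execution is slower
TRANSFER_COST = 2.0               # assumed data-transfer cost on migration

def step_cost(prev_loc, action):
    """Latency of running the next task at `action`; the data dependence on
    the previous task adds a transfer cost when the placement changes."""
    cost = COMPUTE_COST[action]
    if prev_loc is not None and prev_loc != action:
        cost += TRANSFER_COST
    return cost

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Q-learning over states (task index, previous task's location)."""
    rng = random.Random(seed)
    Q = {}
    q = lambda s: Q.setdefault(s, [0.0, 0.0])
    for _ in range(episodes):
        prev = None
        for t in range(N_TASKS):
            s = (t, prev)
            if rng.random() < eps:
                a = rng.randrange(2)                     # explore
            else:
                a = max((0, 1), key=lambda x: q(s)[x])   # exploit
            r = -step_cost(prev, a)                      # reward = -latency
            # The future term is what lets the agent weigh the data
            # dependence of the *following* tasks, as the abstract describes.
            future = gamma * max(q((t + 1, a))) if t + 1 < N_TASKS else 0.0
            q(s)[a] += alpha * (r + future - q(s)[a])
            prev = a
    return Q

def greedy_plan(Q):
    """Greedy placement for the whole task chain from the learned Q-values."""
    plan, prev = [], None
    for t in range(N_TASKS):
        a = max((0, 1), key=lambda x: Q.get((t, prev), [0.0, 0.0])[x])
        plan.append(a)
        prev = a
    return plan
```

Under these assumed costs, the learned greedy plan should place every task on the edge node, since keeping dependent tasks co-located avoids the transfer penalty.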

