Article

Knowledge-Driven Service Offloading Decision for Vehicular Edge Computing: A Deep Reinforcement Learning Approach

Journal

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume 68, Issue 5, Pages 4192-4203

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TVT.2019.2894437

Keywords

Internet of Vehicle; service offloading decision; multi-task; knowledge driven; deep reinforcement learning

Funding

  1. National Natural Science Foundation of China [61471063, 61671079, 61771068, 61773071]
  2. Beijing Municipal Natural Science Foundation [4182041]


Abstract

Smart vehicles form the Internet of Vehicles (IoV), which can execute various intelligent services. Although the computation capability of an individual vehicle is limited, multiple types of edge computing nodes provide heterogeneous resources for intelligent vehicular services. When offloading a complex service to a vehicular edge computing node, the destination must be chosen according to numerous factors. Existing studies mostly formulate the offloading decision as a resource scheduling problem with single or multiple objective functions and constraints, for which customized heuristic algorithms are explored. However, offloading the multiple data-dependent tasks of a complex service is a difficult decision, as an optimal solution must account for the resource requirements, the access network, user mobility, and, importantly, the data dependence among tasks. Inspired by recent advances in machine learning, we propose a knowledge-driven (KD) service offloading decision framework for IoV, which derives the optimal policy directly from the environment. We formulate the offloading decision for the multiple tasks as a long-term planning problem and apply recent deep reinforcement learning techniques to obtain the optimal solution. Using the learned offloading knowledge, the framework can take the data dependence of subsequent tasks into account when making the decision for the current task. Moreover, the framework supports pre-training at a powerful edge computing node and continual online learning while the vehicular service is executed, so that it can adapt to environmental changes and learn policies that are sensible in foresight. The simulation results show that the KD service offloading decision converges quickly, adapts to different conditions, and outperforms a greedy offloading decision algorithm.
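The long-term planning idea in the abstract can be sketched with a tabular Q-learning agent as a lightweight stand-in for the paper's deep reinforcement learning approach. Everything below is a hypothetical toy model, not the paper's actual formulation: a chain of four data-dependent tasks, three candidate edge nodes with made-up compute latencies, and a fixed transfer penalty whenever consecutive tasks run on different nodes, so a far-sighted policy learns to avoid needless data movement.

```python
import random

# Hypothetical toy setting (illustrative numbers, not from the paper):
# NUM_TASKS data-dependent tasks offloaded in order to one of NUM_NODES
# edge nodes. Reward = -(compute latency + transfer penalty when the
# previous task ran on a different node).
NUM_TASKS, NUM_NODES = 4, 3
COMPUTE = [[2.0, 1.0, 3.0],   # latency of task t on node a
           [1.5, 2.5, 1.0],
           [3.0, 1.0, 2.0],
           [1.0, 2.0, 2.5]]
TRANSFER = 2.0                # cost of moving intermediate data between nodes

def step(task, prev_node, action):
    """Reward for running `task` on node `action` given the previous placement."""
    cost = COMPUTE[task][action]
    if prev_node is not None and prev_node != action:
        cost += TRANSFER
    return -cost

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning over states (task index, previous node); prev = -1 initially."""
    rng = random.Random(seed)
    Q = {(t, p): [0.0] * NUM_NODES
         for t in range(NUM_TASKS) for p in range(-1, NUM_NODES)}
    for _ in range(episodes):
        prev = -1
        for t in range(NUM_TASKS):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(NUM_NODES)
            else:
                a = max(range(NUM_NODES), key=lambda i: Q[(t, prev)][i])
            r = step(t, None if prev < 0 else prev, a)
            nxt = max(Q[(t + 1, a)]) if t + 1 < NUM_TASKS else 0.0
            Q[(t, prev)][a] += alpha * (r + gamma * nxt - Q[(t, prev)][a])
            prev = a
    return Q

def plan(Q):
    """Greedy offloading plan from the learned Q-table, plus its total latency."""
    prev, path, total = -1, [], 0.0
    for t in range(NUM_TASKS):
        a = max(range(NUM_NODES), key=lambda i: Q[(t, prev)][i])
        total += -step(t, None if prev < 0 else prev, a)
        path.append(a)
        prev = a
    return path, total
```

In this toy instance the learned plan keeps the whole task chain on one node even when another node is slightly faster for a single task, illustrating the abstract's point that a far-sighted decision must weigh the data dependence of subsequent tasks, not just the current task's latency, which a greedy per-task choice would miss.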

