4.7 Article

Deep Reinforcement Learning-Based Adaptive Computation Offloading for MEC in Heterogeneous Vehicular Networks

Journal

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume 69, Issue 7, Pages 7916-7929

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TVT.2020.2993849

Keywords

Task analysis; Delays; Wireless communication; Bandwidth; Energy consumption; Servers; Data communication; Vehicular networks; wireless channels; reinforcement learning; task offloading

Funding

  1. National Natural Science Foundation of China [61572229, 6171101066, 61872161]
  2. Jilin Provincial Science and Technology Development Foundation of China [20170204074GX, 20180201068GX, 20180101057JC]
  3. Jilin Provincial International Cooperation Foundation of China [20180414015GH]
  4. Jilin Provincial Education Department Scientific Research Planning Foundation of China [JJKH20200618KJ]
  5. Jilin Provincial Science and Technology Planning Project of China [2018C036-1]


Vehicular networks require efficient and reliable data communication to maintain low latency. Minimizing energy consumption and data communication delay is challenging while vehicles are moving and wireless channels and bandwidth are time-varying. With the help of emerging mobile edge computing (MEC) servers, vehicles and roadside units (RSUs) can offload computing tasks to the MEC server associated with a base station (BS). However, the offloading environment, e.g., wireless channel states and available bandwidth, is unstable, so ensuring efficient computation offloading under such conditions is a challenge. In this work, we design a task computation offloading model for a heterogeneous vehicular network; the model accounts for multiple stochastic tasks and time-varying wireless channels and bandwidth. To trade off the cost of energy consumption against the cost of data transmission delay, and to avoid the curse of dimensionality caused by a large action space, we propose an adaptive computation offloading method based on deep reinforcement learning (ACORL) that can address continuous action spaces. ACORL adds an Ornstein-Uhlenbeck (OU) noise vector to the action, with a different factor for each action dimension, to facilitate exploration. Multiple transmission devices can execute tasks locally or offload them to the MEC server. Moreover, ACORL accounts for the variation of wireless channels and available bandwidth between adjacent time slots. Numerical results illustrate that the proposed ACORL effectively learns the optimal policy and outperforms Dueling DQN and a greedy policy in the stochastic environment.
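The OU noise mentioned in the abstract is a temporally correlated exploration process commonly paired with continuous-action deep RL methods such as DDPG. The sketch below is a minimal, generic implementation of OU noise with a per-dimension scale factor; the class name, hyperparameters (`theta`, `sigma`, `dt`), and default values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: mean-reverting, temporally correlated
    noise added to a continuous action for exploration."""

    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=None):
        self.mu = mu * np.ones(action_dim)          # long-run mean per dimension
        self.theta = theta                          # mean-reversion rate
        self.sigma = np.asarray(sigma)              # scalar or per-dimension scale
        self.dt = dt                                # time-step size
        self.rng = np.random.default_rng(seed)
        self.state = self.mu.copy()

    def reset(self):
        """Reset the process to its mean at the start of an episode."""
        self.state = self.mu.copy()

    def sample(self):
        # dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        dx = (self.theta * (self.mu - self.state) * self.dt
              + self.sigma * np.sqrt(self.dt)
              * self.rng.standard_normal(len(self.state)))
        self.state = self.state + dx
        return self.state
```

Passing a vector for `sigma` gives each action dimension its own noise scale, matching the abstract's "different factors for each action"; during training the sampled noise would be added to the actor's deterministic output before clipping to the valid action range.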

