Article

Dependency-Aware Computation Offloading in Mobile Edge Computing: A Reinforcement Learning Approach

Journal

IEEE Access
Volume 7, Pages 134742-134753

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/ACCESS.2019.2942052

Keywords

Task analysis; Mobile handsets; Servers; Energy consumption; Computational modeling; Adaptation models; Mobile applications; Mobile edge computing; offloading; resource allocation; reinforcement learning; task dependency

Funding

  1. National Natural Science Foundation of China [61701074, 61772480, 61402425, 61673354]
  2. Fundamental Research Funds for the Central Universities, China University of Geosciences, Wuhan [G1323541861]
  3. Sichuan Province Application Fundamental Research Project [2018JY0379]

Abstract

Mobile edge computing (MobEC) builds an Information Technology (IT) service environment that brings cloud-computing capabilities to the edge of mobile networks. To address the limited battery power and computation capability of mobile devices, task offloading to MobEC is used to reduce service latency and maintain high service efficiency. However, most existing schemes focus on one-shot offloading and give little consideration to task dependency. Because modern communication networks have become increasingly complex and dynamic, a more comprehensive and adaptive approach is needed that accounts for both the energy constraint and the inherent dependency among tasks. To this end, in this paper we study the problem of dependency-aware task-offloading decisions in MobEC, aiming to minimize the execution time of mobile applications under constraints on energy consumption. To solve this problem, we propose a model-free approach based on reinforcement learning (RL), namely a Q-learning approach that adaptively learns to jointly optimize the offloading decision and energy consumption by interacting with the network environment. Simulation results show that our RL-based approach achieves a significant reduction in total execution time with comparably less energy consumption.
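The abstract does not give implementation details, but the following minimal sketch illustrates the kind of tabular Q-learning loop it describes: the state tracks which task in a dependency chain is ready to run, the action decides whether that task executes locally or is offloaded to the edge server, and the reward penalizes execution time plus a penalty when the device's energy budget is exceeded. All parameters (task sizes, CPU frequencies, uplink rate, power values, energy budget) are illustrative assumptions, not values from the paper, and the simple chain dependency stands in for the more general task-dependency model studied there.

import random

# --- Illustrative environment parameters (assumptions, not from the paper) ---
TASK_CYCLES = [4e8, 6e8, 3e8, 5e8]      # CPU cycles required by each task in a chain
TASK_DATA   = [2e6, 1e6, 3e6, 1e6]      # bits to upload if a task is offloaded
F_LOCAL, F_EDGE = 1e9, 5e9              # local / edge CPU frequency (Hz)
RATE = 5e6                               # uplink transmission rate (bit/s)
P_LOCAL, P_TX = 0.9, 0.3                 # local computing / transmit power (W)
E_BUDGET = 1.0                           # energy budget of the mobile device (J)

ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.9, 0.1, 5000

def step(task, action):
    """Return (latency, device energy) for running `task` locally (0) or offloaded (1)."""
    if action == 0:                                  # local execution
        t = TASK_CYCLES[task] / F_LOCAL
        return t, P_LOCAL * t
    t_up = TASK_DATA[task] / RATE                    # upload, then compute at the edge
    return t_up + TASK_CYCLES[task] / F_EDGE, P_TX * t_up

# State = index of the next ready task (tasks form a dependency chain).
Q = [[0.0, 0.0] for _ in range(len(TASK_CYCLES) + 1)]

for _ in range(EPISODES):
    energy_used = 0.0
    for s in range(len(TASK_CYCLES)):
        # Epsilon-greedy choice between local execution (0) and offloading (1).
        a = random.randint(0, 1) if random.random() < EPS else int(Q[s][1] > Q[s][0])
        latency, energy = step(s, a)
        energy_used += energy
        # Reward: negative latency, with a penalty if the energy budget is violated.
        reward = -latency - (10.0 if energy_used > E_BUDGET else 0.0)
        # Standard Q-learning temporal-difference update.
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s + 1]) - Q[s][a])

policy = ["offload" if q[1] > q[0] else "local" for q in Q[:-1]]
print("learned offloading decision per task:", policy)

In the paper the state and reward are richer, capturing a general task-dependency structure and joint resource allocation, but the model-free update shown above is the standard Q-learning step that such an approach relies on.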

