Article

Optimization for computational offloading in multi-access edge computing: A deep reinforcement learning scheme

Journal

COMPUTER NETWORKS
Volume 204

Publisher

ELSEVIER
DOI: 10.1016/j.comnet.2021.108690

Keywords

Multi-access edge computing; Computation offloading; Markov decision process; Reinforcement learning

Funding

  1. National Natural Science Foundation of China [61572229, 6171101066]
  2. Jilin Provincial Science and Technology Development Foundation [20190302106GX, 20200501012GX]
  3. Jilin Province Education Department, Scientific Research Planning Foundation of China [JJKH20200618KJ]

Abstract

This study presents a computation offloading scheme based on reinforcement learning and deep reinforcement learning for handling the workloads of wireless users. The proposed scheme can learn the optimal offloading decision without prior knowledge and outperforms baseline algorithms.
Owing to their limited computing power and battery level, wireless users (WUs) can hardly handle compute-intensive workflows with their local processors. Multi-access edge computing (MEC) servers attached to base stations have ample computing power and communication resources, which can be used to address the computation tasks or workloads of WUs. In this study, we design a framework with multiple static and vehicle-assisted MEC servers to handle the workloads offloaded by WUs. To obtain the optimal computation offloading scheme that minimizes the weighted sum cost, which comprises the transmission and execution cost, the energy consumption cost, and the communication bandwidth cost, we model the offloading decision optimization problem as a Markov decision process (MDP). We then propose a partial computation offloading scheme based on reinforcement learning (RL) to address the absence of prior knowledge. The proposed scheme can learn the optimal offloading decision under stochastic workload arrivals, changing channel states, and the dynamic distance between WUs and the edge servers. Moreover, to avoid the curse of dimensionality caused by the complex state and action spaces, we present an improved computation offloading method based on deep RL (DRL) that learns the optimal offloading policy using deep neural networks. Extensive numerical results illustrate that the proposed RL- and DRL-based algorithms can autonomously learn the optimal computation offloading policy with no prior knowledge, and their performance is better than that of four baseline algorithms.
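The paper itself provides no code; the sketch below is only a rough illustration of the general idea described in the abstract, not the authors' system model. It builds a toy partial-offloading MDP whose cost is a weighted sum of a delay-like transmission/execution term, an energy term, and a bandwidth term, and learns an offloading ratio with tabular Q-learning. All state variables, cost weights, discretizations, and dynamics are hypothetical placeholders chosen for brevity.

```python
# Illustrative sketch only: a toy partial-offloading MDP with a weighted-sum
# cost, solved by tabular Q-learning. States, costs, and parameters are
# hypothetical and do not reproduce the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNEL = 3                         # discretized channel-gain levels
N_QUEUE = 5                           # discretized local task-queue lengths
ACTIONS = np.linspace(0.0, 1.0, 5)    # offloading ratio: fraction sent to the MEC server
W_DELAY, W_ENERGY, W_BW = 0.5, 0.3, 0.2   # assumed weights of the sum cost


def step(channel, queue, ratio):
    """Return (cost, next_channel, next_queue) for one decision epoch."""
    # Delay-like cost: local part slowed by the queue, offloaded part by the channel.
    delay = (1 - ratio) * (1 + queue) + ratio * (1 + (N_CHANNEL - 1 - channel))
    energy = (1 - ratio) * 2.0 + ratio * 0.5      # local CPU energy vs. radio energy
    bandwidth = ratio * 1.0                        # bandwidth used at the MEC server
    cost = W_DELAY * delay + W_ENERGY * energy + W_BW * bandwidth
    # Stochastic workload arrival and Markovian channel evolution.
    next_queue = int(min(N_QUEUE - 1, max(0, queue - 1 + rng.integers(0, 3))))
    next_channel = int(rng.integers(0, N_CHANNEL))
    return cost, next_channel, next_queue


# Tabular Q-learning over (channel, queue) states and discrete offloading ratios.
Q = np.zeros((N_CHANNEL, N_QUEUE, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

channel, queue = int(rng.integers(0, N_CHANNEL)), int(rng.integers(0, N_QUEUE))
for _ in range(50_000):
    # Epsilon-greedy action selection (we minimize cost, hence argmin).
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else Q[channel, queue].argmin()
    cost, nc, nq = step(channel, queue, ACTIONS[a])
    # TD update toward the minimum expected discounted cost.
    td_target = cost + gamma * Q[nc, nq].min()
    Q[channel, queue, a] += alpha * (td_target - Q[channel, queue, a])
    channel, queue = nc, nq

# Greedy offloading policy: the learned best ratio for each (channel, queue) state.
policy = ACTIONS[Q.argmin(axis=2)]
print(policy)
```

For the DRL variant discussed in the abstract, the Q-table above would be replaced by a neural-network function approximator (for example, a DQN-style network with experience replay) so that large or continuous state and action spaces can be handled without enumerating every state.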

Authors

