4.3 Article

Joint Resource Allocation and Computation Offloading in Mobile Edge Computing for SDN based Wireless Networks

Journal

Journal of Communications and Networks
Volume 22, Issue 1, Pages 1-11

Publisher

Korean Institute of Communications Sciences (KICS)
DOI: 10.1109/JCN.2019.000046

Keywords

Mobile edge computing; resource allocation; software defined cellular networks; task offloading; wireless networks

Funding

  1. Beijing Natural Science Foundation [KZ201911232046]
  2. Municipal Education Committee Joint Funding Project [KZ201911232046]
  3. National Natural Science Foundation of China [61671086, 61629101, 61871041]
  4. 111 Project [B17007]

Abstract

The rapid growth of Internet usage and the distributed computing resources of edge devices create the need for an effective controller to ensure efficient utilization of distributed computing resources in mobile edge computing (MEC). We envision future MEC services in which the quality of experience (QoE) is further enhanced by software defined network (SDN) capabilities that reduce the application-level response time without service disruptions. Although SDN was not proposed specifically for edge computing, it can serve as an enabler that lowers the complexity barriers involved and allows the full potential of edge computing to be realized. In this paper, we investigate the task offloading and resource allocation problem in wireless MEC, aiming to minimize delay while simultaneously saving the battery power of the user device. However, obtaining an optimal policy in such a dynamic task offloading system is challenging. Learning from experience plays a vital role in time-variant dynamic systems, and reinforcement learning (RL) takes a long-term goal into consideration in addition to the immediate reward, which is essential in a dynamic environment. A novel software defined edge cloudlet (SDEC) based RL optimization framework is proposed to tackle task offloading and resource allocation in wireless MEC. Specifically, Q-learning and cooperative Q-learning based reinforcement learning schemes are proposed for this intractable problem. Simulation results show that the proposed scheme achieves 31.39% and 62.10% reductions in the sum delay compared with benchmark methods such as traditional Q-learning with a random algorithm and Q-learning with an epsilon-greedy policy.
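
To make the Q-learning component of the abstract concrete, below is a minimal Python sketch of a tabular Q-learning loop for a binary offloading decision (compute locally vs. offload to the edge cloudlet). The state space, the step() environment function, and the delay/energy weighting are illustrative assumptions only; the paper's actual MDP formulation, SDEC framework, and cooperative multi-agent extension are not given in the abstract and are not reproduced here.

# Illustrative tabular Q-learning for a binary offloading decision.
# The environment model below is hypothetical; the paper's actual state,
# action, and reward definitions are not specified in the abstract.
import random
import numpy as np

N_STATES = 10                       # assumed: discretized queue/channel states
N_ACTIONS = 2                       # 0 = compute locally, 1 = offload to MEC
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    # Hypothetical environment: reward is the negative weighted sum of delay
    # and energy, so maximizing reward minimizes delay and battery drain.
    delay = random.uniform(1.0, 5.0) if action == 0 else random.uniform(0.5, 3.0)
    energy = random.uniform(2.0, 4.0) if action == 0 else random.uniform(0.5, 1.5)
    reward = -(delay + 0.5 * energy)  # 0.5 is an assumed delay/energy weight
    return random.randrange(N_STATES), reward

state = random.randrange(N_STATES)
for _ in range(5000):
    # epsilon-greedy action selection (one of the baselines cited above)
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # standard Q-learning update rule
    Q[state, action] += ALPHA * (reward + GAMMA * np.max(Q[next_state]) - Q[state, action])
    state = next_state

The cooperative Q-learning variant mentioned in the abstract would additionally share learned Q-values across agents; that extension is not reflected in this single-agent sketch.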
