Article

Computation Offloading Method Using Stochastic Games for Software-Defined-Network-Based Multiagent Mobile Edge Computing

Journal

IEEE Internet of Things Journal
Volume 10, Issue 20, Pages 17620-17634

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JIOT.2023.3277541

Keywords

Computation offloading; mobile edge computing (MEC); multiagent reinforcement learning (MARL); resource allocation; stochastic game

Abstract

In the scenario of Industry 4.0, mobile smart devices (SDs) on production lines have to process massive amounts of data. These computing tasks sometimes far exceed the computing capability of the SDs and require substantial energy and time to process, so effectively reducing energy consumption and latency is a pressing problem. To this end, we first propose a software-defined network (SDN)-based mobile edge computing (MEC) system in which SDs can offload computation tasks to edge servers to decrease processing latency and avoid wasting energy. At the same time, taking advantage of SDN's programmability, scalability, and separation of the control plane from the data plane, an SDN controller manages the edge devices within the MEC system. Second, we study the computation offloading and resource allocation problems in this MEC system and establish a stochastic game-based computation offloading model, and we prove that the multiuser stochastic game in this system achieves a Nash equilibrium. We then treat each SD as an independent agent and design a stochastic game-based resource allocation algorithm with prioritized experience replays (SGRA-PER) that uses multiagent reinforcement learning to minimize energy consumption and processing latency. Experimental results demonstrate that the proposed SGRA-PER outperforms the MADDPG, QMIX, and MAPPO algorithms, significantly reducing processing delay and energy consumption through dynamic resource allocation. Moreover, SGRA-PER maintains high performance as the number of SDs increases, so it can be applied in large-scale MEC systems.
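The SGRA-PER described above couples a stochastic offloading game with multiagent reinforcement learning and prioritized experience replay. As a rough illustration of the replay component only, the Python sketch below implements proportional prioritization with importance-sampling weights; the class name, the alpha/beta hyperparameters, and the transition layout are assumptions made for illustration, not the paper's exact algorithm.

# Minimal sketch of a proportional prioritized experience replay buffer of the
# kind paired with multiagent reinforcement learning in SGRA-PER-style methods.
# Class name, hyperparameters, and transition layout are illustrative assumptions.
import numpy as np


class PrioritizedReplayBuffer:
    def __init__(self, capacity=100_000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly TD errors shape the sampling distribution
        self.transitions = []       # (state, action, cost, next_state) tuples per agent
        self.priorities = []
        self._next_idx = 0

    def add(self, transition, td_error=1.0):
        # New transitions enter with priority |td_error|^alpha so informative
        # experiences (large TD error) are replayed more often.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.transitions) < self.capacity:
            self.transitions.append(transition)
            self.priorities.append(priority)
        else:
            self.transitions[self._next_idx] = transition
            self.priorities[self._next_idx] = priority
        self._next_idx = (self._next_idx + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sample indices proportionally to priority and return importance-sampling
        # weights that correct the bias introduced by non-uniform sampling.
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idxs = np.random.choice(len(self.transitions), size=batch_size, p=probs)
        weights = (len(self.transitions) * probs[idxs]) ** (-beta)
        weights = weights / weights.max()
        return [self.transitions[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Refresh priorities after the critic re-evaluates the sampled batch.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha

In such a setup, each SD agent would push transitions whose cost combines processing delay and energy (for example, a weighted sum with assumed tuning weights), and the multiagent learner would draw prioritized batches from this buffer when updating its critics.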
