4.8 Article

Computation Offloading Method Using Stochastic Games for Software-Defined-Network-Based Multiagent Mobile Edge Computing

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 20, Pages 17620-17634

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2023.3277541

Keywords

Computation offloading; mobile edge computing (MEC); multiagent reinforcement learning (MARL); resource allocation; stochastic game

In the scenario of Industry 4.0, mobile smart devices face challenges in processing massive amounts of data. To address this issue, a software-defined network-based mobile edge computing system is proposed to offload computation tasks to edge servers, reducing processing latency and energy consumption. A stochastic game-based computation offloading model is established, demonstrating the achievement of Nash Equilibrium. The proposed stochastic game-based resource allocation algorithm with prioritized experience replays (SGRA-PERs) outperforms other algorithms in reducing processing delay and energy consumption, even in large-scale MEC systems.
In the scenario of Industry 4.0, mobile smart devices (SDs) on production lines have to process massive amounts of data. These computing tasks sometimes far exceed the computing capability of SDs and require considerable energy and time to process, so effectively reducing energy consumption and latency is a pressing problem. To this end, we first propose a software-defined network (SDN)-based mobile edge computing (MEC) system in which SDs can offload computation tasks to edge servers to decrease processing latency and avoid wasting energy. At the same time, taking advantage of SDN's programmability, scalability, and separation of the control plane and the data plane, an SDN controller manages the edge devices within the MEC system. Second, based on a stochastic game, we study the computation offloading and resource allocation problems in the MEC system and establish a stochastic game-based computation offloading model. Furthermore, we prove that the multiuser stochastic game in this system can reach a Nash equilibrium. We then treat each SD as an independent agent and design a stochastic game-based resource allocation algorithm with prioritized experience replays (SGRA-PERs) to minimize energy consumption and processing latency using multiagent reinforcement learning. Experimental results demonstrate that the proposed SGRA-PER outperforms the MADDPG, QMIX, and MAPPO algorithms, significantly reducing processing delay and energy consumption through dynamic resource allocation. Moreover, SGRA-PER maintains high performance as the number of SDs increases, making it applicable to large-scale MEC systems.
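The abstract attributes part of SGRA-PER's performance to prioritized experience replay. As a rough, generic illustration of that mechanism only (not the authors' SGRA-PER implementation), the Python sketch below shows a proportional prioritized replay buffer such as each agent could maintain; the class name, transition fields, and the hyperparameters alpha, beta, and epsilon are assumptions for illustration, not taken from the paper.

import random
from collections import namedtuple

# Hypothetical transition layout for one agent's offloading decisions.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

class PrioritizedReplayBuffer:
    """Generic proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6, epsilon=1e-5):
        self.capacity = capacity
        self.alpha = alpha          # how strongly TD error shapes sampling
        self.epsilon = epsilon      # keeps every priority strictly positive
        self.buffer = []
        self.priorities = []
        self.pos = 0

    def push(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_prio)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sample proportionally to priority**alpha and return
        # importance-sampling weights to correct the induced bias.
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [p / total for p in scaled]
        indices = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-beta) for i in indices]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        batch = [self.buffer[i] for i in indices]
        return batch, indices, weights

    def update_priorities(self, indices, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, err in zip(indices, td_errors):
            self.priorities[i] = abs(err) + self.epsilon

In a multiagent setting of the kind the abstract describes, each SD's agent would push its offloading transitions into such a buffer and refresh priorities with the TD errors from its learner; how this is integrated with the stochastic-game formulation is specific to the paper and not reproduced here.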
