Article

Distributed Resource Scheduling for Large-Scale MEC Systems: A Multiagent Ensemble Deep Reinforcement Learning With Imitation Acceleration

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 9, Issue 9, Pages 6597-6610

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2021.3113872

Keywords

Task analysis; Training; Servers; Resource management; Reinforcement learning; Internet of Things; Processor scheduling; Distributed deep reinforcement learning (DRL); imitation learning; Levy flight; multiagent reinforcement learning; resource scheduling

Funding

  1. National Natural Science Foundation of China (NSFC) [41604117, 41904127, 61620106011, U1705263]
  2. Hunan Provincial Natural Science Foundation of China [2020JJ4428, 2020JJ5105, 2021JJ30455]
  3. Hunan Provincial Science Technology Project Foundation [2018TP1018, 2018RS3065]


Abstract

In large-scale mobile edge computing (MEC) systems, task latency and energy consumption are critical for massive resource-consuming and delay-sensitive Internet of Things devices (IoTDs). Against this background, we propose a distributed intelligent resource scheduling (DIRS) framework to minimize the sum of task latency and energy consumption for all IoTDs, which can be formulated as a mixed-integer nonlinear programming problem. The DIRS framework combines centralized training relying on global information with distributed decision making by an agent deployed in each MEC server. Specifically, we first introduce a novel multiagent ensemble-assisted distributed deep reinforcement learning (DRL) architecture, which simplifies the overall neural network structure of each agent by partitioning the state space and improves the performance of each single agent by combining the decisions of all agents. Second, we apply action refinement to enhance the exploration ability of the proposed DIRS framework, where near-optimal state-action pairs are obtained by a novel Levy flight search. Finally, an imitation acceleration scheme is presented to pretrain all the agents, which significantly accelerates the learning process of the proposed framework by learning from a small amount of expert demonstration data. Simulation results in three typical scenarios demonstrate that the proposed DIRS framework is efficient and outperforms existing benchmark schemes.
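The abstract's action-refinement step searches for near-optimal state-action pairs with a Levy flight. The paper's exact search procedure is not reproduced here; the following is a minimal generic sketch of Levy-flight local search, assuming Mantegna's algorithm for drawing Levy-stable step lengths and a hypothetical cost function `score_fn` (e.g., weighted latency plus energy) that the refinement minimizes. The function names and parameters (`levy_step`, `levy_refine`, `step_scale`) are illustrative, not from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=None):
    """Draw Levy-stable step lengths via Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)   # numerator: Gaussian with scale sigma
    v = rng.normal(0.0, 1.0, size)     # denominator: standard Gaussian
    return u / np.abs(v) ** (1 / beta)  # heavy-tailed step lengths

def levy_refine(action, score_fn, n_iters=50, step_scale=0.1, rng=None):
    """Perturb a candidate action with Levy-distributed steps, keeping improvements.

    Assumes lower score is better (e.g., latency + energy cost).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best = np.asarray(action, dtype=float)
    best_score = score_fn(best)
    for _ in range(n_iters):
        cand = best + step_scale * levy_step(size=best.shape[0], rng=rng)
        s = score_fn(cand)
        if s < best_score:
            best, best_score = cand, s
    return best, best_score
```

The heavy tail of the Levy distribution mixes many small local moves with occasional long jumps, which is what gives this kind of search better exploration than purely Gaussian perturbation.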

