Article

Dynamic Offloading Loading Optimization in Distributed Fault Diagnosis System with Deep Reinforcement Learning Approach

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 7

Publisher

MDPI
DOI: 10.3390/app13074096

Keywords

mobile edge computing; multi-terminals offloading; mechanical fault diagnosis; reinforcement learning

This paper proposes a novel intelligent fault diagnosis system framework that effectively addresses task processing delays and increased computational complexity. Reasonable resource allocation optimization improves performance, especially in multi-terminal offloading systems. Deep reinforcement learning strategies, specifically the deep Q-learning network (DQN) and the deep deterministic policy gradient (DDPG), are used to learn the computational offloading policies adaptively and efficiently. Numerical results demonstrate that both strategies outperform traditional non-learning schemes.
Artificial intelligence and distributed algorithms have been widely used in mechanical fault diagnosis amid the explosive growth of diagnostic data. This paper presents a novel intelligent fault diagnosis system framework that allows intelligent terminals to offload computational tasks to mobile edge computing (MEC) servers, effectively addressing the problems of task processing delays and increased computational complexity. Because the resources at the MEC servers and the intelligent terminals are limited, reasonable resource allocation optimization can improve performance, especially in a multi-terminal offloading system. In this study, to minimize the task computation delay, we jointly optimize the local content splitting ratio, the transmission/computation power allocation, and the MEC server selection in a dynamic environment with stochastic task arrivals. The challenging dynamic joint optimization problem is formulated as a reinforcement learning (RL) problem in which the computational offloading policies are designed to minimize the long-term average delay cost. Two deep RL strategies, the deep Q-learning network (DQN) and the deep deterministic policy gradient (DDPG), are adopted to learn the computational offloading policies adaptively and efficiently. The proposed DQN strategy treats the MEC server selection as its only action and uses a convex optimization approach to obtain the local content splitting ratio and the transmission/computation power allocation. The DDPG strategy, by contrast, takes all dynamic variables as actions, including the local content splitting ratio, the transmission/computation power allocation, and the MEC server selection. Numerical results demonstrate that both proposed strategies perform better than traditional non-learning schemes. The DDPG strategy outperforms the DQN strategy in all simulated cases, achieving the minimal task computation delay thanks to its ability to learn all variables online.
