Article

Task offloading of cooperative intrusion detection system based on Deep Q Network in mobile edge computing

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 206, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2022.117860

Keywords

Edge computing; Intrusion detection; Task offloading; Reinforcement learning; Deep Q Network

Funding

  1. Shaanxi Provincial Emergency Management Department [2021HZ1139]
  2. Xian Science and Technology Bureau [21XJZZ0024]
  3. Department of Education of Shaanxi Province [18JK0323]
  4. Social Science Foundation of Shaanxi Province [2019M026]
  5. Shaanxi Federation of Social Sciences [2019C080]
  6. Thirteenth Five-Year Plan Project in Shaanxi Province [SGH18H089]
  7. Xian Polytechnic University Social Service Research Project [2021ZSFP10]


In this paper, a collaborative intrusion detection system architecture applied to mobile edge computing is proposed, along with a task offloading scheduling algorithm based on Deep Q Network to address packet loss issues under heavy traffic. Experiments show that the proposed scheme outperforms comparative algorithms in terms of response time, energy consumption, and packet loss rate.
Because of the performance and resource limitations of wireless devices at the network edge, an intrusion detection system deployed on a mobile edge network suffers severe packet loss when faced with heavy traffic. To address this, a collaborative intrusion detection system (CIDS) architecture for mobile edge computing is proposed, which can offload part of the detection workload to an intrusion detection system with better performance and resources on the edge server. On this basis, a task offloading scheduling algorithm based on Deep Q Network (DQN) is proposed. First, the delay, energy consumption, and offloading decision models are established. Then, the task scheduling process is formulated as a Markov decision process, and the corresponding state and action spaces and value function are defined. Finally, the problem of excessively large state and action spaces in Q-learning is handled by the Deep Q Network. Experiments show that the proposed scheme outperforms the comparative algorithms in terms of response time, energy consumption, and packet loss rate.
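The DQN-based offloading decision described in the abstract can be illustrated with a minimal sketch. This is a toy, not the paper's implementation: the state features (task size, channel gain, local queue length), the cost weights in the reward, and the network sizes are all hypothetical placeholders chosen for illustration.

```python
# Illustrative sketch of a DQN making a binary offloading decision
# (0 = detect locally, 1 = offload to the edge server). All parameters
# are hypothetical; the paper's actual models differ.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, HIDDEN = 3, 2, 16  # state: [task size, channel gain, queue]

def init_net():
    """A tiny two-layer Q-network approximating Q(s, a)."""
    return {"W1": rng.normal(0, 0.1, (STATE_DIM, HIDDEN)), "b1": np.zeros(HIDDEN),
            "W2": rng.normal(0, 0.1, (HIDDEN, N_ACTIONS)), "b2": np.zeros(N_ACTIONS)}

def q_values(net, s):
    h = np.maximum(0.0, s @ net["W1"] + net["b1"])  # ReLU hidden layer
    return h @ net["W2"] + net["b2"]

def reward(s, a):
    """Hypothetical cost model: reward is the negative weighted delay + energy."""
    size, gain, queue = s
    if a == 0:   # local detection: delay grows with the queue, energy with size
        delay, energy = size * (1 + queue), 0.5 * size
    else:        # offload: delay and transmit energy depend on channel quality
        delay, energy = size / max(gain, 0.1), 0.2 * size / max(gain, 0.1)
    return -(delay + 0.5 * energy)

def train(episodes=300, gamma=0.9, lr=0.01, eps=0.2):
    net, replay = init_net(), []
    for _ in range(episodes):
        s = rng.uniform(0, 1, STATE_DIM)                 # a task arrives
        a = rng.integers(N_ACTIONS) if rng.random() < eps \
            else int(np.argmax(q_values(net, s)))        # epsilon-greedy action
        replay.append((s, a, reward(s, a), rng.uniform(0, 1, STATE_DIM)))
        # sample a minibatch from the replay buffer and take one gradient step
        for i in rng.integers(len(replay), size=min(8, len(replay))):
            bs, ba, br, bs2 = replay[i]
            target = br + gamma * np.max(q_values(net, bs2))
            h = np.maximum(0.0, bs @ net["W1"] + net["b1"])
            err = (h @ net["W2"] + net["b2"])[ba] - target   # TD error
            net["W2"][:, ba] -= lr * err * h                 # backprop, action ba only
            net["b2"][ba] -= lr * err
            dh = err * net["W2"][:, ba] * (h > 0)
            net["W1"] -= lr * np.outer(bs, dh)
            net["b1"] -= lr * dh
    return net

net = train()
# With a good channel and a long local queue, offloading should tend to score higher.
print(q_values(net, np.array([0.8, 0.9, 0.9])))
```

Replacing the tabular Q-function of classic Q-learning with this function approximator is what lets the method cope with the continuous (hence effectively unbounded) state space the abstract mentions.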
