Article

A gradient-based approach for adversarial attack on deep learning-based network intrusion detection systems

Journal

APPLIED SOFT COMPUTING
Volume 137

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2023.110173

Keywords

Network intrusion detection; Network traffic classification; Deep learning; Machine learning; Adversarial attack


Intrusion detection systems play a crucial role in defending networks against security threats. Deep neural networks have shown excellent performance in intrusion detection, but they are vulnerable to adversarial attacks. This paper proposes a new approach using Jacobian Saliency Map to generate adversarial examples for deep learning-based malicious network activity classification. The experiments demonstrate that the proposed method achieves better performance with fewer features compared to other attacks.
Intrusion detection systems are an essential part of any cybersecurity architecture and are critical in defending networks against a variety of security threats. In recent years, deep neural networks have proved their performance and efficiency in various machine learning tasks, including intrusion detection. However, deep learning models have been shown to be highly vulnerable to adversarial attacks. This paper proposes a new approach for performing an adversarial attack against deep learning-based malicious network activity classification. We use the Jacobian Saliency Map to find the best group of features, across different feature sets and perturbation magnitudes, for generating adversarial examples. We evaluate our method on three datasets: CIC-IDS2017, CIC-IDS2018, and CIC-DDoS2019. Our experiments show that the proposed method achieves better performance while using fewer features for adversarial sample generation than other attacks that depend on a higher number of features. Our technique can generate adversarial samples for more than 18% of samples in CIC-IDS2017, 15% of samples in CIC-IDS2018, and 14% of samples in CIC-DDoS2019, using only three features and a perturbation magnitude of 0.1. We provide a deeper analysis of the attack based on its parameters, distance metrics, and target model performance. Our analysis also uses an evaluation model with three criteria: success rates of the best feature sets, average confidence of the adversarial class, and transferability of the adversarial samples. (c) 2023 Elsevier B.V. All rights reserved.
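The abstract describes a Jacobian Saliency Map (JSMA-style) attack: compute the Jacobian of the classifier's class probabilities with respect to the input features, score features by how strongly they push the sample toward the attacker's target class, and perturb only the top few features by a small magnitude (e.g. 0.1). The sketch below is a minimal illustration of that idea, not the paper's actual method: it uses a toy linear-softmax model with random weights as a stand-in for the deep IDS classifier, and all names, dimensions, and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deep IDS classifier: a linear-softmax
# model over normalized flow features (the paper uses deep networks).
n_features, n_classes = 10, 2
W = rng.normal(size=(n_classes, n_features))
b = rng.normal(size=n_classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian(x):
    """Analytic Jacobian d p_c / d x_i for the linear-softmax model."""
    p = softmax(W @ x + b)
    # dp_c/dz_k = p_c * (delta_ck - p_k); chain rule through z = W x + b
    dp_dz = np.diag(p) - np.outer(p, p)
    return dp_dz @ W                     # shape (n_classes, n_features)

def jsma_saliency(x, target):
    """JSMA-style saliency: reward features whose increase raises the
    target-class probability while lowering all other classes."""
    J = jacobian(x)
    dt = J[target]                       # gradient of the target class
    do = J.sum(axis=0) - dt              # summed gradient of other classes
    return np.where((dt > 0) & (do < 0), dt * np.abs(do), 0.0)

x = rng.random(n_features)               # a sample with features in [0, 1]
target = 0                               # class the attacker wants predicted
eps = 0.1                                # perturbation magnitude, as in the paper

s = jsma_saliency(x, target)
top3 = np.argsort(s)[-3:]                # the three most salient features
x_adv = x.copy()
x_adv[top3] = np.clip(x_adv[top3] + eps, 0.0, 1.0)
```

Perturbing only the three highest-saliency features mirrors the paper's finding that a small feature subset and a 0.1 magnitude already flip a meaningful fraction of samples; a real attack would additionally check that the perturbed flow features remain valid network traffic.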

