Article

Adversarial machine learning for network intrusion detection: A comparative study

Journal

COMPUTER NETWORKS
Volume 214

Publisher

ELSEVIER
DOI: 10.1016/j.comnet.2022.109073

Keywords

Network intrusion detection; AI adversarial robustness; Adversarial attack; Defense technique; NSL-KDD; UNSW-NB15


Intrusion detection is a key topic in cybersecurity, and machine learning is widely used in this field. This paper investigates the robustness of shallow machine learning-based intrusion detection systems against adversarial attacks and evaluates the performance of different classifiers under different attacks.

Intrusion detection is a key topic in cybersecurity. It aims to protect computer systems and networks from intruders and malicious attacks. Traditional intrusion detection systems (IDS) follow a signature-based approach, but over the last two decades various machine learning (ML) techniques have been proposed and proven effective. However, ML faces several challenges, one of the most interesting being the emergence of adversarial attacks designed to fool classifiers. Addressing this vulnerability is critical to prevent cybercriminals from exploiting ML flaws to bypass IDS and damage data and systems.

Some research papers have studied the vulnerability of ML-based IDS to adversarial attacks; however, most of them focus on deep learning-based classifiers. Unlike them, this paper pays more attention to shallow classifiers, which are still widely used in ML-based IDS because of their maturity and simplicity of implementation. In more detail, we evaluate the robustness of seven shallow ML-based NIDS, namely AdaBoost, Bagging, Gradient Boosting (GB), Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and Support Vector Classifier (SVC), as well as a deep learning network, against several adversarial attacks widely used in the state of the art (SOA). In addition, we apply a Gaussian data augmentation defense technique and measure its contribution to improving classifier robustness [1]. We conduct extensive experiments in different scenarios using the NSL-KDD benchmark dataset [2] and the UNSW-NB15 dataset [3]. The results show that attacks do not have the same impact on all classifiers, that the robustness of a classifier depends on the attack, and that a trade-off between performance and robustness must be considered depending on the network intrusion detection scenario.
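The Gaussian data augmentation defense mentioned in the abstract can be illustrated with a minimal, hedged sketch: the training set is extended with noisy copies of its samples so the classifier learns a smoother decision boundary. Everything below is an assumption for illustration only: a synthetic dataset stands in for NSL-KDD/UNSW-NB15, a Random Forest stands in for the seven evaluated classifiers, and simple Gaussian perturbation of the test set stands in for the gradient- and query-based adversarial attacks used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for preprocessed NIDS features (assumption: the
# paper's experiments use the real NSL-KDD and UNSW-NB15 datasets).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def gaussian_augment(X, y, sigma=0.2, ratio=1.0, seed=0):
    """Gaussian data augmentation: append noisy copies of training samples."""
    rng = np.random.default_rng(seed)
    n = int(len(X) * ratio)
    idx = rng.choice(len(X), size=n, replace=False)
    X_noisy = X[idx] + rng.normal(0.0, sigma, size=X[idx].shape)
    return np.vstack([X, X_noisy]), np.concatenate([y, y[idx]])

X_aug, y_aug = gaussian_augment(X_tr, y_tr)

plain = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
defended = RandomForestClassifier(random_state=0).fit(X_aug, y_aug)

# Crude robustness probe: accuracy on noise-perturbed test inputs.
# (A real evaluation would use SOA adversarial attacks, e.g. via the
# Adversarial Robustness Toolbox, not plain random noise.)
rng = np.random.default_rng(1)
X_adv = X_te + rng.normal(0.0, 0.3, size=X_te.shape)
print(f"plain accuracy under perturbation:    {plain.score(X_adv, y_te):.3f}")
print(f"defended accuracy under perturbation: {defended.score(X_adv, y_te):.3f}")
```

The `ratio` parameter controls how many noisy copies are added and `sigma` the noise magnitude; both trade clean-data performance against robustness, mirroring the performance/robustness trade-off the paper reports.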


