Article

AIDTF: Adversarial training framework for network intrusion detection

Journal

COMPUTERS & SECURITY
Volume 128

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2023.103141

Keywords

Cyberspace security; Intrusion detection; Adversarial training; Machine learning; Generative adversarial networks

This paper proposes an Adversarial Intrusion Detection Training Framework (AIDTF) to improve the accuracy and robustness of IDSs. It introduces an adversarial training method that yields an IDS with high accuracy on both known test sets and unknown disguised attack samples.
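The attacker-versus-defender training described here is GAN-like: an attacker model learns to produce samples the defender misclassifies as real, while the defender learns to tell real samples from generated ones. A minimal one-dimensional NumPy sketch of such a loop follows; the linear attacker, logistic defender, toy data distribution, and all hyperparameters are illustrative assumptions, not AIDTF's actual architecture (the paper's a-model and d-model are multilayer perceptrons):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # Clip for numerical safety before exponentiating.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

# Stand-in for "normal traffic" samples: 1-D values from N(3, 0.5).
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Attacker (generator): linear map g(z) = wg*z + bg of noise z ~ N(0, 1).
# Defender (discriminator): logistic unit d(x) = sigmoid(wd*x + bd).
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = wg * z + bg                      # attacker's generated samples

    # Defender update: push d(real) -> 1 and d(fake) -> 0.
    dr, df = sigmoid(wd * xr + bd), sigmoid(wd * xf + bd)
    wd -= lr * np.mean(-(1.0 - dr) * xr + df * xf)
    bd -= lr * np.mean(-(1.0 - dr) + df)

    # Attacker update (non-saturating loss): push d(fake) -> 1.
    df = sigmoid(wd * xf + bd)
    g_out = -(1.0 - df) * wd              # dL_attacker / d(xf)
    wg -= lr * np.mean(g_out * z)
    bg -= lr * np.mean(g_out)

# After training, the generated distribution drifts toward the real one,
# so its mean approaches 3 (the mean of the "normal traffic" samples).
fake = wg * rng.normal(0.0, 1.0, 10_000) + bg
print(float(np.mean(fake)))
```

In AIDTF the samples produced by this confrontation are then handed to a separate trainer (the t-module) to train the actual IDS, rather than the defender itself being deployed.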
Network Intrusion Detection Systems (IDSs) have achieved high accuracy by widely applying Machine Learning (ML) models. However, most current ML-based IDSs cannot cope with targeted attacks from adversaries because they are commonly trained and tested on fixed datasets. In this paper, we propose an Adversarial Intrusion Detection Training Framework (AIDTF) to improve the robustness of IDSs, which consists of an attacker model (a-model), a defender model (d-model), and a black-box trainer (t-module). Both the a-model and the d-model are multilayer perceptrons, and the t-module trains IDSs. AIDTF improves the accuracy of IDSs through an adversarial training method that differs from traditional training methods. Taking the distribution of normal samples in the dataset as the distribution the a-model and d-model need to learn, the a-model aims to generate samples that deceive the d-model, while the d-model aims to determine whether input samples are real, so the two models stand in an adversarial relationship. The t-module can train different types of IDSs on the samples generated from the confrontation between the a-model and the d-model; we call such an IDS an Adversarial Training Intrusion Detection System (ATIDS). The main contribution of this paper is a training method that yields an IDS with high accuracy not only on known test sets but also on unknown disguised attack samples. We tested different types of ATIDSs against the current mainstream attack methods, which include the Fast Gradient Method (FGM), Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Jacobian Saliency Map Algorithm (JSMA). The experimental results show that AIDTF outperforms other adversarial training methods, achieving not only higher accuracy on the test set but also up to a 99% recognition rate on attack samples. © 2023 Elsevier Ltd. All rights reserved.
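Of the attacks used in the evaluation, FGSM is the simplest: it perturbs an input one step in the direction of the sign of the loss gradient with respect to that input. A minimal NumPy sketch against a toy logistic-regression classifier follows; the random weights and synthetic features are stand-ins for a trained ML-based IDS, not the paper's models or data:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy binary "traffic" classifier: logistic regression on 5 synthetic features.
w = rng.normal(size=5)
b = 0.0

def predict(x):
    # Probability that x belongs to class 1 (e.g. "attack traffic").
    return sigmoid(x @ w + b)

def fgsm(x, y, eps):
    # For binary cross-entropy with a logistic model, dL/dx = (p - y) * w,
    # so the FGSM perturbation is eps * sign((p - y) * w).
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=5)
y = 1.0                       # true label: class 1
x_adv = fgsm(x, y, eps=0.3)

# The adversarial copy scores strictly lower for the true class,
# while each feature moves by at most eps.
print(float(predict(x)), float(predict(x_adv)))
```

PGD is essentially this step applied iteratively with a projection back into the eps-ball, which is why the two attacks are usually evaluated together.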
