Journal: Computers & Security
Volume 135
Publisher: Elsevier Advanced Technology
DOI: 10.1016/j.cose.2023.103483
Keywords
Intrusion detection; Adversarial attack; Autoencoder; InSDN; CICIDS 2017; Adversarial robustness
This study proposes a method that uses generative adversarial networks to generate adversarial attack data, and designs a robust IDS model to enhance resistance against such attacks. By training machine learning classifiers on multiple feature sets and combining them with autoencoders, the proposed model achieves higher accuracy and F1-score under adversarial attack.
Machine learning-based intrusion detection systems (IDSs) are essential security functions in conventional and software-defined networks alike. Their success, and the security of the networks they protect, depends on the accuracy of their classification results. Adversarial attacks against machine learning, which seriously threaten any IDS, are still not countered effectively. In this study, we first develop a method that employs generative adversarial networks to produce adversarial attack data. We then propose RAIDS, a robust IDS model designed to be resilient against adversarial attacks. In RAIDS, an autoencoder's reconstruction error is used as a prediction value for a classifier. In addition, to prevent an attacker from guessing the feature set, multiple feature sets are created and used to train baseline machine learning classifiers. A LightGBM classifier is then trained on the outputs of two autoencoders and an ensemble of the baseline classifiers. The results show that the proposed model increases overall accuracy by at least 13.2% and F1-score by more than 110% under adversarial attack, without the need for adversarial training.
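The core idea of using an autoencoder's reconstruction error as a prediction signal can be illustrated with a minimal sketch. The paper's actual autoencoders, feature sets, and LightGBM meta-classifier are not specified here; as a stand-in, a linear autoencoder (PCA via SVD, with tied encoder/decoder weights) is fitted on synthetic "benign" traffic, and the reconstruction error is used to separate benign samples from perturbed ones. All data, dimensions, and noise scales below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: synthetic "benign" traffic features lying near a 2-D subspace
# of an 8-D feature space, plus small measurement noise.
basis = rng.normal(size=(2, 8))
benign = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 8))

# "Train" the linear autoencoder: centre the data and keep the top-2
# principal components (these act as tied encoder/decoder weights).
mean = benign.mean(axis=0)
_, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    """Squared error after encoding onto the learned subspace and decoding."""
    centred = x - mean
    recon = centred @ components.T @ components
    return np.sum((centred - recon) ** 2, axis=-1)

# Benign-like samples reconstruct well; samples pushed off the benign
# manifold (here, a crude stand-in for adversarial perturbation) do not,
# so the error is usable as an input feature for a downstream classifier.
benign_err = reconstruction_error(benign)
attack = benign + rng.normal(scale=1.0, size=benign.shape)
attack_err = reconstruction_error(attack)

print(benign_err.mean() < attack_err.mean())  # → True
```

In RAIDS, such error values from two autoencoders are not thresholded directly but fed, together with the baseline classifiers' outputs, into a LightGBM classifier that makes the final decision.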