Journal
ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING
Volume -, Issue -, Pages -
Publisher
SPRINGER HEIDELBERG
DOI: 10.1007/s13369-023-08315-5
Keywords
Intrusion detection system (IDS); Adversarial attacks; Denoising auto-encoder; Machine learning; Intrusion datasets
With the increase in cyber security attacks, organizations tend to deploy intrusion detection systems (IDS) based on machine learning. Over the years, machine-learning-based IDS have proven effective in protecting organizations against attacks. However, beyond the black-box nature of machine learning models, adversaries can craft inputs that corrupt the classification model's decisions. Using machine learning in critical domains such as medicine and intrusion detection can have disastrous consequences for organizations if the models are vulnerable to adversarial attacks. This paper proposes a new defense approach based on a denoising auto-encoder (DAE) to protect IDS from adversarial attacks. Two datasets were used to verify the efficacy of the proposed defense mechanism in mitigating adversarial attacks. The experimental results show that the proposed defense mechanism is effective against four white-box attacks and one black-box attack. On the first dataset, the system's accuracy under adversarial attack rises from around 68% to 90%, and reaches 97% under normal conditions. Similarly, on the second dataset, the models' accuracy increases from 64% to 85% under normal conditions and adversarial attacks.
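The core idea of a DAE-based defense is to train an auto-encoder to map corrupted inputs back to their clean versions, then place it in front of the classifier so that small adversarial perturbations are washed out before classification. The sketch below illustrates this pattern only; it is not the paper's implementation. The toy data, network sizes, noise level, and the use of Gaussian noise as a stand-in for an adversarial perturbation are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for traffic features (NOT the paper's datasets):
# clean samples lying on a 3-dimensional manifold embedded in 8 dimensions.
n_samples, n_latent, n_in, n_hid = 200, 3, 8, 4
A = rng.normal(0.0, 0.3, (n_latent, n_in))
X = rng.normal(size=(n_samples, n_latent)) @ A

noise_std = 0.3  # corruption level used while training the DAE

# One-hidden-layer denoising auto-encoder: tanh encoder, linear decoder.
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def forward(Xn):
    H = np.tanh(Xn @ W1 + b1)      # encoder
    return H, H @ W2 + b2          # decoder: reconstruction of the input

lr = 0.05
for epoch in range(4000):
    Xn = X + rng.normal(0, noise_std, X.shape)  # corrupt the input
    H, Xhat = forward(Xn)
    err = (Xhat - X) / n_samples                # loss targets the CLEAN input
    # Manual backprop through the two layers (squared-error loss).
    gW2, gb2 = H.T @ err, err.sum(0)
    dH = (err @ W2.T) * (1.0 - H**2)            # tanh derivative
    gW1, gb1 = Xn.T @ dH, dH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Defense at test time: pass (possibly adversarial) inputs through the
# trained DAE before they reach the IDS classifier.  Gaussian noise stands
# in here for a real white-box or black-box perturbation.
X_adv = X + rng.normal(0, noise_std, X.shape)
_, X_denoised = forward(X_adv)

mse_before = float(np.mean((X_adv - X) ** 2))
mse_after = float(np.mean((X_denoised - X) ** 2))
print(f"MSE vs clean input before DAE: {mse_before:.4f}, after DAE: {mse_after:.4f}")
```

After training, the reconstruction should sit closer to the clean features than the perturbed input does, which is the property the defense relies on: the downstream classifier sees inputs pulled back toward the clean data manifold.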