Article

Constrained optimization based adversarial example generation for transfer attacks in network intrusion detection systems

Journal

OPTIMIZATION LETTERS

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s11590-023-02007-7

Keywords

Cyber security; Network intrusion detection; Adversarial machine learning; Constrained optimization; Meta-heuristic


Deep learning achieves high intrusion detection rates without feature engineering. However, existing adversarial machine learning methods do not work well in the constrained cyber domain because they produce non-functional network packets. This research develops a meta-heuristic based generative model that generates adversarial examples by maximizing the classification loss of packet payloads, and shows that network intrusion detection system (NIDS) classifiers are vulnerable to such transfer attacks.
Deep learning has enabled network intrusion detection rates as high as 99.9% for malicious network packets without requiring feature engineering. Adversarial machine learning methods have been used to evade classifiers in the computer vision domain; however, existing methods do not translate well into the constrained cyber domain as they tend to produce non-functional network packets. This research views the payload of network packets as code with many functional units. A meta-heuristic based generative model is developed to maximize the classification loss of packet payloads with respect to a surrogate model by repeatedly substituting units of code with functionally equivalent counterparts. The perturbed packets are then transferred and tested against three test network intrusion detection system classifiers, with evasion rates that vary by classifier and malicious packet type. When the test classifier shares the surrogate model's architecture, near-optimal adversarial examples penetrate the test model for 69% of packets, whereas the raw examples succeed for only 5% of packets. This confirms the hypothesis that NIDS classifiers are vulnerable to adversarial attacks, motivating research in robust learning for cyber.
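The substitution-based attack the abstract describes can be sketched as a simple greedy search over functionally equivalent payload units. Everything below is an illustrative assumption, not the paper's implementation: the equivalence table, the toy `surrogate_loss` (a stand-in for a trained surrogate classifier's loss), and the function names are all hypothetical.

```python
import random

# Hypothetical table of functionally equivalent payload substitutions.
# The paper derives its equivalence classes from treating packet
# payloads as code; these entries are illustrative only.
EQUIVALENTS = {
    b"GET ": [b"get ", b"GeT "],
    b"/bin/sh": [b"/bin//sh", b"/bin/./sh"],
}

def surrogate_loss(payload: bytes) -> float:
    """Toy stand-in for the surrogate model's classification loss:
    rewards payloads that no longer contain known malicious byte
    patterns. A real attack would query a trained neural classifier."""
    return sum(tok not in payload for tok in EQUIVALENTS) / len(EQUIVALENTS)

def perturb(payload: bytes, rng: random.Random) -> bytes:
    """Substitute one unit of code with a functional equivalent."""
    present = [t for t in EQUIVALENTS if t in payload]
    if not present:
        return payload
    tok = rng.choice(present)
    return payload.replace(tok, rng.choice(EQUIVALENTS[tok]), 1)

def attack(payload: bytes, iters: int = 200, seed: int = 0) -> bytes:
    """Greedy meta-heuristic: keep a substitution only if it increases
    the surrogate loss, so the packet stays functionally equivalent
    while drifting away from the classifier's decision boundary."""
    rng = random.Random(seed)
    best, best_loss = payload, surrogate_loss(payload)
    for _ in range(iters):
        cand = perturb(best, rng)
        loss = surrogate_loss(cand)
        if loss > best_loss:
            best, best_loss = cand, loss
    return best
```

In this sketch the perturbed payload would then be transferred against the independent test classifiers; the greedy loop could equally be replaced by any other meta-heuristic (e.g., simulated annealing or a genetic search) over the same substitution space.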
