Article

The Cross-Evaluation of Machine Learning-Based Network Intrusion Detection Systems

Journal

IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT
Volume 19, Issue 4, Pages 5152-5169

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TNSM.2022.3157344

Keywords

Network intrusion detection; Training; Reliability; Machine learning; Labeling; Proposals; Monitoring; intrusion detection systems; network security; evaluation

Funding

  1. European Commission [832735]

Abstract

Enhancing Network Intrusion Detection Systems (NIDS) with supervised Machine Learning (ML) is difficult due to the need for labeled data. We propose using existing labeled data for cross-evaluations of ML-NIDS to discover unknown qualities. We introduce the first cross-evaluation model and framework, demonstrating the potential and risks of cross-evaluations.

Enhancing Network Intrusion Detection Systems (NIDS) with supervised Machine Learning (ML) is difficult. ML-NIDS must be trained and evaluated, operations that require data in which benign and malicious samples are clearly labeled. Such labels demand costly expert knowledge, resulting in a lack of real deployments and in papers that keep relying on the same outdated data. The situation has improved recently, as some efforts have disclosed their labeled datasets. However, most past works used such datasets merely as yet another testbed, overlooking the added potential provided by this availability.

In contrast, we promote using such existing labeled data to cross-evaluate ML-NIDS. This approach has received only limited attention and, due to its complexity, requires a dedicated treatment. We hence propose the first cross-evaluation model. Our model highlights the broader range of realistic use cases that can be assessed via cross-evaluations, allowing the discovery of still-unknown qualities of state-of-the-art ML-NIDS. For instance, their detection surface can be extended at no additional labeling cost. However, conducting such cross-evaluations is challenging. We therefore propose XeNIDS, the first framework for reliable cross-evaluations based on Network Flows. By applying XeNIDS to six well-known datasets, we demonstrate the concealed potential, but also the risks, of cross-evaluations of ML-NIDS.
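The core idea of the abstract, training an ML-NIDS on one labeled dataset and evaluating it on another, can be sketched as building a cross-evaluation matrix. The snippet below is a minimal, hypothetical illustration using synthetic data and a scikit-learn classifier; the actual XeNIDS framework operates on real Network Flow datasets and adds reliability safeguards not shown here.

```python
# Hypothetical sketch of an ML-NIDS cross-evaluation matrix.
# The synthetic_flows() generator and the two toy "datasets" are
# stand-ins for real labeled NetFlow datasets, not part of XeNIDS.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def synthetic_flows(shift, n=400, d=10):
    """Stand-in for a labeled flow dataset: features plus benign(0)/malicious(1) labels."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, d))
    y = rng.integers(0, 2, size=n)
    X[y == 1] += 1.5  # malicious flows are offset in feature space
    return X, y

# Two datasets with slightly different feature distributions,
# mimicking captures from two different networks.
datasets = {"A": synthetic_flows(0.0), "B": synthetic_flows(0.5)}

# Cross-evaluation: train on each dataset, evaluate on every dataset
# (including itself), yielding a train-by-test performance matrix.
names = list(datasets)
matrix = np.zeros((len(names), len(names)))
for i, train_name in enumerate(names):
    X_tr, y_tr = datasets[train_name]
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    for j, test_name in enumerate(names):
        X_te, y_te = datasets[test_name]
        matrix[i, j] = f1_score(y_te, clf.predict(X_te))

print(matrix)  # row = training dataset, column = evaluation dataset
```

Off-diagonal entries show how well a detector trained on one network's labeled flows transfers to another, which is exactly the kind of "unknown quality" a cross-evaluation can reveal without any additional labeling effort.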
