Article

Evaluating Standard Feature Sets Towards Increased Generalisability and Explainability of ML-Based Network Intrusion Detection

Journal

Big Data Research
Volume 30, Article 100359

Publisher

Elsevier
DOI: 10.1016/j.bdr.2022.100359

Keywords

CICFlowMeter; Explainable; Machine learning; NetFlow; Network intrusion detection system; SHAP


Machine Learning (ML)-based network intrusion detection systems (NIDSs) bring many benefits for enhancing the cybersecurity posture of an organisation. Many such systems have been designed and developed in the research community, often achieving close-to-perfect detection rates when evaluated on synthetic datasets. However, ongoing challenges remain in the development and evaluation of ML-based NIDSs: the limited ability to comprehensively evaluate ML models, and a lack of understanding of their internal operations. This paper addresses these challenges by evaluating and explaining the generalisability of a common feature set across different network environments and attack scenarios. Two feature sets (NetFlow and CICFlowMeter) are evaluated in terms of detection accuracy across three key datasets: CSE-CIC-IDS2018, BoT-IoT, and ToN-IoT. The results show the superiority of the NetFlow feature set in enhancing the ML models' detection accuracy for various network attacks. In addition, due to the complexity of the learning models, SHapley Additive exPlanations (SHAP), an explainable AI methodology, is adopted to explain and interpret the classification decisions of the ML models. The Shapley values of the two feature sets are analysed across multiple datasets to determine the influence contributed by each feature towards the final ML prediction. (c) 2022 Elsevier Inc. All rights reserved.
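As a rough illustration of the workflow the abstract describes, the sketch below trains a classifier on a flow-based feature set and then computes Shapley values with the SHAP library. This is a minimal sketch, not the authors' pipeline: the CSV file name, the "Label" column, and the choice of a random forest are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code): train a classifier on a
# NetFlow-style feature set, report detection accuracy, and explain its
# decisions with SHAP. File name, column names, and model are assumptions.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical flow-record CSV; "Label" marks attack (1) vs. benign (0) flows.
df = pd.read_csv("netflow_features.csv")
X = df.drop(columns=["Label"])
y = df["Label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Detection accuracy: {model.score(X_test, y_test):.4f}")

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# SHAP's return shape varies by version for multi-class models;
# keep the attack-class attributions either way.
if isinstance(shap_values, list):           # older SHAP: one array per class
    shap_values = shap_values[1]
elif getattr(shap_values, "ndim", 2) == 3:  # newer SHAP: (samples, features, classes)
    shap_values = shap_values[..., 1]

# Rank features by mean |SHAP value|, i.e. their average influence
# on the model's predictions, as analysed in the paper.
shap.summary_plot(shap_values, X_test, plot_type="bar")
```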
