Journal
DISCOVERY SCIENCE (DS 2021)
Volume: 12986
Pages: 385-400
Publisher: SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-030-88942-5_30
Keywords
Cyber-security; Network intrusion detection; Deep learning; Explainability; Grad-CAM
Funding
- MUR [ARS01 01116]
- project Modelli e tecniche di data science per la analisi di dati strutturati - University of Bari Aldo Moro
This paper proposes a method that makes the visual explanations of deep learning-based network intrusion detection models more transparent and accurate. Its effectiveness is demonstrated with a CNN trained on a 2D representation of historical network traffic data.
As network cyber attacks continue to evolve, traditional intrusion detection systems are no longer able to detect new attacks with unexpected patterns. Deep learning is currently addressing this problem by enabling unprecedented breakthroughs to properly detect unexpected network cyber attacks. However, the lack of decomposability of deep neural networks into intuitive and understandable components makes deep learning decisions difficult to interpret. In this paper, we propose a method for leveraging the visual explanations of deep learning-based intrusion detection models by making them more transparent and accurate. In particular, we consider a CNN trained on a 2D representation of historical network traffic data to distinguish between attack and normal flows. Then, we use the Grad-CAM method to produce coarse localization maps that highlight the most important regions of the traffic data representation to predict the cyber attack. Since decisions made on samples belonging to the same class are expected to be explained with similar localization maps, we base the final classification of a new network flow on the class of the nearest-neighbour historical localization map. Experiments with various benchmark datasets demonstrate the effectiveness of the proposed method compared to several state-of-the-art methods.
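The pipeline sketched in the abstract has two key steps after CNN training: computing a Grad-CAM localization map for each flow, and classifying a new flow by the class of the nearest-neighbour historical map. A minimal numpy sketch of those two steps is shown below. The function names, array shapes, and the Euclidean distance metric are illustrative assumptions, not the authors' exact implementation; in practice the feature maps and gradients would come from the last convolutional layer of the trained CNN.

```python
import numpy as np

def grad_cam_map(feature_maps, gradients):
    """Coarse Grad-CAM localization map.

    feature_maps, gradients: arrays of shape (K, H, W), where gradients
    holds d(class score)/d(feature map). Each map is weighted by the
    global-average-pooled gradient, summed over channels, then ReLU'd.
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k, shape (K,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A^k
    return np.maximum(cam, 0.0)                        # ReLU keeps positive evidence

def nearest_neighbour_class(query_map, historical_maps, labels):
    """Assign the class of the historical localization map closest
    (Euclidean distance, an assumed metric) to the query map."""
    dists = [np.linalg.norm(query_map - m) for m in historical_maps]
    return labels[int(np.argmin(dists))]

# Toy usage with synthetic maps (2 channels, 2x2 spatial grid):
fmaps = np.stack([np.full((2, 2), 2.0), np.full((2, 2), 1.0)])
grads = np.stack([np.full((2, 2), 1.0), np.full((2, 2), -1.0)])
cam = grad_cam_map(fmaps, grads)                       # all-ones map here
pred = nearest_neighbour_class(
    np.full((2, 2), 0.9),
    [np.zeros((2, 2)), np.ones((2, 2))],
    ["normal", "attack"],
)
```

The ReLU reflects Grad-CAM's design choice of keeping only regions that positively influence the predicted class, which is what makes maps from same-class samples comparable for the nearest-neighbour step.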