Journal
HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES
Volume 11, Issue -, Pages -
Publisher: KOREA INFORMATION PROCESSING SOC
DOI: 10.22967/HCIS.2021.11.035
Keywords
Adversarial Attacks; Explainable AI; Intrusion Detection Systems; Machine Learning
Funding
- Energy Cloud R&D Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT [2019M3F2A1073386]
- Funding source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)
This paper proposes an adversarial attack detection framework in machine learning-based intrusion detection systems, which detects adversarial attacks by explaining normal data records.
With the tremendous increase in networking devices connected to the Internet, network security has become an important issue. Intrusion detection systems (IDSs) are one of the key components of network security. Among the several methods for implementing an IDS, one is machine learning. Machine learning-based IDSs have improved substantially in performance and are being deployed in real systems. However, recent studies have shown that machine learning classification models are vulnerable to adversarial attacks. In this paper, we propose an adversarial attack detection framework for machine learning-based IDSs that uses explainable AI. The proposed framework consists of two phases: initialization and detection. In the initialization phase, we train an IDS based on a support vector machine (SVM) classification model and extract explanations of the Normal data records in the dataset using LIME (local interpretable model-agnostic explanations). Based on the resulting explanations, the detection phase analyzes the trained IDS's classifications by their explanations to detect adversarial attacks. We evaluate the proposed method on the NSL-KDD dataset.
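The two-phase idea in the abstract can be sketched in miniature. The sketch below is a hypothetical illustration, not the paper's implementation: a toy linear scoring function stands in for the SVM-based IDS, per-feature attributions (weight times feature value) stand in for LIME explanations, and the weights, records, and threshold are all made up for the example. The initialization phase builds a mean explanation profile from known-Normal records; the detection phase flags records that the IDS classifies as Normal but whose explanation deviates strongly from that profile.

```python
import math

# Hypothetical linear scoring model standing in for the paper's SVM-based IDS;
# the weights, bias, and records below are illustrative, not from NSL-KDD.
W = [1.5, -2.0, 0.7]
B = -0.2

def score(x):
    return sum(w * xi for w, xi in zip(W, x)) + B

def classify(x):
    # Binary IDS decision: positive score means Attack, otherwise Normal.
    return "Attack" if score(x) > 0 else "Normal"

def explain(x):
    # Per-feature attribution w_i * x_i: a crude stand-in for the LIME
    # explanation the paper extracts for each data record.
    return [w * xi for w, xi in zip(W, x)]

# --- Initialization phase: profile explanations of known-Normal records ---
normal_records = [
    [0.1, 0.9, 0.2],
    [0.2, 1.0, 0.1],
    [0.0, 0.8, 0.3],
]
profiles = [explain(x) for x in normal_records]
mean_profile = [sum(col) / len(col) for col in zip(*profiles)]

def distance(e):
    # Euclidean distance between an explanation and the Normal profile.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(e, mean_profile)))

THRESHOLD = 1.0  # assumed value; in practice tuned on validation data

# --- Detection phase: flag Normal-classified records with anomalous explanations ---
def detect(x):
    if classify(x) != "Normal":
        return "attack"        # ordinary intrusion, caught by the IDS itself
    if distance(explain(x)) > THRESHOLD:
        return "adversarial"   # classified Normal, but explanation is anomalous
    return "normal"

print(detect([0.15, 0.9, 0.2]))  # explanation close to the Normal profile
print(detect([1.4, 2.0, 0.1]))   # perturbed record that slips past the classifier
```

The key design point mirrored from the abstract is that detection does not inspect the classifier's label alone: an adversarial example is crafted to receive the Normal label, so the framework instead checks whether the *explanation* of that decision is consistent with the explanations of genuine Normal records.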