Article

SecFedNIDS: Robust defense for poisoning attack against federated learning-based network intrusion detection system

Publisher

ELSEVIER
DOI: 10.1016/j.future.2022.04.010

Keywords

Network intrusion detection; Federated learning; Poisoning attacks; Defensive mechanism; Poisoned model detection; Poisoned data detection

Funding

  1. National Natural Science Foundation of China [61971057]

This study introduces a secure FL-based NIDS, called SecFedNIDS, that demonstrates strong robustness against poisoning attacks. By combining model-level and data-level defense mechanisms, SecFedNIDS significantly improves the accuracy of the intrusion detection model under attack.

Federated learning-based network intrusion detection systems (FL-based NIDS) have demonstrated tremendous potential in protecting the security of IoT networks, enabling an effective intrusion detection model to be learned collaboratively from massive traffic data without leaking data privacy. However, FL-based NIDS is inherently vulnerable to poisoning attacks launched by malicious clients, which aim to corrupt the intrusion detection model and impair its protection capability by injecting poisoned traffic data into the local training datasets. We build a secure FL-based NIDS, named SecFedNIDS, that is robust against such poisoning attacks. First, we propose a model-level defensive mechanism based on poisoned-model detection: a gradient-based important model parameter selection method provides effective low-dimensional representations of the uploaded local model parameters, and an online unsupervised poisoned-model detection method identifies poisoned models and prevents them from joining the global intrusion detection model. Second, we design a data-level defensive mechanism based on poisoned-data detection: a novel detection method based on class path similarity filters out poisoned traffic data and keeps it from participating in subsequent local training. We adopt layer-wise relevance propagation to extract the class paths of clean traffic data and transmit these class paths to the poisoned clients to help them distinguish poisoned traffic data. Results show that the proposed model-level defense boosts accuracy under poisoning attacks by up to 48% on the UNSW-NB15 dataset and 36% on the CICIDS2018 dataset, and the proposed data-level defense further improves accuracy by up to 13% on CICIDS2018. (c) 2022 Elsevier B.V. All rights reserved.
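The abstract describes the two defenses only at a high level; the Python sketches below illustrate the general ideas under stated assumptions and are not the authors' implementation. For the model-level defense, this sketch assumes that "important" parameters are those with the largest gradient magnitude, projects each client's flattened update onto that subset to obtain a low-dimensional representation, and uses a simple median/MAD distance rule as a stand-in for the paper's online unsupervised poisoned-model detector.

```python
# Minimal sketch of the model-level defense idea (not the SecFedNIDS code).
import numpy as np

def select_important_indices(global_gradient: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k parameters with the largest gradient magnitude (assumed criterion)."""
    return np.argsort(np.abs(global_gradient))[-k:]

def flag_suspicious_updates(reps: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Boolean mask over clients; True marks a suspected poisoned update.

    A median/MAD distance rule stands in for the paper's online
    unsupervised detector (assumption, not the authors' method).
    """
    center = np.median(reps, axis=0)
    dists = np.linalg.norm(reps - center, axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    return np.abs(dists - np.median(dists)) / mad > threshold

# Toy round: 10 clients, 1000-parameter model, 2 poisoned updates.
rng = np.random.default_rng(0)
updates = rng.normal(0.0, 0.01, size=(10, 1000))
updates[:2] += 0.5                              # simulated poisoned updates
global_grad = rng.normal(size=1000)             # stand-in for the global gradient
idx = select_important_indices(global_grad, k=50)
reps = updates[:, idx]                          # low-dimensional representations
suspect = flag_suspicious_updates(reps)
aggregated = updates[~suspect].mean(axis=0)     # aggregate accepted updates only
```

For the data-level defense, the sketch below approximates a "class path" by the top-k most relevant units per layer (the relevances would come from layer-wise relevance propagation) and uses mean per-layer Jaccard overlap as a stand-in for the paper's class path similarity measure; both choices are illustrative assumptions.

```python
# Minimal sketch of the class-path-similarity idea (not the SecFedNIDS code).
import numpy as np

def class_path(layer_relevances: list[np.ndarray], k: int = 10) -> list[set]:
    """Top-k most relevant unit indices per layer (relevances assumed to come from LRP)."""
    return [set(np.argsort(r)[-k:].tolist()) for r in layer_relevances]

def path_similarity(path_a: list[set], path_b: list[set]) -> float:
    """Mean per-layer Jaccard overlap between two class paths (assumed measure)."""
    return float(np.mean([len(a & b) / max(len(a | b), 1)
                          for a, b in zip(path_a, path_b)]))

def looks_poisoned(sample_path: list[set], clean_class_path: list[set],
                   threshold: float = 0.5) -> bool:
    """Filter out a sample whose path diverges from the clean class path."""
    return path_similarity(sample_path, clean_class_path) < threshold

# Toy example with random 'relevance' vectors for a 3-layer model.
rng = np.random.default_rng(1)
clean = class_path([rng.random(64) for _ in range(3)])
sample = class_path([rng.random(64) for _ in range(3)])
print(looks_poisoned(sample, clean))   # random paths overlap little -> flagged
```

In the paper's setting, the clean class paths are extracted at clean clients and transmitted to the poisoned clients, which then compare their local samples' paths against them; the top-k and Jaccard choices here are placeholders for whichever path definition and similarity measure the paper actually uses.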
