Article

Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection

Journal

COMPUTERS & SECURITY
Volume 129

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2023.103205

Keywords

Federated Learning; Intrusion Detection; Poisoning Attack; Backdoor Attack; Local Outlier Factor


The Machine Learning-based Intrusion Detection System (ML-IDS) is popular but raises data privacy issues, so the Federated Learning-based IDS (FL-IDS) was proposed. To defend FL-IDS against poisoning attacks, this paper proposes a two-phase defense mechanism called DPA-FL. Experimental results show that DPA-FL achieves 96.5% accuracy in defending against poisoning attacks.
The Machine Learning-based Intrusion Detection System (ML-IDS) has become popular because it does not require manual rule updates and recognizes attack variants better. However, due to the data privacy issue in ML-IDS, the Federated Learning-based IDS (FL-IDS) was proposed. In each round of federated learning, each participant first trains its local model and sends the model's weights to the global server, which then aggregates the received weights and distributes the aggregated global model back to the participants. An attacker can use poisoning attacks, including label-flipping attacks and backdoor attacks, to directly generate a malicious local model and thereby indirectly pollute the global model. A few existing studies defend against poisoning attacks, but they only address label-flipping attacks in the image domain. Therefore, we propose a two-phase defense mechanism, called Defending Poisoning Attacks in Federated Learning (DPA-FL), applied to intrusion detection. The first phase employs relative differences to quickly compare weights between participants, because the local models of attackers and benign participants differ substantially. The second phase tests the aggregated model on a dataset and tries to find the attackers when its accuracy is low. Experimental results show that DPA-FL can reach 96.5% accuracy in defending against poisoning attacks. Compared with other defense mechanisms, DPA-FL can improve the F1-score by 20~64% under backdoor attacks. Also, DPA-FL can exclude the attackers within twelve rounds when the attackers are few.
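The round structure described in the abstract (clients upload weights, the server aggregates them) and a weight-comparison filter in the spirit of the paper's Local Outlier Factor idea can be sketched roughly as follows. This is an illustrative assumption, not the authors' exact DPA-FL algorithm: `filter_outliers`, its `threshold` parameter, and the mean-distance score (a simplified stand-in for LOF) are all hypothetical.

```python
import numpy as np

def aggregate_fedavg(weights):
    """Server-side aggregation: plain average of client weight vectors
    (FedAvg-style; the paper's exact aggregation rule may differ)."""
    return np.mean(np.asarray(weights), axis=0)

def filter_outliers(weights, threshold=2.0):
    """Score each client's flattened weight vector by its mean distance
    to the other clients' vectors (a simplified stand-in for the Local
    Outlier Factor) and drop clients whose score exceeds
    threshold * median score -- poisoned models tend to sit far from
    the benign cluster, as the first phase of DPA-FL exploits."""
    W = np.asarray(weights)
    # Pairwise Euclidean distances between client weight vectors.
    dists = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
    scores = dists.sum(axis=1) / (len(W) - 1)
    keep = scores <= threshold * np.median(scores)
    return W[keep], keep
```

For example, with four benign clients clustered near one weight vector and one attacker far away, `filter_outliers` flags only the attacker, and `aggregate_fedavg` over the kept clients returns the benign average.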
