Article

Model poisoning attack in differential privacy-based federated learning

Journal

INFORMATION SCIENCES
Volume 630, Pages 158-172

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.02.025

Keywords

Privacy-preserving; Federated learning; Differential privacy; Model poisoning


Abstract

Although federated learning provides privacy protection for individual raw data, studies have shown that the parameters or gradients shared during federated learning may still reveal user privacy. Differential privacy is a promising solution to this problem owing to its low computational overhead. Current differential privacy-based federated learning generally focuses on the trade-off between privacy and model convergence. Although differential privacy obscures sensitive information by adding a controlled amount of noise to confidential data, it opens a new door for model poisoning attacks: attackers can use the noise to escape anomaly detection. In this paper, we propose a novel model poisoning attack called the Model Shuffle Attack (MSA), which shuffles and scales the model parameters in a unique way. If the model is treated as a black box, it behaves like a benign model on the test set. Unlike other model poisoning attacks, a malicious model produced by MSA retains high accuracy on the test set while slowing the convergence of the global model and even causing it to diverge. Extensive experiments show that, under both FedAvg and robust aggregation rules, MSA significantly degrades the performance of the global model while remaining stealthy.
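The shuffle-and-scale idea can be illustrated with a toy sketch. The helper below is hypothetical and is not the paper's actual MSA construction: it simply permutes a client's flattened parameter vector and rescales it. A pure permutation (scale of 1.0) preserves the vector's norm and value distribution, so magnitude-based anomaly checks such as norm clipping see nothing unusual, yet the permuted parameters no longer compute the same function once aggregated.

```python
import random

def shuffle_and_scale(params, scale=1.0, seed=0):
    """Toy shuffle-and-scale poisoning step (illustrative only,
    not the paper's MSA): permute a flat parameter vector, then
    rescale every entry."""
    rng = random.Random(seed)
    poisoned = list(params)
    rng.shuffle(poisoned)              # permute parameter positions
    return [scale * p for p in poisoned]

benign = [0.1, -0.4, 0.7, 0.2]
malicious = shuffle_and_scale(benign, scale=1.0, seed=42)

# With scale=1.0 the poisoned update contains exactly the same
# values as the benign one, just in a different order, so its
# norm and value histogram are unchanged.
print(sorted(malicious) == sorted(benign))  # True
```

In an actual attack setting, the adversary would apply such a transformation to its local update before submission, which is why defenses that only inspect update magnitudes can miss it.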
