Article

Model poisoning attack in differential privacy-based federated learning

Journal

INFORMATION SCIENCES
Volume 630, Pages 158-172

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.02.025

Keywords

Privacy-preserving; Federated learning; Differential privacy; Model poisoning


Abstract
Although federated learning can provide privacy protection for individual raw data, studies have shown that the parameters or gradients shared under federated learning may still reveal user privacy. Differential privacy is a promising solution to this problem due to its small computational overhead. At present, differential privacy-based federated learning generally focuses on the trade-off between privacy and model convergence. Even though differential privacy obscures sensitive information by adding a controlled amount of noise to the confidential data, it opens a new door for model poisoning attacks: attackers can use noise to escape anomaly detection. In this paper, we propose a novel model poisoning attack called the Model Shuffle Attack (MSA), which shuffles and scales the model parameters in a unique way. If we treat the model as a black box, it behaves like a benign model on the test set. Unlike other model poisoning attacks, a malicious model produced by MSA retains high accuracy on the test set while reducing the convergence speed of the global model and even causing it to diverge. Extensive experiments show that under FedAvg and robust aggregation rules, MSA significantly degrades the performance of the global model while guaranteeing stealthiness.
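The abstract does not give the exact MSA construction, but the core idea of shuffling and scaling parameters without changing black-box behavior can be sketched with two well-known functional invariances of ReLU networks: permuting hidden units (with a matching permutation of the next layer's columns) and rescaling across a ReLU layer. The sketch below is our illustration under those assumptions, not the paper's algorithm; such a model predicts identically on the test set, yet its parameter values differ sharply from the benign ones, which disrupts coordinate-wise averaging in FedAvg-style aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2):
    # Two-layer ReLU network: y = W2 @ relu(W1 @ x + b1)
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

# A benign local model
W1 = rng.standard_normal((8, 4))
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((3, 8))

# Shuffle: permute the hidden units consistently on both sides
perm = rng.permutation(8)
W1_s, b1_s, W2_s = W1[perm], b1[perm], W2[:, perm]

# Scale: multiply the first layer by c and divide the second by c
# (ReLU is positively homogeneous, so the function is unchanged)
c = 5.0
W1_s, b1_s, W2_s = c * W1_s, c * b1_s, W2_s / c

x = rng.standard_normal(4)
# Black-box behavior is identical to the benign model...
print(np.allclose(forward(x, W1, b1, W2), forward(x, W1_s, b1_s, W2_s)))
# ...but the parameters themselves are far from the benign ones,
# so averaging them into the global model pulls it off course.
print(np.linalg.norm(W1 - W1_s))
```

Because the shuffled-and-scaled parameters are still a valid, accurate model, distance- or accuracy-based anomaly detection has little to flag, which is consistent with the stealthiness claim in the abstract.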
