Journal
IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
Volume 20, Issue 4, Pages 3328-3340
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2022.3196646
Keywords
Federated learning; property inference attack; privacy protection
Abstract
Federated learning (FL) has emerged as a privacy-preserving learning technique that trains a global model collaboratively while keeping private data local. However, recent advances have demonstrated that FL remains vulnerable to inference attacks, such as reconstruction attacks and membership inference attacks. Among these, the property inference attack, which aims to infer properties of the training data that are unrelated to the learning objective, has received relatively little attention despite causing severe privacy leakage. Existing property inference attack approaches perform poorly either when the global model has converged or under dynamic FL, where participants may join and leave freely. In this paper, we propose a novel poisoning-assisted property inference attack (PAPI-attack) against FL. The key insight is that the periodic model updates carry latent discriminative information reflecting changes in the data distribution, especially the occurrence of the sensitive property. A malicious participant can therefore construct a binary attack model to infer this unintended information. More importantly, we present a property-specific poisoning mechanism that modifies the labels of the adversary's training data to distort the decision boundary of the shared (global) model in FL. Consequently, benign participants are induced to disclose more information about the sensitive property. Extensive experiments on real-world datasets demonstrate that PAPI-attack outperforms state-of-the-art property inference attacks against FL.
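The two components described above can be sketched in a minimal illustration. All function names are placeholders, and the nearest-centroid rule stands in for the paper's learned binary attack model; this is an assumed simplification, not the authors' implementation.

```python
import numpy as np

def poison_labels(labels, has_property, flipped_label=1):
    """Property-specific poisoning (sketch): flip the labels of the
    adversary's samples that carry the sensitive property, distorting
    the shared model's decision boundary around those samples."""
    poisoned = labels.copy()
    poisoned[has_property] = flipped_label
    return poisoned

def fit_attack_model(updates_with, updates_without):
    """Minimal binary attack model over flattened model updates observed
    when the sensitive property is / is not present in local data.
    A nearest-centroid rule substitutes for a trained classifier."""
    return updates_without.mean(axis=0), updates_with.mean(axis=0)

def infer_property(update, centroid_without, centroid_with):
    """Return 1 if an observed update looks closer to updates produced
    with the sensitive property present, else 0."""
    d_with = np.linalg.norm(update - centroid_with)
    d_without = np.linalg.norm(update - centroid_without)
    return int(d_with < d_without)

# Synthetic demo: updates drawn around different centers depending on
# whether the property occurred in the local training data.
rng = np.random.default_rng(0)
u_with = rng.normal(loc=1.0, scale=0.1, size=(50, 8))
u_without = rng.normal(loc=-1.0, scale=0.1, size=(50, 8))
c_without, c_with = fit_attack_model(u_with, u_without)

observed = rng.normal(loc=1.0, scale=0.1, size=8)  # a victim's update
print(infer_property(observed, c_without, c_with))  # prints 1
```

In the actual attack, the feature vectors would be the periodic global-model updates observed across FL rounds, and the poisoning step amplifies how strongly those updates reveal the property.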