4.5 Article

Preserving data privacy in federated learning through large gradient pruning

Journal

COMPUTERS & SECURITY
Volume 125

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2022.103039

Keywords

Privacy in federated learning; Image reconstruction; Gradient pruning


Abstract

In federated learning, the server trains a global model from gradient information shared by multiple clients, thereby protecting client data privacy. However, it has been shown that training data can be reconstructed from the shared gradients (so-called gradient inversion attacks), which can result in serious privacy breaches. Popular privacy-preserving methods include perturbation-based approaches such as differential privacy; however, these can incur high utility loss. In this paper, we reveal that large-magnitude gradients play an important role in the image reconstruction process, and we therefore propose two pruning-based defense mechanisms (SLGP and RLGP) for different model architectures. Because only very few gradients are affected, model utility is maintained. To demonstrate effectiveness, we evaluate how well our mechanisms prevent the reconstruction of input images across various model architectures and datasets using state-of-the-art attack methods. Images reconstructed from gradients processed by our method are unrecognizable, while the original performance of the models is preserved. (c) 2022 Elsevier Ltd. All rights reserved.
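The abstract's core idea is that zeroing out a small number of large-magnitude gradient entries before they are shared suppresses the signal that gradient inversion attacks rely on, at little cost to utility. Below is a minimal NumPy sketch of magnitude-based gradient pruning in that spirit; the function name, the pruning ratio, and the simple zeroing strategy are illustrative assumptions, not the paper's exact SLGP/RLGP procedures.

```python
# Minimal sketch of magnitude-based gradient pruning before sharing in
# federated learning. Illustrative only: the pruning ratio and the
# zero-out strategy are assumptions, not the paper's SLGP/RLGP methods.
import numpy as np

def prune_large_gradients(gradient: np.ndarray, prune_ratio: float = 0.01) -> np.ndarray:
    """Zero out the largest-magnitude entries of a gradient tensor.

    Only a small fraction of entries is removed, so model utility is
    largely preserved while the information most useful to gradient
    inversion attacks is suppressed.
    """
    flat = gradient.ravel().copy()
    k = max(1, int(np.ceil(prune_ratio * flat.size)))
    # Indices of the k entries with the largest absolute value.
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    flat[top_idx] = 0.0
    return flat.reshape(gradient.shape)

# Example: a client prunes each layer's gradient before uploading it.
rng = np.random.default_rng(0)
client_gradients = {
    "conv1.weight": rng.normal(size=(16, 3, 3, 3)),
    "fc.weight": rng.normal(size=(10, 128)),
}
shared = {name: prune_large_gradients(g, prune_ratio=0.01)
          for name, g in client_gradients.items()}
```

In a federated setting, each client would apply such a step to its per-layer gradients before uploading them, leaving the vast majority of gradient entries untouched.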

