Article

Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning

Journal

IEEE/ACM Transactions on Networking

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TNET.2023.3317870

Keywords

Federated learning; data privacy; gradient leakage attack; differential privacy

Abstract

This paper proposes an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that provides satisfactory privacy protection against gradient leakage attacks. It introduces a leakage risk-aware privacy decomposition mechanism and an adaptive privacy-preserving local training mechanism to improve model accuracy and convergence speed.
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate such privacy issues, their performance is unsatisfactory because they do not account for the fact that GLAs incur heterogeneous risks of privacy leakage (RoPL) across gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that achieves satisfactory privacy protection against GLAs while maintaining good performance in terms of model accuracy and convergence speed. Specifically, a leakage risk-aware privacy decomposition mechanism provides adaptive privacy protection across communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design a round-level and a client-level RoPL quantification method to measure the risk of a GLA breaking privacy from gradients in different communication rounds and clients, respectively, using only the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
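The abstract describes two mechanisms: allocating the privacy budget in proportion to a quantified risk of privacy leakage (RoPL), and local training with adaptive gradient clipping plus decaying Gaussian noise. The paper's exact RoPL quantification and noise calibration are not reproduced here; the following is only a minimal Python sketch of the general shape of these ideas, and all names (allocate_budget, ropl_scores, clip_decay, noise_decay, etc.) are hypothetical illustrations rather than the authors' implementation.

```python
# Illustrative sketch (assumptions, not the paper's actual method):
# (1) split a total privacy budget across rounds/clients in proportion to
#     their quantified risk of privacy leakage (RoPL);
# (2) perturb local gradients with adaptive clipping and decaying noise.
import numpy as np


def allocate_budget(total_epsilon, ropl_scores):
    """Give rounds/clients with higher leakage risk a larger share of the
    privacy budget, so they can be protected with proportionally more noise
    elsewhere in the schedule."""
    scores = np.asarray(ropl_scores, dtype=float)
    weights = scores / scores.sum()
    return total_epsilon * weights


def private_local_update(grad, clip_norm, sigma, rng):
    """Clip a gradient to `clip_norm` and add Gaussian noise scaled to the
    clipping threshold (standard Gaussian-mechanism-style perturbation)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Per-round budgets proportional to (made-up) RoPL scores;
    # early rounds are assumed riskier here purely for illustration.
    round_ropl = [0.9, 0.7, 0.4, 0.2]
    round_eps = allocate_budget(total_epsilon=8.0, ropl_scores=round_ropl)

    # Adaptive local training: the clipping bound and noise multiplier
    # decay over local steps, so later updates are perturbed less.
    grad = rng.normal(size=10)
    clip0, sigma0 = 1.0, 1.2
    clip_decay, noise_decay = 0.95, 0.9
    for step in range(5):
        clip_t = clip0 * (clip_decay ** step)
        sigma_t = sigma0 * (noise_decay ** step)
        noisy_grad = private_local_update(grad, clip_t, sigma_t, rng)
```

This sketch only conveys the control flow (risk-proportional budgets, then clipping-and-noising with decaying parameters); the paper's contribution lies in how the RoPL is quantified from information available in general FL settings and how the budget and decay schedules are derived from it.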

