Article

GFL-ALDPA: a gradient compression federated learning framework based on adaptive local differential privacy budget allocation

Journal

Publisher

SPRINGER
DOI: 10.1007/s11042-023-16543

Keywords

Federated learning; Differential privacy; Privacy-preserving; Gradient compression; Privacy budget allocation


Abstract

Federated learning (FL) is a popular distributed machine learning framework that can protect users' private data from being exposed to adversaries. However, related work shows that sensitive private information can still be compromised by analyzing the parameters uploaded by clients. Applying differential privacy to federated learning has become a popular way to achieve strict privacy guarantees in recent years. To reduce the impact of noise, this paper proposes to apply local differential privacy (LDP) to federated learning. We propose a gradient compression federated learning framework based on adaptive local differential privacy budget allocation (GFL-ALDPA). We propose a novel adaptive privacy budget allocation scheme based on communication rounds to reduce the loss of privacy budget and the amount of model noise. By assigning different privacy budgets to different communication rounds during training, it maximizes the limited privacy budget and improves model accuracy. Furthermore, we also propose a gradient compression mechanism based on dimension reduction, which simultaneously reduces the communication cost, the overall noise magnitude, and the loss of the total privacy budget, ensuring accuracy under a given privacy-preserving guarantee. Finally, this paper presents an experimental evaluation on the MNIST dataset. Theoretical analysis and experiments demonstrate that our framework achieves a better trade-off between privacy preservation, communication efficiency, and model accuracy.
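The two ideas in the abstract — splitting a fixed total budget across communication rounds so that later rounds receive larger per-round budgets (and hence less noise) as the model converges, and reducing gradient dimensionality before perturbation — can be sketched roughly as follows. The geometric schedule, the clipping bound, the random-projection compressor, and all function names here are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

def allocate_budgets(eps_total, num_rounds, growth=1.1):
    # Geometric schedule: later rounds get a larger share of the total
    # budget, so added noise shrinks as training converges.
    # (Assumed schedule -- the paper's adaptive allocation may differ.)
    weights = growth ** np.arange(num_rounds)
    return eps_total * weights / weights.sum()

def compress_gradient(grad, k, seed=0):
    # Dimension reduction via random projection (a stand-in for the
    # paper's compression mechanism): project the d-dimensional gradient
    # to k << d dimensions before perturbation, shrinking both the
    # communication cost and the total noise that must be added.
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((k, grad.size)) / np.sqrt(k)
    return proj @ grad

def perturb_gradient(grad, eps_round, clip=1.0):
    # Clip to bound the L1 sensitivity, then add Laplace noise
    # calibrated to the per-round budget (epsilon-LDP).
    norm = np.linalg.norm(grad, ord=1)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    scale = clip / eps_round  # sensitivity / epsilon
    return clipped + np.random.laplace(0.0, scale, size=grad.shape)

# One client update in one round, under the assumptions above:
budgets = allocate_budgets(eps_total=8.0, num_rounds=50)
grad = np.ones(100)
noisy = perturb_gradient(compress_gradient(grad, k=10), budgets[0])
```

With a growing schedule, early exploratory rounds tolerate heavy noise while the final rounds, which determine converged accuracy, are perturbed least; compression further lowers the noise dimension per round.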
