Article

Personalized Federated Learning With Differential Privacy and Convergence Guarantee

Journal

IEEE Transactions on Information Forensics and Security

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIFS.2023.3293417

Keywords

Index Terms: Federated learning; meta-learning; differential privacy; convergence analysis


Personalized federated learning (PFL) generates personalized models for heterogeneous clients and improves convergence with few-shot training. This paper proposes a differential privacy (DP) based PFL (DP-PFL) framework and analyzes its convergence performance. The developed convergence bounds reveal an optimal model size and a tradeoff among communication rounds, convergence performance, and privacy budget.
Personalized federated learning (PFL), as a novel federated learning (FL) paradigm, is capable of generating personalized models for heterogeneous clients. Combined with a meta-learning mechanism, PFL can further improve convergence performance with few-shot training. However, meta-learning-based PFL performs two stages of gradient descent in each local training round and therefore poses a more serious risk of information leakage. In this paper, we propose a differential privacy (DP) based PFL (DP-PFL) framework and analyze its convergence performance. Specifically, we first design a privacy budget allocation scheme for the inner and outer update stages based on Rényi DP composition theory. Then, we develop two convergence bounds for the proposed DP-PFL framework under convex and non-convex loss function assumptions, respectively. Our convergence bounds reveal that 1) there is an optimal size of the DP-PFL model that achieves the best convergence performance for a given privacy level, and 2) there is an optimal tradeoff among the number of communication rounds, convergence performance, and privacy budget. Evaluations on various real-life datasets demonstrate that our theoretical results are consistent with the experimental results. The derived theoretical results can guide the design of DP-PFL algorithms with configurable tradeoffs between convergence performance and privacy level.
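The abstract describes the mechanism but gives no pseudocode, so the following is a minimal NumPy sketch of what a two-stage local update with per-stage Gaussian noise and Rényi-DP accounting could look like. Everything concrete here is an assumption for illustration only, not the authors' algorithm: a first-order approximation of the meta-gradient, a toy linear-regression loss, hypothetical names (sigma_in, sigma_out, clip_norm), arbitrary noise multipliers, a single RDP order, and no subsampling amplification.

```python
# Hedged sketch of DP-PFL-style training: two-stage (inner/outer) local updates,
# Gaussian noise at each stage, and Rényi-DP composition across stages and rounds.
import numpy as np

rng = np.random.default_rng(0)


def rdp_gaussian(sigma, alpha):
    """RDP of the Gaussian mechanism with unit L2 sensitivity: alpha / (2 * sigma^2)."""
    return alpha / (2.0 * sigma ** 2)


def rdp_to_dp(rdp, alpha, delta):
    """Standard conversion from (alpha, rdp)-RDP to (eps, delta)-DP."""
    return rdp + np.log(1.0 / delta) / (alpha - 1.0)


def clip_and_noise(grad, clip_norm, sigma):
    """Clip a gradient to clip_norm and add Gaussian noise scaled to that sensitivity."""
    clipped = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)


def local_meta_update(w_global, data, lr_in, clip_norm, sigma_in, sigma_out):
    """Two-stage local update: noisy inner (personalization) step, then noisy outer gradient."""
    X, y = data
    g_in = 2.0 * X.T @ (X @ w_global - y) / len(y)      # gradient of mean squared error
    w_personal = w_global - lr_in * clip_and_noise(g_in, clip_norm, sigma_in)
    g_out = 2.0 * X.T @ (X @ w_personal - y) / len(y)   # outer gradient at the personalized model
    return clip_and_noise(g_out, clip_norm, sigma_out)


# Toy federation: 5 clients with heterogeneous linear-regression data.
d = 10
clients = [(rng.normal(size=(32, d)), rng.normal(size=32)) for _ in range(5)]
w = np.zeros(d)
sigma_in, sigma_out = 4.0, 2.0           # illustrative per-stage noise multipliers
rounds = 20

for _ in range(rounds):
    updates = [local_meta_update(w, c, 0.05, 1.0, sigma_in, sigma_out) for c in clients]
    w -= 0.5 * np.mean(updates, axis=0)  # server averages the noisy outer gradients

# Compose both stages over all rounds with RDP, then convert to (eps, delta)-DP.
alpha, delta = 8.0, 1e-5
total_rdp = rounds * (rdp_gaussian(sigma_in, alpha) + rdp_gaussian(sigma_out, alpha))
print("approx eps at delta=1e-5:", rdp_to_dp(total_rdp, alpha, delta))
```

Splitting the budget between sigma_in and sigma_out mirrors the abstract's inner/outer allocation idea; in practice one would optimize this split (and the RDP order alpha) rather than fix them by hand as done above.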

