Article

Pain-FL: Personalized Privacy-Preserving Incentive for Federated Learning

Journal

IEEE Journal on Selected Areas in Communications

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSAC.2021.3118354

Keywords

Privacy; Biological system modeling; Servers; Costs; Contracts; Computational modeling; Data models; Federated learning; differential privacy; incentive mechanism; contracts

Funding

  1. National Natural Science Foundation of China [62102337, 62122066, U20A20182, 61872274]
  2. National Key Research and Development Program of China [2020AAA0107705]

Summary

This paper proposes Pain-FL, a personalized privacy-preserving incentive mechanism for FL, which provides customized payments for workers with different privacy preferences to compensate for their privacy leakage costs while ensuring satisfactory convergence performance of FL models. Each worker agrees with the server on a customized contract specifying a privacy-preserving level and a payment, and perturbs the gradients to be uploaded at that level in exchange for the payment. Optimal contracts, derived analytically under both complete and incomplete information models, optimize the convergence performance of FL models while maintaining desirable economic properties. Experimental evaluation demonstrates the practicality and effectiveness of Pain-FL.

Abstract

Federated learning (FL) is a privacy-preserving distributed machine learning framework, which involves training statistical models over a number of mobile users (i.e., workers) while keeping data localized. However, recent works have demonstrated that workers engaged in FL are still susceptible to advanced inference attacks when sharing model updates or gradients, which would discourage them from participating. Most existing incentive mechanisms for FL mainly account for workers' resource costs, while the cost incurred by potential privacy leakage resulting from inference attacks has rarely been incorporated. To address these issues, in this paper, we propose a contract-based personalized privacy-preserving incentive for FL, named Pain-FL, to provide customized payments for workers with different privacy preferences as compensation for privacy leakage cost while ensuring satisfactory convergence performance of FL models. The core idea of Pain-FL is that, in each round of FL, each worker agrees with the server on a customized contract, which specifies a privacy-preserving level (PPL) and the corresponding payment. Then, the worker perturbs her calculated stochastic gradients to be uploaded with that PPL in exchange for that payment. In particular, we analytically derive sets of optimal contracts under both the complete and incomplete information models, which optimize the convergence performance of the finally learned global model while bearing the desired economic properties of budget feasibility, individual rationality, and incentive compatibility. An exhaustive experimental evaluation of Pain-FL is conducted, and the results corroborate its practicability and effectiveness.
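
To make the per-round workflow described above concrete, the following is a minimal Python sketch of how a worker might perturb her stochastic gradient at an agreed privacy-preserving level before uploading it. It uses the standard Gaussian mechanism of differential privacy; the function perturb_gradient, the example contract values, and the noise calibration are illustrative assumptions, not the optimal contracts or perturbation scheme derived in the paper.

import numpy as np

def perturb_gradient(grad, clip_norm, epsilon, delta, rng=None):
    """Clip a stochastic gradient and add Gaussian noise calibrated to a
    personalized (epsilon, delta) privacy-preserving level.

    Illustrative only: the noise scale follows the classical analytic
    Gaussian-mechanism bound
    sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon,
    not the paper's derived calibration.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Bound the gradient's L2 sensitivity by clipping its norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # A stronger PPL (smaller epsilon) means larger noise and hence slower
    # convergence, which is the trade-off the contracts price.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

# Hypothetical per-worker contracts: worker id -> (epsilon, payment).
contracts = {0: (8.0, 1.2), 1: (2.0, 3.5)}
for worker_id, (eps, payment) in contracts.items():
    local_grad = np.random.default_rng(worker_id).standard_normal(10)  # stand-in gradient
    noisy_grad = perturb_gradient(local_grad, clip_norm=1.0, epsilon=eps, delta=1e-5)
    # noisy_grad is what the worker would upload in exchange for `payment`.

Intuitively, a worker who accepts a weaker PPL (larger epsilon) leaks more and is compensated with a higher payment, while the server benefits from the less noisy gradient; the paper's contracts formalize this trade-off under budget feasibility, individual rationality, and incentive compatibility.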
