Proceedings Paper

PFA: Privacy-preserving Federated Adaptation for Effective Model Personalization

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3442381.3449847

Keywords

Decentralized AI; Federated Learning; Neural Networks; Personalization; Privacy

Funding

  1. National Key Research and Development Program [2016YFB1000105]
  2. National Natural Science Foundation of China [61772042]

Abstract
Federated learning (FL) has become a prevalent distributed machine learning paradigm with improved privacy. After learning, the resulting federated model should be further personalized to each client. While several methods have been proposed to achieve personalization, they are typically limited to a single local device, which may incur bias or overfitting since the data on a single device is extremely limited. In this paper, we attempt to realize personalization beyond a single client. The motivation is that during the FL process there may exist many clients with similar data distributions, so personalization performance could be significantly boosted if these similar clients cooperate with each other. Inspired by this, this paper introduces a new concept called federated adaptation, which adapts the trained model in a federated manner to achieve better personalization results. The key challenge for federated adaptation is that no raw data may leave a client during adaptation, due to privacy concerns. In this paper, we propose PFA, a framework to accomplish Privacy-preserving Federated Adaptation. PFA leverages the sparsity property of neural networks to generate privacy-preserving representations and uses them to efficiently identify clients with similar data distributions. Based on the grouping results, PFA conducts a group-wise FL process on the federated model to accomplish the adaptation. For evaluation, we manually construct several practical FL datasets from public datasets to simulate both class-imbalance and background-difference conditions. Extensive experiments on these datasets and popular model architectures demonstrate the effectiveness of PFA, which outperforms other state-of-the-art methods by a large margin while ensuring user privacy. We will release our code at: https://github.com/lebyni/PFA.
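The pipeline the abstract outlines (sparsity-derived client representations, similarity-based grouping, then group-wise averaging) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the functions `sparsity_profile`, `group_clients`, and `federated_average`, the greedy grouping rule, and the Euclidean-distance threshold are all assumptions made here for clarity.

```python
import numpy as np

def sparsity_profile(activations):
    # Privacy-preserving representation sketch: the per-channel fraction of
    # ReLU-inactive (non-positive) units, averaged over a client's samples.
    # activations: array of shape (num_samples, num_channels)
    return np.mean(activations <= 0, axis=0)

def group_clients(profiles, threshold=0.1):
    # Hypothetical greedy grouping: a client joins the first group whose
    # founding member's profile lies within `threshold` (Euclidean distance);
    # otherwise it founds a new group.
    groups = []
    for idx, profile in enumerate(profiles):
        for group in groups:
            if np.linalg.norm(profile - profiles[group[0]]) < threshold:
                group.append(idx)
                break
        else:
            groups.append([idx])
    return groups

def federated_average(client_weights):
    # Group-wise FL step sketch: element-wise mean of the group members'
    # model weights, one averaged array per layer (FedAvg-style).
    return [np.mean(layer_set, axis=0) for layer_set in zip(*client_weights)]
```

For example, two clients with nearly identical sparsity profiles would be grouped together and their models averaged, while a client with a very different profile would adapt within its own group.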
