Article

Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization

Journal

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2022.3146448

Keywords

Federated learning; secure aggregation; privacy; quantization; computation integrity


Abstract

Federated learning has recently emerged as a paradigm promising the benefits of harnessing rich data from diverse sources to train high-quality models, with the salient feature that training datasets never leave local devices; only locally computed model updates are shared for aggregation into a global model. While federated learning greatly alleviates the privacy concerns of learning with centralized data, sharing model updates still poses privacy risks. In this paper, we present a system design that offers efficient protection of individual model updates throughout the learning procedure, allowing clients to provide only obscured model updates while a cloud server can still perform the aggregation. Our federated learning system first departs from prior work by supporting lightweight encryption and aggregation, and resilience against dropped-out clients with no impact on their participation in future rounds. Meanwhile, prior work largely overlooks bandwidth-efficiency optimization in the ciphertext domain and security against an actively adversarial cloud server, both of which we fully explore in this paper with effective and efficient mechanisms. Extensive experiments over several benchmark datasets (MNIST, CIFAR-10, and CelebA) show that our system achieves accuracy comparable to the plaintext baseline, with practical performance.
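The abstract's core mechanism (clients upload only obscured updates, yet the server can still compute their sum) is the hallmark of secure aggregation. The paper's own lightweight construction is not reproduced here; the general idea can, however, be sketched with classic pairwise additive masking, where every pair of clients agrees on a random mask vector that one adds and the other subtracts, so all masks cancel in the server's sum. All names, seeds, and parameters below are illustrative, not the paper's.

```python
import random

def pairwise_masks(client_ids, dim, seed_base=0):
    """Derive one shared mask per client pair; paired masks cancel on summation.

    seed_base stands in for a per-round seed each pair would agree on
    (via key exchange in real secure-aggregation protocols).
    """
    masks = {cid: [0.0] * dim for cid in client_ids}
    for i, a in enumerate(client_ids):
        for b in client_ids[i + 1:]:
            # Both endpoints of the pair (a, b) derive the same mask vector.
            rng = random.Random(seed_base * 1_000_003 + a * 1009 + b)
            m = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for k in range(dim):
                masks[a][k] += m[k]  # the lower-id client adds the mask
                masks[b][k] -= m[k]  # the higher-id client subtracts it
    return masks

def obscure(update, mask):
    """Client-side: upload the masked update, never the raw one."""
    return [u + m for u, m in zip(update, mask)]

# Three clients with toy 2-dimensional model updates.
clients = [1, 2, 3]
updates = {1: [0.1, 0.2], 2: [0.3, -0.1], 3: [-0.2, 0.4]}

masks = pairwise_masks(clients, dim=2)
uploads = {c: obscure(updates[c], masks[c]) for c in clients}

# Server-side: summing the masked uploads cancels every pairwise mask,
# recovering the plaintext aggregate without seeing any individual update.
aggregate = [sum(uploads[c][k] for c in clients) for k in range(2)]
print(aggregate)  # ≈ [0.2, 0.5], the plaintext sum
```

Real protocols derive the pairwise seeds via key agreement and layer in secret sharing so that dropped-out clients' masks can still be cancelled; this sketch omits both, which is exactly where the paper's drop-out resilience and lightweight-encryption claims come in.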


Ratings

Overall rating: 4.6 (insufficient ratings)
