Article

Distributed additive encryption and quantization for privacy preserving federated deep learning

Journal

NEUROCOMPUTING
Volume 463, Pages 309-327

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.08.062

Keywords

Federated learning; Deep learning; Homomorphic encryption; Distributed key generation; Quantization


The study introduces an encryption-based protocol for federated learning in which key pairs are generated collaboratively by the clients, a threshold secret-sharing scheme removes the need for a single trusted key holder, and quantization of model parameters avoids the computational burden of encrypting and decrypting the entire model. This approach significantly reduces communication costs and computational complexity without compromising performance or security.
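Threshold key sharing of this kind is commonly built on Shamir's secret sharing. The sketch below is a generic textbook illustration, not the paper's concrete construction: a private key is split into 5 shares over a prime field (the modulus and share counts here are arbitrary assumptions), and any 3 shares suffice to reconstruct it, so decryption still succeeds when some clients are offline while no single party ever holds the whole key.

```python
import random

PRIME = 2**61 - 1  # field modulus (assumption; any prime larger than the secret works)

def share_secret(secret, n_shares, threshold):
    # Shamir (t, n) sharing: random polynomial of degree t-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789  # stand-in for a global private key (hypothetical value)
shares = share_secret(key, n_shares=5, threshold=3)
# Any 3 of the 5 clients can reconstruct, even if 2 are offline:
assert reconstruct(shares[:3]) == key
assert reconstruct([shares[0], shares[2], shares[4]]) == key
```

Fewer than 3 shares reveal nothing about the key, which is what lets the scheme tolerate dropped clients without weakening privacy.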
Homomorphic encryption is a widely used gradient-protection technique in privacy-preserving federated learning. However, existing encrypted federated learning systems need a trusted third party to generate and distribute key pairs to the connected participants, making them ill-suited to federated learning and vulnerable to security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. To mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning, in which the key pairs are collaboratively generated without the help of a trusted third party. By quantizing the model parameters on the clients and performing an approximated aggregation on the server, the proposed method avoids encryption and decryption of the entire model. In addition, a threshold-based secret sharing technique is designed so that no single party holds the global private key for decryption, while aggregated ciphertexts can still be decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces communication costs and computational complexity compared to existing encrypted federated learning systems, without compromising performance or security. (c) 2021 Elsevier B.V. All rights reserved.
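The core aggregation idea in the abstract can be illustrated with an additively homomorphic scheme such as Paillier, where multiplying ciphertexts adds the underlying plaintexts. The toy sketch below (small fixed primes, a fixed-point quantizer, and an offset for signed values are all illustrative assumptions; the paper's actual scheme, key sizes, and quantization are not reproduced here) quantizes each client's update to an integer, encrypts it, and lets the server aggregate without seeing any individual update:

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def paillier_keygen(p=293, q=433):
    # Toy Paillier key generation with tiny fixed primes (illustration only,
    # nowhere near secure key sizes).
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                    # standard choice that simplifies decryption
    mu = pow(lam, -1, n)         # valid modular inverse when g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n   # L-function followed by scaling

SCALE = 1000    # fixed-point quantization factor (assumption)
OFFSET = 100    # shift so signed updates map to non-negative plaintexts (toy handling)

pk, sk = paillier_keygen()
client_updates = [0.012, -0.034, 0.051]   # one parameter's update from 3 clients
cts = [encrypt(pk, round(u * SCALE) + OFFSET) for u in client_updates]

# The server multiplies ciphertexts, which adds the plaintexts homomorphically.
agg = 1
for c in cts:
    agg = (agg * c) % (pk[0] ** 2)

total = decrypt(pk, sk, agg) - OFFSET * len(cts)
print(total / SCALE)   # 0.029, i.e. 0.012 - 0.034 + 0.051
```

Only the aggregate sum is ever decrypted; combined with a threshold-shared private key, no individual client update or complete key is exposed to any single party.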

