Article

Privacy-Preserving Federated Deep Learning With Irregular Users

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2020.3005909

Keywords

Training; Servers; Deep learning; Privacy; Cryptography; Neural networks; Privacy protection; federated learning; cloud computing

Funding

  1. National Key R&D Program of China [2017YFB0802300, 2017YFB0802000]
  2. National Natural Science Foundation of China [61972454, 61802051, 61772121, 6197094, 61728102]
  3. Sichuan Science and Technology Program [2020JDTD0007, 2020YFG0298]
  4. Peng Cheng Laboratory Project of Guangdong Province [PCL2018KP004]
  5. Guangxi Key Laboratory of Cryptography and Information Security [GCIS201804]


Abstract
Federated deep learning has been widely used in various fields. To protect data privacy, many privacy-preserving approaches have been designed and implemented in various scenarios. However, existing works rarely consider the fundamental issue that the data shared by certain users (called irregular users) may be of low quality. Clearly, in a federated training process, data shared by many irregular users may impair the training accuracy or, worse, render the final model useless. In this article, we propose PPFDL, a Privacy-Preserving Federated Deep Learning framework with irregular users. Specifically, we design a novel solution to reduce the negative impact of irregular users on the training accuracy, which guarantees that the training results are computed mainly from the contributions of high-quality data. Meanwhile, we exploit Yao's garbled circuits and additively homomorphic cryptosystems to ensure the confidentiality of all user-related information. Moreover, PPFDL is robust to users dropping out at any point during execution: each user can go offline during any subprocess of training, as long as the remaining online users can still complete the training task. Extensive experiments demonstrate the superior performance of PPFDL in terms of training accuracy, computation, and communication overheads.
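The abstract's mention of additively homomorphic cryptosystems refers to the property that ciphertexts can be combined so that decryption yields the sum of the plaintexts, letting a server aggregate user updates without seeing them. The following is a toy Paillier sketch illustrating only that property — it is not the paper's PPFDL protocol, and the tiny primes are for demonstration only (real deployments use moduli of 2048 bits or more):

```python
# Toy Paillier cryptosystem demonstrating additive homomorphism:
# Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2 (mod n).
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 293, 433          # toy primes; insecure, illustration only
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)  # Carmichael function lambda(n)
g = n + 1                # standard simple choice of generator
mu = pow(lam, -1, n)     # modular inverse used in decryption

def enc(m):
    """Encrypt m in [0, n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decrypt via L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Two users' plaintext contributions, aggregated under encryption:
m1, m2 = 1234, 5678
c_sum = (enc(m1) * enc(m2)) % n2   # homomorphic addition of ciphertexts
assert dec(c_sum) == (m1 + m2) % n  # server learns only the sum
```

In a federated setting, this is what allows a server to sum (possibly weighted) encrypted model updates while only the holder of the decryption key can recover the aggregate.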

