4.8 Article

Privacy Threat and Defense for Federated Learning With Non-i.i.d. Data in AIoT

Journal

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
Volume 18, Issue 2, Pages 1310-1321

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TII.2021.3073925

Keywords

Data privacy; Privacy; Data models; Analytical models; Upper bound; Servers; Training; Artificial intelligence of things (AIoT); convergence analysis; differential privacy (DP); federated learning (FL); privacy protection

Funding

  1. U.S. National Science Foundation [1741277, 1829674, 1704287, 1912753, 2011845]
  2. Microsoft Investigator Fellowship [TII-21-0449]
  3. Div Of Electrical, Commun & Cyber Sys
  4. Directorate For Engineering [2011845] Funding Source: National Science Foundation

Abstract

This article explores innovative approaches to privacy protection in federated learning with non-i.i.d. data. A novel algorithm, 2DP-FL, is designed to achieve differential privacy by adding noise while training local models and when distributing the global model. Theoretical analysis and real-data experiments validate the advantages of 2DP-FL in privacy protection, learning convergence, and model accuracy.
Given the need to process huge amounts of data, provide high-quality service, and protect user privacy in the artificial intelligence of things (AIoT), federated learning (FL) has been treated as a promising technique to facilitate distributed learning with privacy protection. Although the importance of developing privacy-preserving FL has attracted a lot of attention, existing research focuses only on FL with independent and identically distributed (i.i.d.) data and lacks study of the non-i.i.d. scenario. Worse, the assumption of i.i.d. data is impractical, reducing the performance of privacy protection in real applications. In this article, we carry out an innovative exploration of privacy protection in FL with non-i.i.d. data. First, a thorough analysis of privacy leakage in FL is conducted, proving the performance upper bound of a privacy inference attack. Based on this analysis, a novel algorithm, 2DP-FL, is designed to achieve differential privacy by adding noise while training local models and when distributing the global model. Notably, the 2DP-FL algorithm offers flexible noise addition to meet various needs and has a convergence upper bound. Finally, real-data experiments validate the results of our theoretical analysis and the advantages of 2DP-FL in privacy protection, learning convergence, and model accuracy.
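
The abstract describes 2DP-FL only at a high level: differential-privacy noise is injected at two points, once on the client side while training local models and once on the server side when the global model is distributed. The sketch below illustrates that two-level structure with a Gaussian mechanism on a toy least-squares task. The clipping bound, noise scales, learning rate, and helper names (clip_update, local_train_with_dp, aggregate_and_distribute) are illustrative assumptions for this sketch, not the paper's actual mechanism, sensitivity analysis, or privacy accounting.

```python
# Minimal sketch of a 2DP-FL-style two-level Gaussian noise scheme.
# Illustrative assumptions: the clipping bound, noise scales, learning rate,
# and toy least-squares objective stand in for the paper's actual mechanism.
import numpy as np

rng = np.random.default_rng(0)

def clip_update(update, clip_norm):
    # Clip the local update's L2 norm to bound its sensitivity.
    norm = np.linalg.norm(update)
    return update if norm <= clip_norm else update * (clip_norm / norm)

def local_train_with_dp(global_model, data, clip_norm=1.0, sigma_local=0.1):
    # Noise point 1: perturb the clipped local update during client training.
    X, y = data
    grad = X.T @ (X @ global_model - y) / len(y)   # least-squares gradient
    update = clip_update(-0.1 * grad, clip_norm)   # one clipped SGD step
    noise = rng.normal(0.0, sigma_local * clip_norm, size=update.shape)
    return global_model + update + noise

def aggregate_and_distribute(local_models, sigma_global=0.05):
    # Noise point 2: perturb the averaged global model before broadcast.
    avg = np.mean(local_models, axis=0)
    return avg + rng.normal(0.0, sigma_global, size=avg.shape)

# Toy non-i.i.d. setup: each client draws features from a shifted distribution.
d, n = 3, 50
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for k in range(5):
    X = rng.normal(loc=k, scale=1.0, size=(n, d))  # per-client covariate shift
    clients.append((X, X @ true_w + rng.normal(0.0, 0.1, size=n)))

model = np.zeros(d)
for _ in range(30):
    local_models = [local_train_with_dp(model, data) for data in clients]
    model = aggregate_and_distribute(local_models)

print("learned:", np.round(model, 2), "target:", true_w)
```

The two noise scales mirror the flexibility the abstract claims: tightening sigma_local strengthens client-side privacy at the cost of slower convergence, while sigma_global protects the broadcast global model itself.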

