Article

A Privacy-Preserving Federated Learning for Multiparty Data Sharing in Social IoTs

Journal

IEEE Transactions on Network Science and Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TNSE.2021.3074185

Keywords

Privacy; Encryption; Differential privacy; Training; Data privacy; Servers; Deep learning; Multiparty data sharing; federated learning; privacy-preserving; functional encryption; local differential privacy

Funding

  1. National Key R&D Program of China [2018YFB2100400]
  2. National Natural Science Foundation of China [62002077, 61872100]
  3. China Postdoctoral Science Foundation [2020M682657]
  4. Guangdong Basic and Applied Basic Research Foundation [2020A1515110385]
  5. Zhejiang Lab [2020NF0AB01]

Abstract

The paper introduces a new hybrid privacy-preserving method for addressing data leakage threats in existing federated learning training processes. It utilizes advanced functional encryption algorithms and local Bayesian differential privacy to enhance data protection, while also implementing Sparse Differential Gradient to improve transmission and storage efficiency.
As 5G and mobile computing grow rapidly, deep learning services in Social Computing and the Social Internet of Things (IoT) have enriched our lives over the past few years. Mobile and IoT devices with computing capabilities can join social computing anytime and anywhere. Federated learning allows decentralized training devices to be used fully without access to raw data, making it easier to break down data silos and deliver more precise services. However, various attacks show that the current federated learning training process is still threatened by disclosures at both the data and content levels. In this paper, we propose a new hybrid privacy-preserving method for federated learning to meet the challenges above. First, we employ an advanced functional encryption algorithm that not only protects the characteristics of the data uploaded by each client, but also protects the weight of each participant in the weighted summation procedure. By designing a local Bayesian differential privacy noise mechanism, we effectively improve adaptability to differently distributed data sets. In addition, we use the Sparse Differential Gradient to improve transmission and storage efficiency in federated learning training. Experiments show that when the sparse differential gradient is used to improve transmission efficiency, model accuracy drops by at most 3%.
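The abstract gives no implementation details. As a rough, hypothetical sketch of the kind of client-side step it describes (local differential privacy noise plus a sparse differential gradient, i.e., uploading only the largest changes since the last transmission), the Python/NumPy example below may help. Every function name and parameter here is illustrative and not taken from the paper; the functional encryption and Bayesian noise-calibration steps are omitted.

```python
import numpy as np

def sparse_differential_gradient(grad, last_sent, k_ratio=0.01):
    """Keep only the top-k largest-magnitude entries of the change since the
    previously transmitted gradient; all other entries are sent as zero.
    Illustrative only -- not the paper's exact algorithm."""
    diff = grad - last_sent
    k = max(1, int(k_ratio * diff.size))
    # indices of the k largest-magnitude differences
    idx = np.argpartition(np.abs(diff).ravel(), -k)[-k:]
    sparse = np.zeros(diff.size)
    sparse[idx] = diff.ravel()[idx]
    return sparse.reshape(diff.shape)

def client_update(grad, last_sent, clip=1.0, sigma=0.1):
    """One client step: clip the gradient, add Gaussian noise for local
    differential privacy, then sparsify the difference before upload."""
    scale = min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    noisy = grad * scale + np.random.normal(0.0, sigma * clip, size=grad.shape)
    upload = sparse_differential_gradient(noisy, last_sent)
    return upload, last_sent + upload  # send `upload`; remember what was sent

# Toy usage: a 1000-parameter "gradient" from one client.
grad = np.random.randn(1000)
last_sent = np.zeros_like(grad)
upload, last_sent = client_update(grad, last_sent)
print("non-zero entries uploaded:", np.count_nonzero(upload))
```

Only the sparse, noised update leaves the device, which is what allows the reported trade-off between transmission cost and the small accuracy loss mentioned in the abstract.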
