Article

A Privacy-Preserving Federated Learning for Multiparty Data Sharing in Social IoTs

Journal

IEEE Transactions on Network Science and Engineering

Publisher

IEEE Computer Society
DOI: 10.1109/TNSE.2021.3074185

Keywords

Privacy; Encryption; Differential privacy; Training; Data privacy; Servers; Deep learning; Multiparty data sharing; federated learning; privacy-preserving; functional encryption; local differential privacy

Funding

  1. National Key R&D Program of China [2018YFB2100400]
  2. National Natural Science Foundation of China [62002077, 61872100]
  3. China Postdoctoral Science Foundation [2020M682657]
  4. Guangdong Basic and Applied Basic Research Foundation [2020A1515110385]
  5. Zhejiang Lab [2020NF0AB01]


The paper introduces a new hybrid privacy-preserving method for addressing data leakage threats in existing federated learning training processes. It utilizes advanced functional encryption algorithms and local Bayesian differential privacy to enhance data protection, while also implementing Sparse Differential Gradient to improve transmission and storage efficiency.
As 5G and mobile computing grow rapidly, deep learning services in Social Computing and the Social Internet of Things (IoT) have enriched our lives over the past few years. Mobile and IoT devices with computing capabilities can join social computing anytime and anywhere. Federated learning makes full use of decentralized training devices without requiring access to raw data, making it easier to break down data silos and deliver more precise services. However, various attacks show that the current federated learning training process is still threatened by disclosures at both the data and content levels. In this paper, we propose a new hybrid privacy-preserving method for federated learning to meet the challenges above. First, we employ an advanced functional encryption algorithm that protects not only the characteristics of the data uploaded by each client but also the weight of each participant in the weighted-summation procedure. By designing local Bayesian differential privacy, the noise mechanism can effectively adapt to differently distributed data sets. In addition, we use the Sparse Differential Gradient technique to improve transmission and storage efficiency in federated learning training. Experiments show that when the sparse differential gradient is used to improve transmission efficiency, model accuracy drops by at most 3%.
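The client-side pipeline the abstract describes — perturbing local gradients with differentially private noise, then transmitting only a sparse gradient difference — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and parameters are assumptions, the paper's Bayesian differential privacy mechanism is replaced here by a plain clipped-Gaussian mechanism, and the functional-encryption layer is omitted entirely.

```python
import numpy as np

def local_dp_noise(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """Clipped-Gaussian local DP sketch (stand-in for the paper's
    Bayesian mechanism): clip the gradient to bound sensitivity,
    then add Gaussian noise scaled to the clipping norm."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def sparse_differential_gradient(grad, prev_grad, k_ratio=0.1):
    """Keep only the top-k (by magnitude) entries of the difference
    from the previous round's gradient; everything else is zeroed,
    so only a small index/value set needs to be transmitted."""
    diff = (grad - prev_grad).ravel()
    k = max(1, int(k_ratio * diff.size))
    top_idx = np.argsort(np.abs(diff))[-k:]
    sparse = np.zeros_like(diff)
    sparse[top_idx] = diff[top_idx]
    return sparse.reshape(grad.shape)
```

A client would apply `local_dp_noise` to its local update, compute the sparse difference against the last transmitted gradient, and upload only the nonzero entries; under the abstract's reported results, this kind of sparsification costs at most about 3% accuracy.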
