Article

Boosting Accuracy of Differentially Private Federated Learning in Industrial IoT With Sparse Responses

Journal

IEEE Transactions on Industrial Informatics
Volume 19, Issue 1, Pages 910-920

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TII.2022.3161517

Keywords

Industrial Internet of Things; Privacy; 5G mobile communication; Collaborative work; Training; Data models; Informatics; Differential privacy (DP); federated learning (FL); Industrial Internet of Things (IIoT); sparse vector technique (SVT)


This study proposes a novel differentially private federated learning algorithm with sparse responses (DPFL-SR), which reduces the privacy budget consumption in each global iteration by applying the sparse vector technique. Experimental results demonstrate that DPFL-SR achieves higher model accuracy in IIoT systems without lowering the privacy protection level.
Empowered by 5G, the deployment of differentially private federated learning (DPFL) in the Industrial Internet of Things (IIoT) has been extensively explored by existing works. Through federated learning, decentralized IIoT devices can collaboratively train a machine learning model by merely exchanging model gradients with a parameter server (PS) for multiple global iterations. IIoT devices (also called clients) incorporate differentially private (DP) mechanisms to prevent privacy leakage caused by the exposure of gradients, because the original gradients are distorted by DP noise before being uploaded. Yet, learning with distorted gradients can seriously deteriorate model accuracy, making DPFL unusable in practice. To address this problem, we propose a novel DPFL with sparse responses (DPFL-SR) algorithm, which applies the sparse vector technique to reduce the privacy budget consumption in each global iteration. Specifically, DPFL-SR evaluates the value of each gradient, and only distorts and uploads significant gradients to the PS, because significant gradients are more essential for model training. Since insignificant gradients are not disclosed, the reserved privacy budget can be used to return significant gradients for more iterations, so that DPFL-SR achieves higher model accuracy without lowering the privacy protection level. Extensive experiments on the MNIST and Fashion-MNIST datasets demonstrate the practicability and superiority of DPFL-SR in IIoT systems.
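As a rough illustration of the idea described in the abstract, the sketch below shows one way a client could combine a sparse-vector-technique (SVT) style selection step with per-coordinate perturbation so that only significant gradients are disclosed. The function name, its parameters (threshold, eps_select, eps_value, sensitivity), and the noise calibration are assumptions made for illustration; they do not reproduce the authors' DPFL-SR algorithm or its privacy accounting.

```python
import numpy as np


def sparse_private_response(gradient, threshold, eps_select, eps_value,
                            sensitivity=1.0, rng=None):
    """Client-side sketch: an SVT-style noisy threshold test picks the
    "significant" gradient coordinates, only those are perturbed, and the
    resulting sparse update is what the client would upload to the
    parameter server (PS). Names, noise scales, and the significance test
    are illustrative assumptions, not the paper's exact mechanism.
    """
    gradient = np.asarray(gradient, dtype=float)
    rng = np.random.default_rng() if rng is None else rng

    # Sparse vector technique: compare noisy scores against a noisy threshold.
    noisy_threshold = threshold + rng.laplace(0.0, 2.0 * sensitivity / eps_select)
    scores = np.abs(gradient) + rng.laplace(
        0.0, 4.0 * sensitivity / eps_select, size=gradient.shape)
    significant = scores >= noisy_threshold

    # Spend the value-perturbation budget only on the disclosed coordinates;
    # withheld coordinates consume no budget and stay zero in the response.
    response = np.zeros_like(gradient)
    response[significant] = gradient[significant] + rng.laplace(
        0.0, sensitivity / eps_value, size=int(significant.sum()))
    return response, significant
```

Because coordinates that fail the noisy test are never disclosed, their share of the privacy budget is preserved, which is what allows more global iterations at the same overall privacy level.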
