Article

Model compression and privacy preserving framework for federated learning

Publisher

ELSEVIER
DOI: 10.1016/j.future.2022.10.026

Keywords

Federated learning; Privacy preserving; Model compression; Convolutional neural networks


This paper proposes a model compression based federated learning framework that effectively reduces the size of the model and protects privacy while maintaining performance. The proposed perturbed model compression method and reconstruction algorithm contribute to achieving these objectives.
Federated learning (FL), as a collaborative learning paradigm, has attracted extensive attention due to its privacy-preserving characteristic: clients collaboratively train a shared neural network model on their local datasets and upload only their model parameters, rather than the original data, over the wireless network throughout the training process. Because FL reduces transmission significantly, it can further meet the efficiency and security requirements of next-generation wireless systems. Although FL reduces the amount of information that needs to be transmitted, updating the model parameters still suffers from privacy leakage and communication bottlenecks, especially in wireless networks. To address these privacy and communication problems, this paper proposes a model compression based FL framework. Firstly, the designed model compression framework provides effective support for efficient and secure model parameter updating in FL while preserving the personalization of all clients. Then, the proposed perturbed model compression method further reduces the size of the model and protects its privacy without sacrificing much accuracy. Besides, it also enables decryption and decompression to be carried out simultaneously, via a reconstruction algorithm, on the encrypted and compressed model parameters produced by the perturbed model compression method. Finally, the illustrative results demonstrate that the proposed model compression based FL framework significantly reduces the number of uploaded model parameters while providing strong privacy preservation. For example, at a compression ratio of 0.0953 (i.e., only 9.53% of the parameters are uploaded), the accuracy on MNIST reaches 97%, compared with 98% without compression. (c) 2022 Elsevier B.V. All rights reserved.
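The abstract does not specify the exact compression or perturbation scheme. As a minimal sketch of the general idea, assuming top-k magnitude sparsification as the compression step and Gaussian noise as the privacy perturbation (both hypothetical stand-ins for the paper's method), a client-side compress step and a server-side reconstruction step might look like:

```python
import random

def perturbed_compress(params, ratio=0.0953, noise_scale=0.01, rng=None):
    # Keep only the top-`ratio` fraction of parameters by magnitude
    # (top-k sparsification, a common compression choice; the paper's
    # actual scheme is not given in the abstract) and add Gaussian
    # perturbation to the retained values for privacy.
    rng = rng or random.Random(0)
    k = max(1, int(ratio * len(params)))
    idx = sorted(range(len(params)), key=lambda i: abs(params[i]))[-k:]
    values = [params[i] + rng.gauss(0.0, noise_scale) for i in idx]
    return idx, values, len(params)

def reconstruct(idx, values, size):
    # Rebuild a dense parameter vector on the server side;
    # entries that were not uploaded stay zero.
    dense = [0.0] * size
    for i, v in zip(idx, values):
        dense[i] = v
    return dense

# Example: a 1000-parameter layer compressed at ratio 0.0953,
# so roughly 9.53% of the parameters are uploaded.
rng = random.Random(42)
w = [rng.gauss(0.0, 1.0) for _ in range(1000)]
idx, vals, n = perturbed_compress(w, rng=rng)
w_hat = reconstruct(idx, vals, n)
print(len(vals))  # 95 values uploaded instead of 1000
```

This illustrates only the compression/reconstruction interface; the paper's method additionally interleaves encryption with compression so that decryption and decompression can be performed together on the uploaded parameters.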

