Article

PrivMaskFL: A private masking approach for heterogeneous federated learning in IoT

Journal

COMPUTER COMMUNICATIONS
Volume 214, Pages 100-112

Publisher

ELSEVIER
DOI: 10.1016/j.comcom.2023.11.022

Keywords

Federated learning; Parameter divergence; Participant masking; Differential privacy; Greedy algorithm


With the rapid growth of data in the IoT era, the challenge of data privacy protection arises. This article proposes a federated learning approach that uses collaborative training to obtain a global model without direct exposure of local datasets. By combining dynamic participant masking with adaptive differential privacy, the approach reduces communication overhead and improves the convergence performance of the model.
With the arrival of the Internet of Things (IoT) era and the rapid development of new technologies such as artificial intelligence and big data, data is growing exponentially. These data tend to form data silos scattered across many locations. Jointly exploiting such decentralized data will accelerate progress and development, but it also raises the challenge of data privacy protection. Federated learning (FL), a new branch of distributed machine learning, obtains a global model through collaborative training without direct exposure of local datasets. Studies have shown that federated learning typically involves a large number of participants, which can lead to a significant increase in communication overhead and, in turn, to higher latency and bandwidth consumption. We suggest masking a subset of divergent participants and allowing the remaining participants to proceed with the next communication round of updates. Our aim is to reduce communication overhead and improve the convergence performance of the global model while protecting the privacy of heterogeneous data. We design a private masking approach, PrivMaskFL, to address two problems. First, we propose a dynamic participant aggregation masking approach that adopts a greedy strategy to select the relatively important participants and mask the unimportant ones. Second, we design an adaptive differential privacy approach that stratifies the privacy budget according to participant characteristics, allocates the budget at a fine-grained stratified level, and adds Gaussian noise accordingly. Specifically, in each communication round, each participant applies local differential privacy noise to its model before uplink parameter transmission; the server aggregates the updates to obtain the global model and, using a greedy approximation, finds a candidate participant subset with smaller parameter divergence for the (t+1)-th communication round's downlink parameter transmission. Subsequently, the privacy budget sequence is divided and granted to the participants of each stratified level, and the adaptive differentially private Gaussian noise addition is completed, achieving privacy protection without compromising the model's usability. In experiments, our approach reduces communication overhead and improves convergence performance. Furthermore, it achieves higher accuracy and more robust variance on both the FMNIST and FEMNIST datasets.
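The abstract outlines a per-round structure: local Gaussian noising of uplink parameters, server-side aggregation, greedy selection of a low-divergence candidate subset for the next round, and stratified privacy-budget allocation. The following is a minimal sketch of that round structure, assuming a Euclidean parameter-divergence measure, standard Gaussian-mechanism noise calibration, and a simple linear budget schedule; all function names and numeric choices are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch of a PrivMaskFL-style communication round (assumed details).
import numpy as np

def gaussian_noise(params, clip_norm, epsilon, delta):
    """Clip a parameter vector and add Gaussian noise calibrated to (epsilon, delta)."""
    norm = np.linalg.norm(params)
    clipped = params * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian mechanism: sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
    return clipped + np.random.normal(0.0, sigma, size=params.shape)

def parameter_divergence(local_params, global_params):
    """Distance between a participant's (noised) update and the global model."""
    return np.linalg.norm(local_params - global_params)

def greedy_mask(updates, global_params, keep_ratio=0.5):
    """Greedily keep the participants whose updates diverge least from the global
    model; the rest are masked for the next round (illustrative selection rule)."""
    divergences = {cid: parameter_divergence(p, global_params)
                   for cid, p in updates.items()}
    k = max(1, int(len(updates) * keep_ratio))
    selected, remaining = [], dict(divergences)
    for _ in range(k):
        cid = min(remaining, key=remaining.get)  # smallest remaining divergence
        selected.append(cid)
        remaining.pop(cid)
    return selected

def stratified_budgets(divergences, base_epsilon=1.0, n_levels=3):
    """Assumed fine-grained allocation: stratify participants by divergence and
    grant a per-level epsilon (larger budget means less noise for stable clients)."""
    order = sorted(divergences, key=divergences.get)
    budgets = {}
    for rank, cid in enumerate(order):
        level = rank * n_levels // len(order)  # 0 = most stable stratum
        budgets[cid] = base_epsilon * (1.0 + 0.5 * (n_levels - 1 - level))
    return budgets

def federated_round(local_params, global_params, clip_norm=1.0, delta=1e-5):
    """One round: local DP noising (uplink), FedAvg-style aggregation, then greedy
    masking of the candidate participant set for round t+1 (downlink)."""
    divergences = {cid: parameter_divergence(p, global_params)
                   for cid, p in local_params.items()}
    budgets = stratified_budgets(divergences)
    noised = {cid: gaussian_noise(p, clip_norm, budgets[cid], delta)
              for cid, p in local_params.items()}
    new_global = np.mean(list(noised.values()), axis=0)  # unweighted average for brevity
    active_next_round = greedy_mask(noised, new_global)
    return new_global, active_next_round
```

In the paper itself, the divergence measure, budget stratification, and aggregation weights are part of the proposed method; the Euclidean distance, linear budget scaling, and unweighted averaging used above are placeholders chosen only to make the round structure concrete.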
