Article

Accuracy Degrading: Toward Participation-Fair Federated Learning

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 12, Pages 10291-10306

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/JIOT.2023.3238038

Keywords

Federated learning; Training; Servers; Differential privacy; Data models; Privacy; Adaptation models; Data privacy; Deep learning; Fairness


Centralized learning increasingly faces data-silo and security constraints that make it difficult to carry out, and federated learning, with its distributed architecture, has changed this situation. By keeping training local to each participant, federated learning serves the model-training needs of multiple data sources while better protecting data privacy. In real-world application scenarios, however, federated learning must achieve fairness in addition to privacy protection. In practice, participants with specific motives may join the training process only briefly, obtaining the current global model while contributing little to the federation as a whole; this is unfair to participants who joined earlier. We propose the FedACC framework, a server-initiated global model accuracy control method, to address this issue. FedACC measures the accumulated contributions of newly joined participants and provides each of them with a model whose accuracy matches those contributions, while still guaranteeing the validity of the gradients participants compute on the accuracy-decayed model. Under FedACC, users do not have access to the full version of the current global model early in their participation; they must make a certain amount of contribution before seeing the full-accuracy model. We further introduce a differential privacy mechanism to protect clients' privacy. Experiments demonstrate that FedACC obtains roughly a 10%-20% accuracy gain over state-of-the-art methods while balancing the fairness, performance, and security of federated learning.
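The abstract outlines two mechanisms: server-side accuracy decay tied to a participant's accumulated contribution, and differential-privacy noise on client updates. The paper's actual algorithm is not reproduced on this page; the sketch below is a minimal illustration under stated assumptions. The decay rule (Gaussian noise on the weights, scaled by a contribution deficit), the threshold, and all function names and parameters (accuracy_decayed_model, dp_sanitize_update, sigma_max, clip_norm, noise_multiplier) are hypothetical, not FedACC's published method.

import numpy as np

def accuracy_decayed_model(global_weights, contribution, threshold=1.0,
                           sigma_max=0.5, rng=None):
    # Hypothetical server-side decay: the further a client's accumulated
    # contribution sits below `threshold`, the more Gaussian noise is added
    # to the weights it receives, so new joiners see a lower-accuracy model.
    rng = rng or np.random.default_rng()
    deficit = max(0.0, 1.0 - contribution / threshold)  # 0 once fully vested
    sigma = sigma_max * deficit                         # per-client noise scale
    return [w + rng.normal(0.0, sigma, size=w.shape) for w in global_weights]

def dp_sanitize_update(gradients, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Standard Gaussian mechanism (clip, then noise) applied to a client's
    # update before upload; this is generic DP-SGD-style sanitization.
    rng = rng or np.random.default_rng()
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in gradients))
    scale = min(1.0, clip_norm / (total_norm + 1e-12))  # L2 clipping factor
    sigma = noise_multiplier * clip_norm
    return [g * scale + rng.normal(0.0, sigma, size=g.shape) for g in gradients]

# Example round: a client with 30% of the required contribution receives a
# noised model, "trains" locally (stubbed), and returns a sanitized update.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = [rng.standard_normal((4, 4)), rng.standard_normal(4)]
    decayed = accuracy_decayed_model(model, contribution=0.3, rng=rng)
    fake_grads = [0.01 * w for w in decayed]            # stand-in for local SGD
    update = dp_sanitize_update(fake_grads, rng=rng)

A complete system would also need the server to estimate each client's contribution (e.g., from update quality or data volume) and to lift the decay once the contribution threshold is met; that bookkeeping is omitted here.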

