4.8 Article

Accuracy Degrading: Toward Participation-Fair Federated Learning

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 12, pp. 10291-10306

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2023.3238038

Keywords

Federated learning; training; servers; differential privacy; data models; privacy; adaptation models; data privacy; deep learning; fairness


Abstract
Centralized learning now faces data mapping and security constraints that make it difficult to deploy. Federated learning, with its distributed architecture, has changed this situation: by keeping the training process on participants' local data, it meets the model-training needs of multiple data sources while better protecting data privacy. In real-world scenarios, however, federated learning must achieve fairness in addition to privacy protection. In practice, some participants with specific motives may join the training process only briefly to obtain the current global model while contributing little to the federation as a whole, which is unfair to the participants who previously trained it. We propose the FedACC framework, with a server-initiated global model accuracy control method, to address this issue. FedACC measures the accumulated contributions of newly joined participants and provides each participant with a model whose accuracy matches those contributions, while still guaranteeing the validity of participant gradients computed on the accuracy-decayed model. Under the FedACC framework, users do not have access to the full version of the current global model early in their participation; they must accumulate a certain amount of contribution before seeing the full-accuracy model. We introduce an additional differential privacy mechanism to further protect clients' privacy. Experiments demonstrate that FedACC obtains about a 10%-20% accuracy gain over state-of-the-art methods while balancing the fairness, performance, and security of federated learning.
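The abstract describes the core idea of server-side accuracy control: participants with low accumulated contribution receive a deliberately degraded copy of the global model. The paper's actual mechanism is not detailed in this record, but one minimal sketch of the concept is to scale Gaussian noise on the released weights by the participant's contribution deficit. The function name, the contribution score in [0, 1], and the noise schedule below are all illustrative assumptions, not FedACC's published algorithm.

```python
import numpy as np

def degrade_model(weights, contribution, max_noise_std=0.5):
    """Return an accuracy-degraded copy of the global model weights.

    Illustrative sketch only: the lower a participant's accumulated
    contribution (a hypothetical score in [0, 1]), the more Gaussian
    noise the server adds before releasing the model, so newcomers
    never see the full-accuracy global model.
    """
    deficit = 1.0 - float(np.clip(contribution, 0.0, 1.0))  # 0 = full contributor
    noise_std = max_noise_std * deficit
    return [w + np.random.normal(0.0, noise_std, size=w.shape) for w in weights]

# A new participant (contribution 0.1) receives a noisier model
# than a long-standing one (contribution 0.9); a full contributor
# (contribution 1.0) receives the model unchanged.
global_model = [np.ones((4, 4)), np.zeros(4)]
noisy_for_newcomer = degrade_model(global_model, contribution=0.1)
clean_for_veteran = degrade_model(global_model, contribution=1.0)
```

A differential privacy mechanism on client gradient updates, as the abstract mentions, would be applied separately on the client side; it is not shown here.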

