Article

TDFL: Truth Discovery Based Byzantine Robust Federated Learning

Journal

IEEE Transactions on Parallel and Distributed Systems

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2022.3205714

Keywords

Federated learning; truth discovery; poisoning attack

Funding

  1. National Natural Science Foundation of China [61972037, 61872041, U1804263]
  2. China Postdoctoral Science Foundation [2021M700435, 2021TQ0042]
  3. National Cryptography Development Fund [MMJJ20180412]

Abstract

This study proposes a novel federated learning method called TDFL, which uses truth discovery to defend against multiple poisoning attacks. The method is designed to address different attack scenarios and achieves strong robustness without relying on additional datasets.
Federated learning (FL) enables data owners to train a joint global model without sharing their private data. However, FL is vulnerable to Byzantine attackers, who can launch poisoning attacks to destroy model training. Existing defense strategies rely on additional datasets to train trusted server models, or on trusted execution environments, to mitigate attacks. Moreover, these strategies can tolerate only a small number of malicious users or resist only a few types of poisoning attacks. To address these challenges, we design TDFL, Truth Discovery based Federated Learning, a novel method that can defend against multiple poisoning attacks without additional datasets even when Byzantine users make up 50% or more of all participants. Specifically, TDFL considers scenarios with different malicious proportions. For the honest-majority setting (Byzantine < 50%), we design a robust truth discovery aggregation scheme that removes malicious model updates by assigning weights according to each user's contribution; for the Byzantine-majority setting (Byzantine >= 50%), we use a maximum clique-based filter to guarantee global model quality. To the best of our knowledge, this is the first study that uses truth discovery to defend against poisoning attacks, and the first scheme that achieves strong robustness under multiple kinds of attacks launched by a high proportion of attackers without root datasets. Extensive comparative experiments are conducted against five state-of-the-art aggregation rules under five types of classical poisoning attacks on different datasets. The results demonstrate that TDFL is practical and achieves reasonable Byzantine robustness.
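The honest-majority path hinges on a truth discovery loop that alternates between estimating the "true" update and re-weighting clients by their distance to it. The sketch below is a minimal, generic CRH-style version of that idea in Python/NumPy; the function name, iteration count, and exact weight formula are illustrative assumptions, not the authors' actual TDFL scheme.

```python
import numpy as np

def truth_discovery_aggregate(updates, n_iters=10, eps=1e-12):
    """Aggregate flattened client updates via iterative truth discovery.

    updates: (n_clients, n_params) array, one row per client update.
    Alternates two steps: (1) each client's weight shrinks as its
    squared distance to the current truth grows (CRH-style log
    weights); (2) the "truth" is re-estimated as the weighted average
    of all updates. Returns the estimated true update and the weights.
    """
    n_clients = updates.shape[0]
    truth = updates.mean(axis=0)  # start from the plain average
    weights = np.full(n_clients, 1.0 / n_clients)
    for _ in range(n_iters):
        # Weight update: clients close to the current truth gain weight.
        dists = np.sum((updates - truth) ** 2, axis=1) + eps
        weights = np.log(dists.sum() / dists)
        weights = np.clip(weights, 0.0, None)
        weights /= weights.sum() + eps
        # Truth update: weighted average under the new weights.
        truth = weights @ updates
    return truth, weights
```

In a defense of the kind the abstract describes, updates whose weight collapses toward zero would effectively be removed from aggregation, while honest clients' contributions stay proportional to their estimated reliability.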
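For the Byzantine-majority setting, the abstract names a maximum clique-based filter: treat clients as graph nodes, connect pairs whose updates agree, and keep only the largest mutually consistent group. The sketch below assumes a cosine-similarity edge rule and a fixed threshold, and uses NetworkX's maximal clique enumeration; both the edge criterion and the threshold are assumptions here, not the paper's exact construction.

```python
import numpy as np
import networkx as nx

def max_clique_filter(updates, sim_threshold=0.5):
    """Keep only the largest mutually consistent set of client updates.

    Builds a graph with one node per client and an edge between two
    clients whenever the cosine similarity of their updates exceeds
    sim_threshold (an assumed criterion). The maximum clique is taken
    as the presumed-benign set and its updates are averaged.
    Returns the filtered aggregate and the selected client indices.
    """
    n_clients = updates.shape[0]
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T  # pairwise cosine similarities

    graph = nx.Graph()
    graph.add_nodes_from(range(n_clients))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            if sims[i, j] >= sim_threshold:
                graph.add_edge(i, j)

    # find_cliques yields all maximal cliques; pick the largest one.
    clique = max(nx.find_cliques(graph), key=len)
    return updates[clique].mean(axis=0), sorted(clique)
```

A plausible reason for switching mechanisms at the 50% boundary is that distance-based weighting can be steered by a colluding majority, whereas a clique requires every selected pair of clients to be mutually consistent.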
