Proceedings Paper

LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3579856.3590334

Keywords

Federated learning; membership inference attack; privacy leakage

Abstract
Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In poisoning MIA, adversaries maliciously manipulate the local updates on selected samples and share the gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on samples locally, an adversary can distinguish whether the attacked sample is a training sample by observing the change in the sample's prediction. This type of attack exacerbates traditional passive MIA, yet the defense mechanisms remain largely unexplored. In this work, we first investigate the effectiveness of the existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient in mitigating poisoning MIA, as it targets specific victim samples and has minimal impact on model performance, unlike general poisoning. Thus, we propose a new client-side defense mechanism, called LoDen, which leverages the clients' unique ability to detect any suspicious privacy attacks. We theoretically quantify the membership information leaked to the poisoning MIA and provide a bound for this leakage in LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves a 0% missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to 0% in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.
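The attack signal described in the abstract can be illustrated with a toy logistic model: the adversary's poisoned update suppresses the global model's confidence on a victim sample, and the confidence recovers only if an honest client trains on that sample, leaking membership. The sketch below is a minimal, hypothetical simulation of this dynamic; all function names, learning rates, and the detection threshold in `loden_flag` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical toy sketch of the poisoning-MIA signal; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    """Confidence of a linear binary classifier on sample x."""
    return sigmoid(w @ x)

# One victim sample (label 1) and a shared global model.
x_victim = rng.normal(size=5)
w_global = rng.normal(size=5)

def poison(w, x, y=1.0, lr=2.0):
    """Adversary's poisoned update: gradient *ascent* on the victim's
    logistic loss, suppressing confidence on that one sample only."""
    p = predict(w, x)
    grad = (p - y) * x       # d(logistic loss)/dw for sample (x, y)
    return w + lr * grad     # ascend the loss -> hurt this sample

def local_train(w, X, y, lr=0.5, steps=20):
    """Honest client's ordinary local gradient descent."""
    w = w.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_poisoned = poison(w_global, x_victim)

# If the victim sample sits in an honest client's training set, local
# descent pushes its confidence back up; otherwise it stays suppressed.
# The attacker infers membership from that difference in recovery.
X_local = np.vstack([x_victim, rng.normal(size=(4, 5))])
y_local = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
w_recovered = local_train(w_poisoned, X_local, y_local)

def loden_flag(w_prev, w_recv, x, threshold=0.2):
    """LoDen-style client-side check (illustrative): flag a local sample
    whose confidence under the newly received global model dropped
    sharply, suggesting a targeted poisoning-MIA attempt."""
    return predict(w_prev, x) - predict(w_recv, x) > threshold
```

Because only the client holds the victim sample, this check runs purely on local data, which is the intuition behind a client-side (rather than server-side) defense.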

