Journal
PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023
Pages 122-135
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3579856.3590334
Keywords
Federated learning; membership inference attack; privacy leakage
Federated learning (FL) is vulnerable to poisoning membership inference attacks (MIA), and existing server-side robust aggregation algorithms (AGRs) are insufficient in mitigating these attacks. Therefore, a new client-side defense mechanism called LoDen is proposed to detect suspicious privacy attacks and mitigate poisoning MIA. Experimental evaluation shows that LoDen consistently achieves a 0% missing rate in detecting poisoning MIA and reduces the success rate of these attacks to 0% in most cases.
Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In poisoning MIA, an adversary maliciously manipulates the local updates on selected samples and shares the resulting gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on their samples locally, the adversary can distinguish whether an attacked sample is a training sample by observing how the sample's prediction changes across rounds. This type of attack exacerbates traditional passive MIA, yet defense mechanisms remain largely unexplored. In this work, we first investigate the effectiveness of existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient in mitigating poisoning MIA, because the attack targets specific victim samples and, unlike general poisoning, has minimal impact on overall model performance. Thus, we propose a new client-side defense mechanism, called LoDen, which leverages the clients' unique ability to detect suspicious privacy attacks. We theoretically quantify the membership information leaked to the poisoning MIA and provide a bound for this leakage in LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves a 0% missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to 0% in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.
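The attack dynamic described above can be illustrated with a toy sketch. This is not the paper's implementation: the logistic model, the two-client FedAvg setup, and all function names here are assumptions made purely for illustration. The adversary performs gradient ascent on a chosen target sample; if that sample sits in an honest client's training set, the client's local gradient descent keeps repairing the target's loss each round, and the adversary can read a membership signal off the target's prediction trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad(w, X, y):
    """Mean gradient of the logistic loss over the batch (X, y)."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def run_rounds(target_is_member, rounds=5, lr=0.5, d=5, n=32):
    # Honest client's local data: a simple linearly separable task.
    X = rng.normal(size=(n, d))
    y = (X[:, 0] > 0).astype(float)
    if target_is_member:
        x_t, y_t = X[0], y[0]          # target is in the honest training set
    else:
        x_t = rng.normal(size=d)       # target never seen by the honest client
        y_t = float(x_t[0] > 0)
    w = np.zeros(d)                    # shared global model
    confs = []
    for _ in range(rounds):
        # Adversary's "update": gradient *ascent* on the target sample only
        # (the poisoning step that degrades the target's prediction).
        g_adv = -logistic_grad(w, x_t[None], np.array([y_t]))
        # Honest client: ordinary gradient descent on its local data.
        g_hon = logistic_grad(w, X, y)
        # Server: plain FedAvg of the two submitted updates.
        w -= lr * (g_adv + g_hon) / 2.0
        # The adversary observes the target's prediction each round.
        confs.append(float(sigmoid(x_t @ w)))
    return confs

member_confs = run_rounds(target_is_member=True)
nonmember_confs = run_rounds(target_is_member=False)
print(member_confs, nonmember_confs)
```

In the paper's setting the membership signal is this per-round trajectory: a member's prediction keeps recovering because an honest client trains on it, while a non-member's does not. The sketch omits LoDen itself and the robust aggregation baselines, which the abstract does not describe in enough detail to reproduce.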