Journal
Concurrency and Computation: Practice and Experience
Volume 34, Issue 7
Publisher
WILEY
DOI: 10.1002/cpe.5906
Keywords
federated learning; generative adversarial networks; model security; poisoning attacks
This article examines the privacy protections of federated learning, a collaborative learning framework widely used in the Internet of Things (IoT), and the poisoning attacks that threaten it. A defense mechanism based on generative adversarial networks is proposed, which generates auditing data to detect and mitigate poisoning attacks.
In the age of the Internet of Things (IoT), large numbers of sensors and edge devices are deployed in various application scenarios; therefore, collaborative learning is widely used in IoT to implement crowd intelligence by inviting multiple participants to complete a training task. As a collaborative learning framework, federated learning is designed to preserve user data privacy: participants jointly train a global model without uploading their private training data to a third-party server. Nevertheless, federated learning is under the threat of poisoning attacks, in which adversaries upload malicious model updates to contaminate the global model. To detect and mitigate poisoning attacks in federated learning, we propose a poisoning defense mechanism that uses generative adversarial networks to generate auditing data during the training procedure and removes adversaries by auditing their model accuracy. Experiments conducted on two well-known datasets, MNIST and Fashion-MNIST, show that federated learning is vulnerable to poisoning attacks and that the proposed defense method can detect and mitigate them.
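To make the auditing idea concrete, the following is a minimal Python/PyTorch sketch of the accuracy-audit step the abstract describes, under stated assumptions: a FedAvg-style server, and server-side auditing data that stand in for the GAN-generated samples (the GAN itself is abstracted away here). The names audit_accuracy, filtered_fedavg, and ACCURACY_THRESHOLD are illustrative, not from the paper.

# Hypothetical sketch of accuracy auditing in federated aggregation.
# Assumes the server already holds labeled auditing data (in the paper,
# produced by a GAN during training); any labeled batch works here.
import copy
import torch
import torch.nn as nn

ACCURACY_THRESHOLD = 0.5  # hypothetical cutoff separating honest from poisoned updates

def audit_accuracy(model: nn.Module,
                   audit_images: torch.Tensor,
                   audit_labels: torch.Tensor) -> float:
    """Evaluate a candidate model on the server-side auditing data."""
    model.eval()
    with torch.no_grad():
        preds = model(audit_images).argmax(dim=1)
    return (preds == audit_labels).float().mean().item()

def filtered_fedavg(global_model: nn.Module,
                    client_states: list[dict],
                    audit_images: torch.Tensor,
                    audit_labels: torch.Tensor) -> nn.Module:
    """Aggregate only the client updates that pass the accuracy audit."""
    accepted = []
    for state in client_states:
        candidate = copy.deepcopy(global_model)
        candidate.load_state_dict(state)
        if audit_accuracy(candidate, audit_images, audit_labels) >= ACCURACY_THRESHOLD:
            accepted.append(state)  # sender treated as honest
        # otherwise the update is flagged as poisoned and excluded

    if not accepted:
        return global_model  # keep the previous global model if every update fails

    # Plain parameter averaging over the accepted updates (FedAvg step).
    avg_state = {
        key: torch.stack([s[key].float() for s in accepted]).mean(dim=0)
        for key in accepted[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model

The design point this sketch captures is that auditing happens on the server against synthetic data, so malicious updates (for example, label-flipped ones) can be rejected by their low audit accuracy without the server ever collecting real client data, which preserves the privacy goal of federated learning.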