Article

Clean-label poisoning attacks on federated learning for IoT

Journal

EXPERT SYSTEMS
Volume 40, Issue 5

Publisher

WILEY
DOI: 10.1111/exsy.13161

Keywords

clean label attack; edge-cloud collaboration; federated learning; IoT


Abstract

Federated Learning (FL) is well suited to distributed edge-collaboration scenarios in the Internet of Things (IoT). It provides data security and privacy, which is why it is widely used in IoT applications such as the Industrial IoT (IIoT). However, recent research shows that the federated learning framework is vulnerable to poisoning attacks when an adversary mounts an active attack, and existing backdoor attack methods are easily detected by defence mechanisms. To address this challenge, we focus on clean-label attacks against edge-cloud synergistic FL. Unlike common backdoor attacks, we keep the attack concealed by adding a small perturbation and realizing the clean-label attack through the cosine similarity between the gradient of the adversarial loss and the gradient of the normal training loss. To improve the attack success rate and robustness, the attack is launched when the global model is about to converge. The experimental results verify that 1% of poisoned data can make the attack succeed with high probability. Our method remains stealthy while performing the model poisoning attack: the average Peak Signal-to-Noise Ratio (PSNR) of the poisoned images exceeds 30 dB, and the average Structural SIMilarity (SSIM) is close to 0.93. Most importantly, our attack can bypass Byzantine aggregation defences.
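The core mechanism described in the abstract, perturbing an image while keeping its true label so that the adversarial-loss gradient stays aligned (by cosine similarity) with the normal training-loss gradient, can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' code: the model, the perturbation budget `epsilon`, the step count, the alignment weight, and the use of Adam are all assumptions, and the convergence-timed trigger and PSNR/SSIM measurement from the paper are omitted.

```python
import torch
import torch.nn.functional as F


def flat_grad(loss, params, create_graph=False):
    """Concatenate the gradients of `loss` w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params, create_graph=create_graph,
                                retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


def craft_clean_label_sample(model, x, y_true, y_target,
                             epsilon=8 / 255, steps=40, lr=0.01,
                             align_weight=1.0):
    """Return a perturbed copy of `x` that keeps its true label (clean label)
    while its adversarial-loss gradient stays aligned, by cosine similarity,
    with the gradient of the normal training loss. Hyperparameters are
    illustrative assumptions, not values from the paper."""
    params = [p for p in model.parameters() if p.requires_grad]
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        adv_loss = F.cross_entropy(model(x + delta), y_target)   # attacker objective
        clean_loss = F.cross_entropy(model(x), y_true)           # benign objective

        # Cosine similarity between the two loss gradients over the model parameters.
        g_adv = flat_grad(adv_loss, params, create_graph=True)
        g_clean = flat_grad(clean_loss, params).detach()
        cos = F.cosine_similarity(g_adv, g_clean, dim=0)

        # Pursue the adversarial objective while rewarding gradient alignment,
        # so the poisoned update resembles a benign one.
        (adv_loss - align_weight * cos).backward()
        optimizer.step()

        # Keep the perturbation small so the poisoned image stays visually clean
        # (high PSNR / SSIM, as reported in the abstract).
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (x + delta).detach()
```

The alignment reward is one plausible way to realize the cosine-similarity criterion the abstract mentions; the paper may instead apply a hard threshold on the similarity or a different perturbation budget.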
