Article

D2MIF: A Malicious Model Detection Mechanism for Federated-Learning-Empowered Artificial Intelligence of Things

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 3, Pages 2141-2151

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JIOT.2021.3081606

Keywords

Data models; Servers; Collaborative work; Training; Computational modeling; Industrial Internet of Things; Biological system modeling; Artificial Intelligence of Things (AIoT); federated learning; isolation forest (iforest); poisoning attack; security

Abstract
Artificial Intelligence of Things (AIoT), a fusion of artificial intelligence (AI) and the Internet of Things (IoT), has become a new trend for realizing the intelligentization of Industry 4.0, and data privacy and security are key to its successful implementation. To enhance data privacy protection, federated learning has been introduced into AIoT, allowing participants to jointly train AI models without sharing their private data. However, in federated learning, malicious participants may upload malicious models by launching poisoning attacks, jeopardizing the convergence and accuracy of the global model. To solve this problem, we propose a malicious model detection mechanism based on the isolation forest (iforest), named D2MIF, for federated-learning-empowered AIoT. In D2MIF, an iforest is constructed to compute a malicious score for each model uploaded by the corresponding participant; models are then filtered out if their malicious scores exceed a threshold, which is dynamically adjusted using reinforcement learning (RL). Validation experiments are conducted on two public datasets, Mnist and Fashion_Mnist. The results show that the proposed D2MIF can effectively detect malicious models and significantly improve global model accuracy in federated-learning-empowered AIoT.
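The detection step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's `IsolationForest` as the iforest, treats each participant's uploaded model as a flattened weight vector, and replaces the RL-adjusted threshold with a fixed cutoff (an assumption made here for brevity).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_malicious_updates(updates, threshold=0.5, seed=0):
    """Score flattened model updates with an isolation forest and keep
    only those whose malicious score is at or below `threshold`.

    updates   : (n_participants, n_params) array of flattened model weights.
    threshold : fixed cutoff standing in for the paper's RL-adjusted one.
    Returns (kept_indices, malicious_scores).
    """
    forest = IsolationForest(n_estimators=100, random_state=seed)
    forest.fit(updates)
    # score_samples returns values in roughly [-1, 0]; negating yields an
    # anomaly score where larger means more isolated, i.e. more suspicious.
    malicious_scores = -forest.score_samples(updates)
    kept = np.where(malicious_scores <= threshold)[0]
    return kept, malicious_scores

# Hypothetical round: nine benign updates clustered near zero and one
# poisoned update with very large weights.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.01, size=(9, 20))
poisoned = np.full((1, 20), 5.0)
kept, scores = filter_malicious_updates(np.vstack([benign, poisoned]))
```

The poisoned update sits far from the benign cluster, so the iforest isolates it in few splits and assigns it the highest malicious score; only the surviving updates would be aggregated into the global model.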
