Article

Untargeted white-box adversarial attack to break into deep leaning based COVID-19 monitoring face mask detection system

Journal

Publisher

SPRINGER
DOI: 10.1007/s11042-023-15405-x

Keywords

COVID-19; Adversarial example; Face mask recognition; Adversarial attacks; Deep learning; Robustness


The face mask detection system has been a valuable tool in combating COVID-19 by limiting its rapid transmission. This article demonstrated that present deep learning-based face mask detection systems are vulnerable to adversarial attacks, and we proposed a framework for a robust face mask detection system that resists such attacks. We first developed a face mask detection system by fine-tuning the MobileNetV2 model and training it on a custom-built dataset. The model performed exceptionally well, achieving 95.83% accuracy on test data. Then, the model's performance was assessed on adversarial images generated by the fast gradient sign method (FGSM). The FGSM attack reduced the model's classification accuracy from 95.83% to 14.53%, indicating that the attack severely degraded its performance. Finally, we showed that the proposed robust framework enhanced the model's resistance to adversarial attacks. Although the robust model's accuracy on unseen clean data dropped from 95.83% to 92.79%, it performed exceptionally well on adversarial data, improving accuracy from 14.53% to 92%. We expect our research to heighten awareness of adversarial attacks on COVID-19 monitoring systems and to inspire efforts to protect healthcare systems from similar attacks.
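The FGSM attack mentioned in the abstract perturbs each input pixel by a small step `eps` in the direction of the sign of the loss gradient: x_adv = x + eps * sign(∇_x loss). A minimal PyTorch sketch is below; the tiny linear classifier and the `eps` value are illustrative stand-ins (the paper itself attacks a fine-tuned MobileNetV2), not the authors' code.

```python
import torch
import torch.nn as nn


def fgsm_attack(model, x, y, eps=0.03):
    """Generate FGSM adversarial examples: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step, then clip back to the valid image range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


# Illustrative stand-in for a mask/no-mask classifier (2 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 2))
x = torch.rand(4, 3, 8, 8)       # batch of "images" scaled to [0, 1]
y = torch.randint(0, 2, (4,))    # labels: 0 = mask, 1 = no mask
x_adv = fgsm_attack(model, x, y, eps=0.03)
```

A common defense consistent with the robust framework described above is adversarial training: augmenting the training set with examples produced by `fgsm_attack` so the model learns to classify them correctly, typically at a small cost in clean-data accuracy (here, 95.83% to 92.79%).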

