Article

Adversarial Attacks Against Face Recognition: A Comprehensive Study

Journal

IEEE ACCESS
Volume 9, Pages 92735-92756

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2021.3092646

Keywords

Face recognition; Deep learning; Training; Computer architecture; Standards; Three-dimensional displays; Taxonomy; Biometrics; Adversarial attacks; Adversarial perturbation


Despite the reliable verification performance of face recognition systems, they have shown vulnerability to adversarial attacks, prompting the development of new countermeasures. Existing attack and defense methods are classified based on different criteria, with a focus on the challenges and potential research directions ahead.
Face recognition (FR) systems have demonstrated reliable verification performance, suggesting suitability for real-world applications ranging from photo tagging in social media to automated border control (ABC). In an advanced FR system with a deep learning-based architecture, however, improving recognition accuracy alone is not sufficient; the system should also withstand potential attacks. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible, or perceptible but natural-looking, adversarial input images that drive the model to incorrect output predictions. In this article, we present a comprehensive survey of adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them. Further, we propose a taxonomy of existing attack and defense methods based on different criteria. We compare attack methods by orientation, evaluation process, and attributes, and defense approaches by category. Finally, we discuss the challenges and potential research directions.
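The vulnerability described above can be sketched with the classic fast gradient sign method (FGSM), in which an input is nudged along the sign of the loss gradient to flip a model's prediction. The toy logistic "verifier" below, its weights, input, and perturbation budget are all illustrative assumptions, not taken from the surveyed systems:

```python
import math

# Hypothetical toy model: a single logistic unit standing in for a face
# verifier's match score. All values here are assumed for illustration.
w = (1.0, -1.0)   # model weights (assumed)
b = 0.0
x = (0.5, 0.3)    # a "genuine match" input (assumed)
eps = 0.25        # perturbation budget (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(v):
    # match score p = sigmoid(w . v + b)
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)) + b)

def predict(v):
    return int(score(v) >= 0.5)

# FGSM step: for cross-entropy loss with true label y = 1,
# d(loss)/dx_i = (p - 1) * w_i; perturb x by eps * sign(gradient).
p = score(x)
grad = [(p - 1.0) * wi for wi in w]
x_adv = tuple(xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad))

print(predict(x), predict(x_adv))  # 1 0: a small perturbation flips the prediction
```

Real attacks on deep FR models follow the same pattern but backpropagate through a deep network rather than a single logistic unit, and constrain the perturbation to stay imperceptible or natural-looking.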

