Article

Invisible Adversarial Attacks on Deep Learning-Based Face Recognition Models

Journal

IEEE ACCESS
Volume 11, Issue -, Pages 51567-51577

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2023.3279488

Keywords

Face recognition; Perturbation methods; Deep learning; Image segmentation; Facial features; Neural networks; Adversarial attack; deep learning; face recognition

Abstract

Deep learning technology has grown rapidly in recent years and achieved tremendous success in computer vision, and many deep learning systems are now used in daily life, such as face recognition. However, as human life increasingly relies on deep neural networks, their potential harms are being revealed, particularly with respect to security. A growing body of work has shown that existing deep learning-based face recognition models are vulnerable to adversarial examples, causing misjudgments that could have serious consequences. Existing adversarial face images, however, are rather easy to identify with the naked eye, which makes it difficult for attackers to mount practical attacks on face recognition systems. This paper proposes a method, based on facial landmark detection and superpixel segmentation, for generating adversarial face images that are indistinguishable from the source images. First, the eyebrow, eye, nose, and mouth regions are extracted from the face image using a facial landmark detection algorithm. Next, a superpixel segmentation algorithm groups the extracted landmarks together with neighboring pixels of similar values. Finally, the segmented regions are used as masks that guide existing attack methods to insert adversarial noise only within the masked areas. Experimental results show that the method generates adversarial samples with high Structural Similarity Index Measure (SSIM) values at the cost of only a small reduction in attack success rate. In addition, to simulate real-world physical attacks, printouts of the adversarial images generated by the proposed method are presented to the face recognition system through a camera and still fool the face recognition model, indicating that the proposed method can successfully carry out adversarial attacks on face recognition systems in real-world scenarios.
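
The abstract outlines a three-step pipeline (landmark extraction, superpixel grouping, and mask-restricted noise insertion) but no implementation is given here. The Python sketch below is only an illustration of that pipeline under stated assumptions: it uses dlib's 68-point landmark predictor (an external model file), scikit-image's SLIC superpixels, and a one-step FGSM perturbation as a stand-in for the "existing attack methods" the abstract mentions; face_model, the epsilon value, and the SLIC parameters are placeholders, not the authors' implementation.

import dlib
import numpy as np
import torch
import torch.nn.functional as F
from skimage.segmentation import slic
from skimage.metrics import structural_similarity as ssim

# dlib's face detector and 68-point landmark predictor; the .dat model file
# must be downloaded separately (assumption, not provided by the paper).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_mask(image_rgb, n_segments=300, compactness=10.0):
    """Binary mask covering the superpixels that contain eyebrow, eye,
    nose, or mouth landmarks (dlib points 17-67)."""
    faces = detector(image_rgb, 1)
    if not faces:
        raise ValueError("no face detected")
    shape = predictor(image_rgb, faces[0])
    points = [(shape.part(i).x, shape.part(i).y) for i in range(17, 68)]
    # SLIC groups neighbouring pixels with similar values into superpixels;
    # keep every superpixel that contains at least one landmark point.
    segments = slic(image_rgb, n_segments=n_segments,
                    compactness=compactness, start_label=0)
    h, w = segments.shape
    keep = {segments[y, x] for (x, y) in points if 0 <= x < w and 0 <= y < h}
    return np.isin(segments, list(keep)).astype(np.float32)

def masked_fgsm(face_model, image_rgb, true_label, mask, eps=8 / 255):
    """One-step FGSM whose perturbation is confined to the masked region.
    `face_model` is any differentiable classifier returning logits."""
    x_np = image_rgb.astype(np.float32).transpose(2, 0, 1)[None] / 255.0
    x = torch.from_numpy(x_np).requires_grad_(True)
    loss = F.cross_entropy(face_model(x), torch.tensor([true_label]))
    loss.backward()
    m = torch.from_numpy(mask)[None, None]        # broadcast over channels
    x_adv = (x + eps * x.grad.sign() * m).clamp(0.0, 1.0).detach()
    return (x_adv[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8)

# Usage sketch: noise stays inside the landmark superpixels, so SSIM between
# the source and adversarial images remains high.
# adv = masked_fgsm(face_model, img, true_label, landmark_mask(img))
# print(ssim(img, adv, channel_axis=-1))

Because the perturbation is confined to the superpixels around facial landmarks, the SSIM between the source and adversarial images stays high, which is the invisibility property the paper targets.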
