4.7 Article

Adversarial color projection: A projector-based physical-world attack to DNNs

Related references

Note: Only some of the references are listed.
Article Computer Science, Artificial Intelligence

Adaptive momentum variance for attention-guided sparse adversarial attacks

Chao Li et al.

Summary: Deep neural networks have been known for several years to be vulnerable to adversarial examples. Existing transfer-based methods show weak transferability to black-box models, and existing sparse attacks mainly focus on the number of attacked pixels without restricting the magnitude of the perturbations. To address these issues, this study proposes a transfer-based sparse attack method that improves transferability through adaptive momentum variance and a perturbation refinement mechanism, and uses a class activation map to explore the relationship between the number of perturbed pixels and attack performance.

PATTERN RECOGNITION (2023)
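As a rough illustration of the pixel-selection idea summarized above, the following PyTorch sketch restricts a momentum-based iterative attack to the most salient pixels of a class activation map. The adaptive-momentum-variance and perturbation-refinement details are the paper's own and are not reproduced here; the precomputed cam, the pixel budget k, and the step sizes are assumptions made purely for illustration.

    # Hedged sketch (PyTorch): CAM-guided sparse perturbation with a simple momentum update.
    import torch

    def cam_sparse_attack(model, x, y, cam, k=500, eps=8/255, steps=10, mu=1.0):
        # cam: precomputed class activation map with the same spatial size as x (H, W);
        # how it is obtained (e.g. Grad-CAM) is assumed and not shown here.
        flat = cam.flatten()
        topk = flat.topk(k).indices                        # indices of the k most salient pixels
        mask = torch.zeros_like(flat).scatter_(0, topk, 1.0).view_as(cam)
        mask = mask.unsqueeze(0).unsqueeze(0)              # broadcast over batch and channels

        x_adv, g = x.clone().detach(), torch.zeros_like(x)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            g = mu * g + grad / grad.abs().mean()          # momentum accumulation (simplified)
            x_adv = x_adv.detach() + (eps / steps) * g.sign() * mask
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # keep perturbation bounded
        return x_adv.detach()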

Article Computer Science, Artificial Intelligence

Robust Physical-World Attacks on Face Recognition

Xin Zheng et al.

Summary: This study investigates the impact of sticker-based physical attacks on face recognition and proposes a novel robust physical attack framework to simulate adversarial stickers under different physical-world conditions. The Curriculum Adversarial Attack algorithm gradually adapts to environmental variations and improves the attack performance. A standardized testing protocol is constructed for fair evaluation.

PATTERN RECOGNITION (2023)
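The curriculum idea summarized above can be illustrated, very loosely, as an optimization loop whose simulated physical-world variations grow harder over time. In this sketch apply_sticker and random_transform are hypothetical placeholders, and the impersonation objective is only one possible choice; none of this reproduces the paper's algorithm.

    # Hedged sketch: curriculum over physical-transformation severity for a sticker attack.
    import torch

    def curriculum_sticker_attack(face_model, face, target_emb, sticker,
                                  stages=5, steps_per_stage=100, lr=0.01):
        sticker = sticker.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([sticker], lr=lr)
        for stage in range(stages):
            severity = (stage + 1) / stages                # gradually harder environmental variation
            for _ in range(steps_per_stage):
                stickered = apply_sticker(face, sticker.clamp(0, 1))   # hypothetical: paste sticker
                varied = random_transform(stickered, severity)         # hypothetical: lighting/pose/blur
                emb = face_model(varied)
                sim = torch.nn.functional.cosine_similarity(emb, target_emb).mean()
                opt.zero_grad()
                (-sim).backward()                          # impersonation: maximize similarity
                opt.step()
        return sticker.detach().clamp(0, 1)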

Article Computer Science, Artificial Intelligence

Boosting transferability of physical attack against detectors by redistributing separable attention

Yu Zhang et al.

Summary: Research on attack transferability is important for guiding adversarial attacks crafted without prior knowledge of the target model. However, maintaining good transferability of adversarial examples, particularly for black-box attacks in the physical world, remains challenging. To enhance the black-box transferability of physical attacks on object detectors, a novel adversarial learning method is proposed that produces adversarial patches by redistributing separable attention maps.

PATTERN RECOGNITION (2023)

Article Computer Science, Artificial Intelligence

Bayesian evolutionary optimization for crafting high-quality adversarial examples with limited query budget

Chao Li et al.

Summary: Due to the importance of security, adversarial attacks in deep learning, especially black-box adversarial attacks, which mimic real-world scenarios, have gained popularity. Query-based methods are commonly used for black-box attacks but typically require an excessive number of queries. To overcome this, a Bayesian evolutionary optimization (BEO) based black-box attack method using differential evolution is proposed, employing a Gaussian process model and adaptive acquisition functions. Experimental results show that this method can effectively generate high-quality adversarial examples using only 200 queries.

APPLIED SOFT COMPUTING (2023)
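A minimal sketch of the general pattern described above: a Gaussian-process surrogate decides which evolutionary candidates are worth spending real queries on. The encoding of perturbations as low-dimensional vectors, the query_model interface, and all hyper-parameters are assumptions made for illustration, not the paper's implementation.

    # Hedged sketch: GP-assisted evolutionary black-box attack under a query budget.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def gp_evolutionary_attack(query_model, dim=32, budget=200, pop=10, kappa=2.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(-1, 1, size=(pop, dim))            # initial population of perturbation codes
        y = np.array([query_model(x) for x in X])          # attack loss from the target model (real queries)
        queries = pop
        while queries < budget:
            gp = GaussianProcessRegressor().fit(X, y)
            # differential-evolution style proposals from the current population
            idx = rng.choice(len(X), size=(pop, 3))
            cand = X[idx[:, 0]] + 0.5 * (X[idx[:, 1]] - X[idx[:, 2]])
            mu, sigma = gp.predict(cand, return_std=True)
            best = cand[np.argmin(mu - kappa * sigma)]     # lower-confidence-bound acquisition
            X = np.vstack([X, best])
            y = np.append(y, query_model(best))            # spend one real query on the chosen candidate
            queries += 1
        return X[np.argmin(y)]                             # perturbation code with the lowest attack loss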

Article Computer Science, Theory & Methods

Generating Adversarial Images in Quantized Domains

Benoit Bonnet et al.

Summary: This paper proposes a method dedicated to quantizing adversarial perturbations while minimizing the quantization error and keeping the image adversarial after quantization. The method operates in both the spatial and JPEG domains with low complexity.

IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (2022)
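The core difficulty addressed by the paper above can be shown with a naive baseline: round the continuous adversarial image to 8-bit pixel values and, if adversariality is lost, push the continuous example slightly further before re-quantizing. This is only an illustration of the problem setting, not the proposed method.

    # Hedged sketch: naive quantization-with-check loop for an adversarial image (batch size 1).
    import torch

    def quantize_keep_adversarial(model, x_adv_float, y_true, max_tries=10):
        x_adv_float = x_adv_float.detach().clone()
        x_q = (x_adv_float * 255).round() / 255            # nearest 8-bit quantization
        for _ in range(max_tries):
            if model(x_q).argmax(dim=1).item() != y_true:   # still misclassified: done
                return x_q
            # otherwise push the continuous example a bit further before re-quantizing
            x_adv_float.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv_float), torch.tensor([y_true]))
            grad, = torch.autograd.grad(loss, x_adv_float)
            x_adv_float = (x_adv_float + (1 / 255) * grad.sign()).clamp(0, 1).detach()
            x_q = (x_adv_float * 255).round() / 255
        return x_q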

Article Computer Science, Artificial Intelligence

Query-Efficient Black-Box Adversarial Attacks Guided by a Transfer-Based Prior

Yinpeng Dong et al.

Summary: This paper focuses on adversarial attacks in the black-box setting, where the adversary needs to generate adversarial examples without access to the gradients of the target model. Previous methods either approximated the true gradient using the transfer gradient of a surrogate white-box model or relied on model queries for feedback. However, these methods suffer from low attack success rates or poor query efficiency due to the difficulty of estimating gradients in a high-dimensional input space with limited information. To address this issue, this paper proposes two prior-guided random gradient-free algorithms based on biased sampling and gradient averaging. These methods integrate the advantages of a transfer-based prior given by the gradient of a surrogate model and query information simultaneously, resulting in higher attack success rates with fewer queries compared to existing state-of-the-art methods.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2022)
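A minimal sketch of gradient estimation with a transfer-based prior, in the spirit of the biased sampling and gradient averaging described above: random search directions are mixed with the surrogate gradient, and finite-difference queries to the black-box loss are averaged. The mixing weight lam and the downstream attack step that would consume this estimate are simplifications, not the paper's exact algorithm.

    # Hedged sketch: prior-guided random gradient-free estimation of a black-box gradient.
    import torch

    def prior_guided_gradient(blackbox_loss, x, surrogate_grad, n_samples=20, sigma=1e-3, lam=0.5):
        v = surrogate_grad / (surrogate_grad.norm() + 1e-12)   # normalized transfer-based prior
        est = torch.zeros_like(x)
        base = blackbox_loss(x)                                 # one query for the reference value
        for _ in range(n_samples):
            u = torch.randn_like(x)
            u = u / u.norm()
            d = (lam ** 0.5) * v + ((1 - lam) ** 0.5) * u       # biased sampling around the prior
            d = d / d.norm()
            est += (blackbox_loss(x + sigma * d) - base) / sigma * d   # finite-difference term
        return est / n_samples                                  # averaged gradient estimate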

Article Computer Science, Theory & Methods

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems

Bao Gia Doan et al.

Summary: Deep neural networks are vulnerable to attacks from adversarial inputs and Trojans. This study introduces a class of spatially bounded, physically realizable adversarial examples called Universal NaTuralistic adversarial paTches (TnTs). TnTs are highly effective and universal, allowing an attacker to exert a greater level of control and deploy patches in the physical world. Extensive experiments demonstrate the realistic threat from TnTs and their robustness against state-of-the-art deep neural networks.

IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (2022)

Article Computer Science, Artificial Intelligence

Cross-database and cross-attack Iris presentation attack detection using micro stripes analyses

Meiling Fang et al.

Summary: This paper proposes a novel framework for detecting iris presentation attacks, especially for detecting contact lenses, and demonstrates superior performance in three databases through in-depth experimental evaluation. The method effectively differentiates between attack and bona fide presentations, with better generalizability compared to other methods.

IMAGE AND VISION COMPUTING (2021)

Article Computer Science, Information Systems

An improved ShapeShifter method of generating adversarial examples for physical attacks on stop signs against Faster R-CNNs

Shize Huang et al.

Summary: Researchers proposed an improved ShapeShifter method that generates adversarial examples against Faster R-CNN object detectors by adding white Gaussian noise to the optimization function, successfully attacking both English and Chinese stop signs with improved robustness.

COMPUTERS & SECURITY (2021)
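The noise-augmented optimization described above can be sketched as an expectation-over-transformation loop that injects white Gaussian noise into each rendered scene before computing the detector loss. Here paste_on_sign, random_viewpoint, and detector_loss are hypothetical placeholders rather than the authors' code, and the hyper-parameters are assumptions.

    # Hedged sketch: patch optimization with white Gaussian noise injected each iteration.
    import torch

    def noisy_patch_attack(detector_loss, sign_img, patch, steps=500, lr=0.01, noise_std=0.03):
        patch = patch.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            scene = random_viewpoint(paste_on_sign(sign_img, patch.clamp(0, 1)))  # physical transforms
            scene = scene + noise_std * torch.randn_like(scene)                   # white Gaussian noise
            loss = detector_loss(scene.clamp(0, 1))       # e.g. stop-sign confidence from Faster R-CNN
            opt.zero_grad()
            loss.backward()
            opt.step()
        return patch.detach().clamp(0, 1)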

Article Computer Science, Theory & Methods

Adversarial Adaptive Neighborhood With Feature Importance-Aware Convex Interpolation

Qian Li et al.

Summary: This paper addresses issues in optimization-based adversarial attacks and defenses by introducing a new method, FeaCP, which generates more explainable adversarial samples from correctly predicted samples, with the aim of repairing bugs in pre-trained deep learning models. The method accounts for the individual importance of feature components and limits the search space to an adaptive neighborhood when constructing a path toward the approximated decision boundary. Experimental results demonstrate the competitive performance of FeaCP on various datasets and networks.

IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (2021)
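A very simplified sketch of convex interpolation toward a correctly classified anchor sample, with per-feature importance weights controlling how far each feature moves; the plain bisection shown here stands in for, and does not reproduce, the paper's FeaCP procedure. The importance vector and the choice of anchor are assumed to be given.

    # Hedged sketch: importance-weighted convex interpolation toward the decision boundary.
    import torch

    def importance_weighted_interpolation(model, x, x_anchor, importance, y_anchor, steps=20):
        # importance: per-feature weights in [0, 1]; more important features move more
        lo, hi = 0.0, 1.0
        for _ in range(steps):                              # bisection toward the decision boundary
            t = (lo + hi) / 2
            x_mid = x + t * importance * (x_anchor - x)      # feature-wise convex interpolation
            if model(x_mid).argmax(dim=1).item() == y_anchor:  # crossed into the anchor's class
                hi = t
            else:
                lo = t
        return x + hi * importance * (x_anchor - x)          # sample just past the boundary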

Article Computer Science, Artificial Intelligence

Face presentation attack detection in mobile scenarios: A comprehensive evaluation

Shan Jia et al.

IMAGE AND VISION COMPUTING (2020)

Article Computer Science, Artificial Intelligence

Principal Component Adversarial Example

Yonggang Zhang et al.

IEEE TRANSACTIONS ON IMAGE PROCESSING (2020)