Article

Crafting transferable adversarial examples via contaminating the salient feature variance

Journal

INFORMATION SCIENCES
Volume 644

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.119273

Keywords

Deep neural network; Adversarial transferability; Salient feature variance attack; Combined feature enhancement transformation

Abstract

Adversarial attacks play a vital role in the development of deep learning techniques: they evaluate the robustness of deep neural networks (DNNs) and help explain their decision mechanisms. Recently, feature-level attacks have been proposed that contaminate the internal feature maps of the source model at each iteration, providing a new way to produce transferable adversarial examples. In this paper, we uncover two neglected problems behind current feature-level attacks and propose the Salient Feature Variance Attack (SFVA). Concretely, we first apply a Combined Feature Enhancement Transformation (CFET) to copies of the clean image to estimate the optimal feature weights. We then construct an efficient objective based on the variance of the salient features and adopt the classical MI-FGSM attack to add adversarial noise to the clean image along the gradient direction. Moreover, we make it possible to combine the ensemble strategy with feature-level attacks. Extensive experiments on the ImageNet dataset confirm the superiority of SFVA, which achieves state-of-the-art performance among feature-level attacks. Furthermore, we evaluate the robustness of a practical online model with SFVA; its 90% attack success rate reveals the worrying fact that real-world deployed models face serious security threats.
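The abstract outlines the attack pipeline in enough detail to sketch its shape. Below is a minimal PyTorch sketch of how a feature-variance objective of this kind can be wrapped in an MI-FGSM loop. It is based only on the description above: the input transform standing in for CFET, the exact form of the variance loss, and all names and hyperparameters (sfva_attack, feature_layer, n_copies, and so on) are illustrative assumptions, not the authors' implementation.

    import torch

    def sfva_attack(model, feature_layer, x, eps=16/255, steps=10, mu=1.0,
                    n_copies=4, transform=None):
        # Sketch of a salient-feature-variance attack in the spirit of SFVA.
        # `feature_layer` is the module whose output feature map is attacked;
        # `transform` stands in for the paper's CFET augmentation (assumed).
        x = x.detach()                        # work on a grad-free clean copy
        alpha = eps / steps                   # per-step size for MI-FGSM
        feats = {}
        hook = feature_layer.register_forward_hook(
            lambda m, i, o: feats.__setitem__("map", o))

        # Step 1 (CFET role): estimate feature weights by averaging the
        # gradient of the top-class logit w.r.t. the feature map over
        # several transformed copies of the clean image.
        weight = 0.0
        for _ in range(n_copies):
            xt = transform(x) if transform is not None else x
            logits = model(xt)
            fmap = feats["map"]
            score = logits.gather(1, logits.argmax(1, keepdim=True)).sum()
            weight = weight + torch.autograd.grad(score, fmap)[0] / n_copies
        weight = weight.detach()

        # Step 2: MI-FGSM loop driven by the variance of the weighted
        # (salient) features; the paper's exact objective may differ.
        g_mom = torch.zeros_like(x)
        x_adv = x.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            model(x_adv)                          # hook grabs the feature map
            loss = (weight * feats["map"]).var()  # contaminate salient features
            grad = torch.autograd.grad(loss, x_adv)[0]
            g_mom = mu * g_mom + grad / grad.abs().mean()  # momentum term
            x_adv = x_adv.detach() + alpha * g_mom.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        hook.remove()
        return x_adv.detach()

The ensemble variant mentioned in the abstract would, under the same assumptions, average the loss (or the captured feature maps) over several source models before each gradient step.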
