Article

Compound adversarial examples in deep neural networks

Journal

INFORMATION SCIENCES
Volume 613, Pages 50-68

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.08.031

Keywords

Compound adversarial example; Adversarial patch; Adversarial perturbation; Deep neural network

Funding

  1. National Key Research and Development Program of China [2021YFB3101201]
  2. Hunan Provincial Natural Science Foundation [2021JJ30685]
  3. Natural Science Foundation of China [62172349]
  4. Hunan Province Department of Education [21B0120]
  5. Hunan Science and Technology Planning Project [2019RS3019]


This paper introduces a method for generating compound adversarial examples that combines the perturbation and patch attack modes. The experiments demonstrate that the compound attack improves the generation efficiency of adversarial examples and achieves a higher attack success rate with fewer iteration steps. The compound adversarial examples also successfully attack defensive mechanisms that could previously defend against perturbation or patch attacks.
Although deep learning has made great progress in many fields, deep neural networks are still vulnerable to adversarial examples. Many methods for generating adversarial examples have been proposed, which contain either an adversarial perturbation or an adversarial patch. In this paper, we explore a method that creates compound adversarial examples containing both a perturbation and a patch. We show that fusing two weak attack modes can produce more powerful adversarial examples, where the patch covers only 1% of the pixels at a random location in the image, and the perturbation changes each original pixel value by only 2/255 (scaled to 0-1). For both targeted and untargeted attacks, the compound attack improves the generation efficiency of adversarial examples and attains a higher attack success rate with fewer iteration steps. The compound adversarial examples successfully attack models with defensive mechanisms that could previously defend against perturbation or patch attacks. Furthermore, the compound adversarial examples show good transferability to both normally trained and adversarially trained classifiers. Experimental results on a series of widely used classifiers and defense models show that the proposed compound adversarial examples have strong robustness, high effectiveness, and good transferability. (c) 2022 Published by Elsevier Inc.

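Illustration

The abstract describes fusing a small L-infinity-bounded perturbation (2/255 on a 0-1 pixel scale) with a square patch covering roughly 1% of the image pixels at a random location. Below is a minimal PyTorch sketch of one way such a compound attack could be set up as a joint iterative optimization; the function name compound_attack, the loss, the step sizes, and the PGD-style update rule are illustrative assumptions, not the authors' exact algorithm.

# Minimal sketch (assumption, not the paper's exact algorithm) of a compound
# adversarial attack: a global L-inf perturbation bounded by 2/255 combined
# with a square patch covering about 1% of the pixels at a random location,
# both updated jointly with signed gradient steps.
import torch
import torch.nn.functional as F

def compound_attack(model, x, y, eps=2/255, patch_frac=0.01,
                    steps=40, alpha=0.5/255, patch_lr=0.05, targeted=False):
    # x: image batch in [0, 1], shape (B, C, H, W); y: labels
    # (true labels for an untargeted attack, target labels for a targeted one).
    b, c, h, w = x.shape
    side = max(1, int((patch_frac * h * w) ** 0.5))    # patch side length (~1% of area)
    top = torch.randint(0, h - side + 1, (1,)).item()  # random patch location
    left = torch.randint(0, w - side + 1, (1,)).item()
    mask = torch.zeros_like(x)
    mask[:, :, top:top + side, left:left + side] = 1.0

    delta = torch.zeros_like(x, requires_grad=True)            # global perturbation
    patch = torch.rand(b, c, side, side, requires_grad=True)   # patch content

    for _ in range(steps):
        # Paste the patch over the perturbed image and clip to the valid pixel range.
        padded = F.pad(patch, (left, w - left - side, top, h - top - side))
        adv = ((x + delta) * (1 - mask) + padded * mask).clamp(0, 1)
        loss = F.cross_entropy(model(adv), y)
        if targeted:
            loss = -loss                        # move toward the target class instead
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()      # PGD step for the perturbation
            delta.clamp_(-eps, eps)                 # keep it within the 2/255 budget
            patch += patch_lr * patch.grad.sign()   # larger step for the patch content
            patch.clamp_(0, 1)
        delta.grad.zero_()
        patch.grad.zero_()

    with torch.no_grad():
        padded = F.pad(patch, (left, w - left - side, top, h - top - side))
        return ((x + delta) * (1 - mask) + padded * mask).clamp(0, 1)

In use, one would pass a trained classifier and a batch of images scaled to [0, 1]; the step counts and per-branch step sizes above are placeholders to be tuned rather than values taken from the paper.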