Article

Compound adversarial examples in deep neural networks

Journal

INFORMATION SCIENCES
Volume 613, Pages 50-68

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2022.08.031

Keywords

Compound adversarial example; Adversarial patch; Adversarial perturbation; Deep neural network

Funding

  1. National Key Research and Development Program of China [2021YFB3101201]
  2. Hunan Provincial Natural Science Foundation [2021JJ30685]
  3. Natural Science Foundation of China [62172349]
  4. Hunan Province Department of Education [21B0120]
  5. Hunan Science and Technology Planning Project [2019RS3019]

Summary

This paper introduces a method for generating compound adversarial examples that combines the perturbation and patch attack modes. The experiments demonstrate that the compound attack improves the generation efficiency of adversarial examples and achieves a higher attack success rate with fewer iteration steps. The compound adversarial examples also successfully attack defensive mechanisms that could previously defend against perturbation or patch attacks.
Abstract

Although deep learning has made great progress in many fields, deep neural networks are still vulnerable to adversarial examples. Many methods for generating adversarial examples have been proposed, each producing either an adversarial perturbation or an adversarial patch. In this paper, we explore a method that creates compound adversarial examples containing both a perturbation and a patch. We show that fusing these two weak attack modes produces more powerful adversarial examples, where the patch covers only 1% of the image pixels at a random location, and the perturbation changes each original pixel value by at most 2/255 (with pixel values scaled to [0, 1]). For both targeted and untargeted attacks, the compound attack improves the generation efficiency of adversarial examples and attains a higher attack success rate with fewer iteration steps. The compound adversarial examples successfully attack models equipped with defensive mechanisms that could previously defend against perturbation or patch attacks. Furthermore, the compound adversarial examples show good transferability to both normally trained and adversarially trained classifiers. Experimental results on a series of widely used classifiers and defense models show that the proposed compound adversarial examples have strong robustness, high effectiveness, and good transferability. (c) 2022 Published by Elsevier Inc.
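To make the constraints in the abstract concrete, the sketch below shows one plausible way to jointly optimize a bounded perturbation and a small patch against a PyTorch classifier. It is a minimal illustration reconstructed from the stated constraints (L-infinity bound of 2/255, patch covering roughly 1% of pixels at a random location), not the authors' exact algorithm; the function name, step sizes, and joint update rule are assumptions for illustration.

```python
# Illustrative sketch of a compound (perturbation + patch) attack.
# Assumed constraints from the abstract: perturbation bounded by 2/255
# in L-infinity, patch covering ~1% of pixels at a random location.
# The joint signed-gradient update is a hypothetical choice, not the
# paper's confirmed optimization procedure.
import torch
import torch.nn.functional as F

def compound_attack(model, x, y, eps=2/255, patch_frac=0.01,
                    steps=40, alpha=0.5/255, patch_lr=0.05):
    """Untargeted compound attack: maximize the loss of model(x_adv) on y."""
    b, c, h, w = x.shape
    # Square patch whose area is ~patch_frac of the image, placed randomly.
    side = max(1, int((patch_frac * h * w) ** 0.5))
    top = torch.randint(0, h - side + 1, (1,)).item()
    left = torch.randint(0, w - side + 1, (1,)).item()

    mask = torch.zeros_like(x)  # 1 inside the patch region, 0 elsewhere
    mask[:, :, top:top + side, left:left + side] = 1.0

    delta = torch.zeros_like(x, requires_grad=True)  # bounded perturbation
    patch = torch.rand_like(x, requires_grad=True)   # unbounded patch pixels

    for _ in range(steps):
        # Patch pixels replace the image; the perturbation is added elsewhere.
        x_adv = (mask * patch + (1 - mask) * (x + delta)).clamp(0, 1)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()

        with torch.no_grad():
            # Signed-gradient ascent, then projection onto the eps-ball.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            # The patch is only constrained to the valid pixel range.
            patch += patch_lr * patch.grad.sign()
            patch.clamp_(0, 1)
        delta.grad = None
        patch.grad = None

    return (mask * patch + (1 - mask) * (x + delta)).clamp(0, 1).detach()
```

For a targeted attack, the same loop would instead descend on the cross-entropy toward the target label. The key design point mirrored from the abstract is that each component is individually weak (a tiny patch, a nearly invisible perturbation) but the two are optimized together on the same image.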
