Article

Sensitive region-aware black-box adversarial attacks

Journal

INFORMATION SCIENCES
Volume 637, Issue -, Pages -

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.04.008

Keywords

Deep learning; Adversarial example; Sensitive region; Imperception attack

Abstract

Recent research on adversarial attacks has highlighted the vulnerability of deep neural networks (DNNs) to perturbations. Existing studies typically spread adversarial perturbations across the entire image; such global perturbations can be visible to human eyes, reducing their effectiveness in real-world scenarios. To alleviate this issue, recent works modify only a limited number of input pixels to implement adversarial attacks, but these approaches still fall short in both imperceptibility and efficiency. This paper proposes a novel plug-in framework called Sensitive Region-Aware Attack (SRA), which generates soft-label black-box adversarial examples using a sensitivity map and evolution strategies. First, a transferable black-box sensitivity-map generation approach is proposed for identifying the sensitive regions of input images. To perform SRA with a limited number of perturbed pixels, a dynamic l0 and l∞ adjustment strategy is introduced. Furthermore, an adaptive evolution strategy is employed to optimize the selection of the generated sensitive regions, enabling effective and imperceptible attacks. Experimental results demonstrate that SRA achieves an imperceptible soft-label black-box attack with a 96.43% success rate using less than 20% of the image pixels on ImageNet and a 100% success rate using 30% of the image pixels on CIFAR-10.
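
The abstract describes confining perturbations to sensitive regions and optimizing them with evolution strategies under l0 and l∞ budgets. The sketch below is only a minimal illustration of that general idea, not the authors' method: it runs a simple (1+1) evolution strategy that mutates pixels inside a given sensitive-region mask while querying a soft-label black box. The `predict_probs` function, the mask, and all hyperparameters are hypothetical stand-ins; the paper's sensitivity-map generation and adaptive ES are not reproduced here.

```python
# Minimal sketch (not the authors' code): soft-label black-box attack that
# perturbs only a fixed "sensitive region" mask under l_inf and l_0 budgets,
# optimized with a simple (1+1) evolution strategy.
import numpy as np

def es_region_attack(x, true_label, predict_probs, mask,
                     eps=8.0 / 255, sigma=0.05, iters=500, seed=0):
    """x: image in [0, 1], shape (H, W, C); mask: {0, 1} sensitive-region mask.
    predict_probs: hypothetical callable returning soft-label class probabilities."""
    rng = np.random.default_rng(seed)
    mask = mask.astype(x.dtype)
    delta = np.zeros_like(x)                       # current perturbation (parent)

    def loss(d):
        adv = np.clip(x + d, 0.0, 1.0)
        return predict_probs(adv)[true_label]      # minimize true-class probability

    best = loss(delta)
    for _ in range(iters):
        # Mutate only inside the sensitive region, then enforce the l_inf budget;
        # multiplying by the mask keeps the l_0 support limited to masked pixels.
        cand = delta + sigma * rng.standard_normal(x.shape) * mask
        cand = np.clip(cand, -eps, eps) * mask
        cand_loss = loss(cand)
        if cand_loss < best:                       # greedy (1+1)-ES selection
            delta, best = cand, cand_loss
        adv = np.clip(x + delta, 0.0, 1.0)
        if predict_probs(adv).argmax() != true_label:
            break                                  # misclassification: attack succeeded
    return np.clip(x + delta, 0.0, 1.0)
```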
