Article

Sensitive region-aware black-box adversarial attacks

Journal

INFORMATION SCIENCES
Volume 637, 2023

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.04.008

Keywords

Deep learning; Adversarial example; Sensitive region; Imperception attack


Abstract

Recent research on adversarial attacks has highlighted the vulnerability of deep neural networks (DNNs) to perturbations. While existing studies generate adversarial perturbations spread across the entire image, these global perturbations may be visible to human eyes, reducing their effectiveness in real-world scenarios. To alleviate this issue, recent works propose to modify a limited number of input pixels to implement adversarial attacks. However, these approaches still have limitations in terms of both imperceptibility and efficiency. This paper proposes a novel plug-in framework called Sensitive Region-Aware Attack (SRA) to generate soft-label black-box adversarial examples using a sensitivity map and evolution strategies. First, a transferable black-box sensitivity map generation approach is proposed for identifying the sensitive regions of input images. To perform SRA with a limited number of perturbed pixels, a dynamic l0 and l∞ adjustment strategy is introduced. Furthermore, an adaptive evolution strategy is employed to optimize the selection of generated sensitive regions, allowing for the execution of effective and imperceptible attacks. Experimental results demonstrate that our SRA achieves an imperceptible soft-label black-box attack with a 96.43% success rate using less than 20% of the image pixels on ImageNet and a 100% success rate using 30% of the image pixels on CIFAR-10.
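To illustrate the setting the abstract describes, the following is a minimal NumPy sketch of a soft-label black-box attack that perturbs only the top-k most sensitive pixels under fixed l0 and l∞ budgets, using an antithetic evolution-strategy (NES-style) gradient estimate. This is not the authors' SRA implementation: the function name, parameters, and fixed budgets are illustrative assumptions, and SRA itself additionally adjusts the budgets dynamically and uses an adaptive evolution strategy over candidate sensitive regions.

```python
import numpy as np

def sensitive_region_es_attack(image, loss_fn, sensitivity_map,
                               l0_budget=0.2, linf_budget=8 / 255,
                               pop_size=20, sigma=0.05, lr=0.1, steps=50):
    """Maximize loss_fn(adv) by perturbing only the most sensitive pixels.

    Illustrative sketch (not the paper's implementation): both budgets are
    fixed here for clarity, whereas SRA adjusts them dynamically.
    image: HxWxC float array in [0, 1]; sensitivity_map: HxW float array;
    loss_fn: black-box scalar feedback (soft labels only, no gradients).
    """
    h, w = sensitivity_map.shape
    k = max(1, int(l0_budget * h * w))           # l0 budget: pixels we may touch
    mask = np.zeros(h * w)
    mask[np.argsort(sensitivity_map.ravel())[-k:]] = 1.0  # top-k sensitive pixels
    mask = mask.reshape(h, w, 1)                 # broadcast over channels

    def perturb(delta):
        bounded = np.clip(delta, -linf_budget, linf_budget) * mask
        return np.clip(image + bounded, 0.0, 1.0)

    delta = np.zeros_like(image)
    for _ in range(steps):
        grad = np.zeros_like(image)
        for _ in range(pop_size):                # antithetic NES gradient estimate
            noise = np.random.randn(*image.shape)
            f_plus = loss_fn(perturb(delta + sigma * noise))
            f_minus = loss_fn(perturb(delta - sigma * noise))
            grad += (f_plus - f_minus) * noise
        grad /= 2.0 * sigma * pop_size
        delta = np.clip(delta + lr * np.sign(grad), -linf_budget, linf_budget)
    return perturb(delta)
```

Because the perturbation is masked to the sensitive region before every query, the l0 constraint holds exactly throughout the search, while the clip keeps each touched pixel within the l∞ budget.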

