4.7 Article

Improving the invisibility of adversarial examples with perceptually adaptive perturbation

Journal

INFORMATION SCIENCES
Volume 635, Pages 126-137

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.03.139

Keywords

Adversarial examples; Just noticeable difference; Deep neural networks; Image classification; Perceptually adaptive


This paper proposes the Perceptual Sensitive Attack (PS Attack) to address the vulnerability of deep neural networks to adversarial examples. By incorporating the Just Noticeable Difference (JND) matrix and human perceptual constraints, PS Attack generates imperceptible adversarial perturbations. Furthermore, PS Attack mitigates the tradeoff between the imperceptibility and transferability of adversarial images. Experimental results demonstrate that combining PS Attack with state-of-the-art black-box approaches significantly enhances the naturalness of adversarial examples.

Deep neural networks (DNNs) are vulnerable to adversarial examples generated by adding subtle perturbations to benign inputs. While these perturbations are kept small by an L-p norm constraint, they are still easily spotted by human eyes. This paper proposes the Perceptual Sensitive Attack (PS Attack) to address this flaw with a perceptually adaptive scheme. We add the Just Noticeable Difference (JND) as prior information in adversarial attacks, so that image changes fall in areas that are insensitive to human eyes. By integrating the JND matrix into the L-p norm, PS Attack projects perturbations onto the JND space around the clean data, resulting in more imperceptible adversarial perturbations. PS Attack also mitigates the tradeoff between the imperceptibility and transferability of adversarial images by adjusting a visual coefficient. Extensive experiments demonstrate that combining PS Attack with state-of-the-art black-box approaches can significantly improve the naturalness of adversarial examples while maintaining their attack ability. Compared to state-of-the-art transferable attacks, our attacks reduce LPIPS by 8% on average when attacking typically trained and defense models.
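
The abstract's core idea, projecting perturbations onto a JND-scaled neighborhood of the clean image and trading imperceptibility against attack strength via a visual coefficient, can be illustrated with a minimal sketch. The luminance-based JND estimate, the function names, and the coefficient alpha below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def luminance_jnd(image):
    """Rough per-pixel JND estimate from luminance masking (illustrative only)."""
    lum = image.mean(axis=-1, keepdims=True)  # grayscale proxy, values in [0, 1]
    jnd = np.where(
        lum < 0.5,
        0.06 * (1.0 - np.sqrt(lum / 0.5)) + 0.02,  # darker regions tolerate larger changes
        0.09 * (lum - 0.5) + 0.02,                 # brighter regions: linearly growing tolerance
    )
    return np.broadcast_to(jnd, image.shape)

def project_to_jnd_space(adv, clean, eps, alpha=1.0):
    """Clip the perturbation so that |adv - clean| <= eps * alpha * JND(clean).

    alpha plays the role of a visual coefficient: smaller values favor
    imperceptibility, larger values favor attack strength/transferability.
    """
    budget = eps * alpha * luminance_jnd(clean)    # per-pixel perturbation budget
    delta = np.clip(adv - clean, -budget, budget)  # projection onto the JND-scaled box
    return np.clip(clean + delta, 0.0, 1.0)        # keep the result a valid image

# Usage: project a random perturbation around a dummy image.
clean = np.random.rand(32, 32, 3)
adv = clean + 0.1 * np.random.randn(32, 32, 3)
adv_projected = project_to_jnd_space(adv, clean, eps=8 / 255, alpha=1.0)
```

In this sketch, perceptually insensitive regions (as estimated by the JND map) receive a larger per-pixel budget, so perturbations concentrate where human eyes are least likely to notice them.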
