Article

Frequency-based methods for improving the imperceptibility and transferability of adversarial examples

Journal

APPLIED SOFT COMPUTING
Volume 150

Publisher

ELSEVIER
DOI: 10.1016/j.asoc.2023.111088

Keywords

Adversarial attack; Frequency information; Normal projection; Frequency spectrum diversity transformation; Frequency dropout

This paper proposes an adversarial attack method based on frequency information, which improves the imperceptibility of adversarial examples in the white-box scenario and their transferability in the black-box scenario. Experimental results validate the superiority of the proposed method, and applying it to real-world online models reveals their vulnerability.
Adversarial attacks are a popular technology for evaluating the robustness of deep learning models. However, adversarial examples crafted by current methods often have poor imperceptibility and low transferability, hindering the utility of attacks in practice. In this paper, we leverage frequency information to improve imperceptibility in the white-box scenario and adversarial transferability in the black-box scenario. Specifically, in the white-box scenario, we adopt a low-frequency constraint and normal projection to improve the imperceptibility of the adversarial example without reducing attack performance. In the black-box scenario, we propose an effective Frequency Spectrum Diversity Transformation (FSDT) to address the issue of overfitting to the substitute model. FSDT enriches the input with a diverse set of unfamiliar information, significantly improving the transferability of adversarial attacks. For defended target models in the black-box scenario, we also design a gradient refinement technique named Frequency Dropout (FD), which discards some useless components of the gradient in the frequency domain and can further mitigate the protective effect of defense mechanisms. Extensive experiments validate the superiority of the proposed methods. Furthermore, we apply the proposed method to evaluate the robustness of real-world online models and reveal their vulnerability. Finally, we analyze from a frequency perspective why imperceptibility and adversarial transferability are hard to improve concurrently. Our code is available at https://github.com/RYC-98/FSD-MIM-and-NPGA.
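The abstract names three frequency-domain operations: a low-frequency constraint on the perturbation (white-box), the Frequency Spectrum Diversity Transformation (FSDT) applied to inputs, and Frequency Dropout (FD) applied to gradients. The sketch below is not the authors' implementation (that lives in the linked repository); it is a minimal NumPy illustration, under assumptions, of what such operations can look like. The function names, the use of an FFT-based spectrum, the random Bernoulli keep-mask for FD, the random spectral scaling plus additive noise for FSDT, and all hyperparameter values are placeholders.

```python
import numpy as np


def low_frequency_constraint(perturbation, cutoff_ratio=0.25):
    """Keep only a centred low-frequency box of the perturbation's spectrum.
    (Illustrative; the paper's actual constraint may differ.)"""
    spec = np.fft.fftshift(np.fft.fft2(perturbation, axes=(-2, -1)), axes=(-2, -1))
    h, w = spec.shape[-2:]
    ch, cw = int(h * cutoff_ratio), int(w * cutoff_ratio)
    mask = np.zeros(spec.shape, dtype=bool)
    mask[..., h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = True
    return np.fft.ifft2(np.fft.ifftshift(spec * mask, axes=(-2, -1)), axes=(-2, -1)).real


def spectrum_diversity_transform(image, scale_range=(0.8, 1.2), noise_std=0.1, rng=None):
    """FSDT-style input transformation: randomly rescale the spectrum and add
    spectral noise so each gradient step sees a slightly 'unfamiliar' input."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.fft2(image, axes=(-2, -1))
    scale = rng.uniform(*scale_range, size=spec.shape)
    noise = rng.normal(0.0, noise_std, size=spec.shape)
    out = np.fft.ifft2(spec * scale + noise, axes=(-2, -1)).real
    return np.clip(out, 0.0, 1.0)  # keep a valid pixel range


def frequency_dropout(grad, keep_prob=0.9, rng=None):
    """FD-style gradient refinement: zero out a subset of the gradient's
    spectral coefficients (chosen at random here purely for illustration;
    the abstract does not specify the selection rule)."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.fft2(grad, axes=(-2, -1))
    keep = rng.random(spec.shape) < keep_prob
    return np.fft.ifft2(spec * keep, axes=(-2, -1)).real
```

In an iterative attack such as MI-FGSM, one would presumably apply spectrum_diversity_transform to the input before each forward pass, frequency_dropout to the resulting input gradient, and low_frequency_constraint to the accumulated perturbation; these placements are an assumption based on the abstract, not a statement of the paper's exact algorithm.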
