Article

Boosting adversarial attacks with future momentum and future transformation

Journal

COMPUTERS & SECURITY
Volume 127

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2023.103124

Keywords

Adversarial examples; Black-box attacks; Transferability; Future momentum; Future transformation

This study proposes a future momentum and future transformation (FMFT) method to enhance the transferability of adversarial examples under the black-box attack setting. The FMFT method incorporates future momentum (FM) and future transformation (FT), where FM updates adversarial examples with future N-th step momentum and FT utilizes input transformations to obtain a more robust gradient and reduce computation overhead. The study also introduces a new input transformation called random block scaling. Empirical evaluations on the ImageNet dataset demonstrate the superiority of the FMFT method.
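
The random block scaling transformation is described only at a high level above. As a rough illustration, the sketch below splits an image into a uniform grid and rescales each block by an independent random factor; the grid size and scale range are placeholder values, not the paper's settings.

```python
import numpy as np

def random_block_scaling(x, grid=4, scale_range=(0.8, 1.2), rng=None):
    """Illustrative sketch: split an image (C, H, W) into a grid x grid layout
    of blocks and multiply each block by its own random scale factor.
    The grid size and scale distribution are assumptions, not the paper's values."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = x.shape[-2], x.shape[-1]
    bh, bw = h // grid, w // grid
    out = x.astype(np.float64)  # astype returns a copy, so x is left untouched
    for i in range(grid):
        for j in range(grid):
            # The last row/column absorbs any remainder so the whole image is covered.
            h_end = h if i == grid - 1 else (i + 1) * bh
            w_end = w if j == grid - 1 else (j + 1) * bw
            out[..., i * bh:h_end, j * bw:w_end] *= rng.uniform(*scale_range)
    return out

# Example: transform a random 3x224x224 "image" with values in [0, 1].
x = np.random.rand(3, 224, 224)
x_rbs = random_block_scaling(x)
```
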
The transferability of adversarial examples under the black-box attack setting has attracted extensive attention from the community. Among recently proposed methods, advanced optimization algorithms are one of the most successful ways to improve transferability. However, existing advanced optimization algorithms either enhance the transferability of adversarial examples only slightly or require a large amount of computation time. We propose the future momentum and future transformation (FMFT) method to balance transferability and computation overhead. The FMFT method comprises two parts: future momentum (FM) and future transformation (FT). FM is inspired by the looking-ahead property and updates adversarial examples with the future N-th step momentum at each iteration. FT, in turn, applies input transformations during the future momentum calculation to obtain a more robust gradient and reduce computation overhead. Additionally, we propose a new input transformation called random block scaling, which divides the image into blocks and scales each block differently. Empirical evaluations on the standard ImageNet dataset demonstrate the superiority of our FMFT method.
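
The abstract outlines the FM and FT components without implementation details. The sketch below is one plausible reading, not the authors' exact algorithm: at each iteration the momentum is rolled forward over N virtual "future" steps, averaging gradients over randomly transformed copies of the input (for example, the random block scaling above), and the resulting future momentum drives one real update. All function names, hyperparameters, and update details here are assumptions.

```python
import numpy as np

def fmft_attack(x, grad_fn, transform_fn, eps=16 / 255, alpha=2 / 255,
                steps=10, lookahead=3, mu=1.0, num_copies=5):
    """Hypothetical FM/FT-style loop. grad_fn(x) should return the gradient of
    the classification loss w.r.t. x (to be ascended); transform_fn applies a
    random input transformation such as random block scaling."""
    x_adv = x.astype(np.float64)
    g = np.zeros_like(x_adv)  # accumulated momentum
    for _ in range(steps):
        x_look, g_future = x_adv.copy(), g.copy()
        # FM: simulate `lookahead` future momentum steps from the current point.
        for _ in range(lookahead):
            # FT: a more robust gradient averaged over transformed copies.
            grad = np.mean([grad_fn(transform_fn(x_look))
                            for _ in range(num_copies)], axis=0)
            g_future = mu * g_future + grad / (np.mean(np.abs(grad)) + 1e-12)
            x_look = np.clip(x_look + alpha * np.sign(g_future), x - eps, x + eps)
        # Use the future momentum for the real update, then project back into
        # the epsilon-ball and the valid pixel range.
        g = g_future
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

In this sketch, setting lookahead=1, num_copies=1, and an identity transform_fn recovers a standard momentum (MI-FGSM-style) update, which can serve as a sanity check; larger lookahead and num_copies values trade extra gradient computations for the looking-ahead and transformation effects described above.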
