Article

On Single-Model Transferable Targeted Attacks: A Closer Look at Decision-Level Optimization

Journal

IEEE Transactions on Image Processing
Volume 32, Pages 2972-2984

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2023.3276331

Keywords

Optimization; Adversarial machine learning; Closed box; Sun; Measurement; Linear programming; Tuning; Adversarial attacks; decision-level attack; adaptive optimization scheme; balanced logit loss


Single-model transferable targeted attacks via decision-level optimization objectives are known to be challenging and have long attracted attention. Recent works on this topic have been devoted to designing new optimization objectives. In contrast, we take a closer look at the intrinsic problems of three commonly adopted optimization objectives and propose two simple yet effective methods to mitigate them. Specifically, inspired by the basic idea of adversarial learning, we propose, for the first time, a unified Adversarial Optimization Scheme (AOS) that relieves both the gradient-vanishing problem in the cross-entropy loss and the gradient-amplification problem in the Po+Trip loss. AOS, a simple transformation applied to the output logits before they are passed to the objective function, yields considerable improvements in targeted transferability. Besides, we further clarify the preliminary conjecture behind the Vanilla Logit Loss (VLL) and point out its unbalanced-optimization problem: without explicit suppression, the source logit may increase during optimization, leading to low transferability. We therefore propose the Balanced Logit Loss (BLL), which takes both the source logit and the target logit into account. Comprehensive experiments confirm the compatibility and effectiveness of the proposed methods across most attack frameworks, and their effectiveness also spans two tough cases (i.e., the low-ranked transfer scenario and transfer to defense methods) and three datasets (i.e., ImageNet, CIFAR-10, and CIFAR-100). Our source code is available at https://github.com/xuxiangsun/DLLTTAA.
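The contrast the abstract draws between the Vanilla Logit Loss and the Balanced Logit Loss can be illustrated with a minimal sketch. Note this is an illustration of the stated idea only, not the paper's exact formulation: the function names, the plain logit-difference form of the balanced loss, and the per-sample interface are assumptions for readability.

```python
import numpy as np

def vanilla_logit_loss(logits: np.ndarray, target: int) -> float:
    """VLL-style objective (sketch): maximize only the target logit.

    Minimizing -z_target raises the target logit, but nothing in the
    objective explicitly pushes down the source-class logit, which the
    abstract identifies as the unbalanced-optimization problem.
    """
    return float(-logits[target])

def balanced_logit_loss(logits: np.ndarray, target: int, source: int) -> float:
    """BLL-style objective (hypothetical sketch): account for both logits.

    Rewarding the margin between target and source logits suppresses the
    source class explicitly; the paper's actual loss may weight or
    normalize these terms differently.
    """
    return float(-(logits[target] - logits[source]))

# Toy example: the source class (index 0) still dominates the logits.
logits = np.array([3.0, 1.0, 0.5])
print(vanilla_logit_loss(logits, target=2))             # only sees z_target
print(balanced_logit_loss(logits, target=2, source=0))  # penalizes z_source too
```

Under the vanilla objective, the toy example above is scored identically whether the source logit is 3.0 or 30.0; the balanced variant's value grows with the source logit, so its gradient actively drives the source logit down.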


