Journal
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 32, Pages 2972-2984
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2023.3276331
Keywords
Optimization; Adversarial machine learning; Closed box; Sun; Measurement; Linear programming; Tuning; Adversarial attacks; decision-level attack; adaptive optimization scheme; balanced logit loss
Single-model transferable targeted attacks via decision-level optimization objectives are a known challenge that has drawn sustained attention. Recent work has focused on designing new optimization objectives; in this paper, we instead investigate the intrinsic problems in three commonly adopted objectives and propose two simple yet effective methods to mitigate them. The proposed methods, the unified Adversarial Optimization Scheme (AOS) and the Balanced Logit Loss (BLL), yield considerable improvements in targeted transferability across various attack frameworks and datasets.
Single-model transferable targeted attacks via decision-level optimization objectives have long been regarded as a hard problem and have attracted much attention. Recent work on this topic has been devoted to designing new optimization objectives. In contrast, we take a closer look at the intrinsic problems in three commonly adopted optimization objectives and, in this paper, propose two simple yet effective methods to mitigate them. Specifically, inspired by the basic idea of adversarial learning, we propose, for the first time, a unified Adversarial Optimization Scheme (AOS) that alleviates both gradient vanishing in the cross-entropy loss and gradient amplification in the Po+Trip loss; AOS, a simple transformation applied to the output logits before they are passed to the objective function, yields considerable improvements in targeted transferability. In addition, we further clarify the preliminary conjecture underlying the Vanilla Logit Loss (VLL) and point out its problem of unbalanced optimization: without explicit suppression, the source-class logit may increase during optimization, leading to low transferability. We therefore propose the Balanced Logit Loss (BLL), which takes both the source logit and the target logit into account. Comprehensive validation demonstrates the compatibility and effectiveness of the proposed methods across most attack frameworks; their effectiveness also extends to two difficult cases (i.e., the low-ranked transfer scenario and transfer to defense methods) and three datasets (i.e., ImageNet, CIFAR-10, and CIFAR-100). Our source code is available at https://github.com/xuxiangsun/DLLTTAA.
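The contrast the abstract draws between VLL and BLL can be illustrated with a minimal sketch. This is an assumption-laden reading of the abstract only (the paper's exact formulations, scalings, and signs may differ): VLL drives up the target-class logit alone, while BLL additionally suppresses the source-class logit so it cannot rise unchecked. All function names below are illustrative, not from the released code.

```python
# Hedged sketch of the two decision-level targeted losses described in the
# abstract. Losses are written to be *minimized* by the attack optimizer.

def vanilla_logit_loss(logits, target):
    """Vanilla Logit Loss (VLL): maximize the target-class logit only.

    Per the abstract, the source-class logit is left unconstrained here,
    so it may also grow during optimization (unbalanced optimization).
    """
    return -logits[target]

def balanced_logit_loss(logits, target, source):
    """Balanced Logit Loss (BLL), as described in the abstract: account
    for both logits by rewarding the target logit while explicitly
    suppressing the source logit (one plausible instantiation)."""
    return -(logits[target] - logits[source])
```

For example, with logits `[3.0, 1.0, 0.5]`, source class 0, and target class 2, VLL gives -0.5 while BLL gives 2.5: the still-dominant source logit keeps the BLL value high, signaling that more suppression of the source class is needed.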