4.7 Article

On Single-Model Transferable Targeted Attacks: A Closer Look at Decision-Level Optimization

Related References

Note: only a subset of the references is listed; see the original article for the full reference information.
Article Computer Science, Artificial Intelligence

Query-Efficient Black-Box Adversarial Attack With Customized Iteration and Sampling

Yucheng Shi et al.

Summary: This study proposes a framework for query-efficient black-box adversarial attacks that combines transfer-based and decision-based attacks. The framework analyzes the relationship between the current noise and the sampling variance, the monotonicity of noise compression, and the influence of the transition function on the decision-based attack. Building on this analysis, the Customized Iteration and Sampling Attack (CISA) algorithm is proposed, whose query efficiency is demonstrated through extensive experiments.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)

Article Computer Science, Information Systems

Class attention network for image recognition

Gong Cheng et al.

Summary: This paper proposes an attention-based image recognition method that uses class-specific dictionary learning to improve discrimination abilities. Experimental results demonstrate the effectiveness of the method on multiple visual recognition tasks.

SCIENCE CHINA-INFORMATION SCIENCES (2023)

Article Computer Science, Artificial Intelligence

Base and Meta: A New Perspective on Few-Shot Segmentation

Chunbo Lang et al.

Summary: This paper proposes a scheme called BAM to address the low generalization of most previous few-shot segmentation methods on hard query samples. BAM combines an auxiliary (base) branch with a meta learner: the auxiliary branch identifies regions that do not need segmenting, and accurate segmentation predictions are derived by adaptively integrating the outputs of the two learners.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)

Article Geochemistry & Geophysics

Threatening Patch Attacks on Object Detection in Optical Remote Sensing Images

Xuxiang Sun et al.

Summary: This study focuses on advanced patch attacks (PAs) on object detection in optical remote sensing images (O-RSIs) and proposes a more threatening patch attack (TPA) that does not sacrifice visual quality. The study addresses the inconsistency between local and global landscapes in existing patch selection schemes by leveraging the first-order difference (FOD) of the objective function before and after masking. It also introduces an IoU-based objective function called bounding box (Bbox) drifting loss (BDL) to address the problem of gradient inundation when applying existing coordinate-based loss (CBL) to PAs.

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2023)

Article Geochemistry & Geophysics

Perturbation-Seeking Generative Adversarial Networks: A Defense Framework for Remote Sensing Image Scene Classification

Gong Cheng et al.

Summary: This article introduces PSGAN, an effective defense framework for RSI scene classification that trains the classifier with generated examples to counter both known and unknown attacks. Experimental results demonstrate the effectiveness of PSGAN.

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2022)

Article Computer Science, Artificial Intelligence

Query-efficient decision-based attack via sampling distribution reshaping

Xuxiang Sun et al.

Summary: This paper introduces SDR, a normal-vector estimation framework for high-dimensional decision-based attacks that reshapes the sampling distribution, and incorporates it into a general geometric attack framework. Experimental evaluations show that SDR achieves competitive ℓp norms, underscoring its value in enhancing attack performance.

PATTERN RECOGNITION (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Exploring Effective Data for Surrogate Training Towards Black-box Attack

Xuxiang Sun et al.

Summary: This paper proposes a method for training a surrogate model for black-box adversarial attack, which enlarges inter-class similarity and enhances intra-class diversity to improve the training effectiveness of the surrogate model, and leverages proxy data for training. The experimental results demonstrate the effectiveness of this method.

2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022)

Article Geochemistry & Geophysics

Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense

Bowen Peng et al.

Summary: Research has shown that deep neural networks used for SAR automatic target recognition are highly vulnerable to adversarial perturbations. This work proposes a novel scattering-model-guided adversarial attack algorithm that generates more robust adversarial scatterers, which in turn are used to construct a defensive model and enhance DNN robustness.

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Feature Importance-aware Transferable Adversarial Attacks

Zhibo Wang et al.

Summary: Transferability of adversarial examples is crucial for attacking unknown models, and existing transferable attacks tend to degrade prediction accuracy in source models without considering intrinsic object features. The Feature Importance-aware Attack (FIA) disrupts important object-aware features to achieve stronger transferability by introducing aggregate gradient-based feature importance. FIA demonstrates superior performance compared to state-of-the-art transferable attacks, improving success rates against normally trained and defense models.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
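The aggregate-gradient idea behind FIA can be illustrated with a minimal numpy sketch: average the feature-level gradient over copies of the input with randomly dropped pixels, then perturb the input to suppress the features that the averaged gradient marks as important. The toy two-layer model, function names, and hyper-parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer "network": feature f = W1 @ x, logit y = w2 . relu(f).
D, H = 16, 32                       # input dim, feature dim (toy sizes)
W1 = rng.standard_normal((H, D))
w2 = rng.standard_normal(H)

def logit_grad_wrt_feature(x):
    """dy/df for y = w2 . relu(f): w2 gated by the ReLU activation pattern."""
    f = W1 @ x
    return w2 * (f > 0)

def aggregate_gradient(x, drop_prob=0.3, n_ensemble=30):
    """FIA-style aggregate gradient (sketch): average the feature-level
    gradient over copies of x whose pixels are randomly dropped."""
    delta = np.zeros(H)
    for _ in range(n_ensemble):
        mask = (rng.random(D) >= drop_prob).astype(float)  # keep w.p. 1-p
        delta += logit_grad_wrt_feature(x * mask)
    return delta / (np.linalg.norm(delta) + 1e-12)         # unit importance map

def fia_attack(x, eps=0.5, alpha=0.05, steps=20):
    """Minimize L = delta . f(x_adv), i.e. suppress the features marked
    important by the aggregate gradient, under an l_inf budget eps."""
    delta = aggregate_gradient(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = W1.T @ delta                  # d/dx of delta . (W1 @ x)
        x_adv = x_adv - alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The random masking is what distinguishes the aggregate gradient from a single backward pass: it emphasizes features that stay important across corrupted views of the object rather than model-specific artifacts.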

Proceedings Paper Computer Science, Artificial Intelligence

On Generating Transferable Targeted Perturbations

Muzammal Naseer et al.

Summary: This paper introduces a new generative approach for highly transferable targeted perturbations, which outperforms existing methods by matching the global distribution and local neighborhood structure between source and target images. The proposed method achieves high targeted transferability rates independent of domain labels and performs well against state-of-the-art methods in various attack settings.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Admix: Enhancing the Transferability of Adversarial Attacks

Xiaosen Wang et al.

Summary: In this study, a new input transformation based attack method called Admix is proposed, which generates more transferable adversaries by considering a set of images randomly sampled from other categories in addition to the input image. Empirical evaluations show that Admix outperforms existing input transformation methods in achieving better transferability on the standard ImageNet dataset, and can further improve attack performance when combined with existing methods.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
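The Admix transformation described above can be sketched in a few lines of numpy: the loss gradient is averaged over copies of the input admixed with images from other categories and over several down-scaled copies, then fed into the usual momentum iterative loop. The toy surrogate gradient, function names, and hyper-parameter values here are illustrative assumptions standing in for a real network.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 16                                # toy input dimension
w_true = rng.standard_normal(D)

def loss_grad(x):
    """Toy surrogate-model loss gradient: d/dx log cosh(w . x)."""
    return w_true * np.tanh(w_true @ x)

def admix_gradient(x, pool, eta=0.2, m1=5, m2=3):
    """Admix gradient estimate (sketch): average loss gradients over m2
    images admixed into x and m1 scaled copies (scales 1/2^i)."""
    g = np.zeros_like(x)
    for _ in range(m2):
        x_other = pool[rng.integers(len(pool))]   # image from another class
        x_admix = x + eta * x_other               # admix, keep x dominant
        for i in range(m1):
            g += loss_grad(x_admix / (2 ** i))
    return g / (m1 * m2)

def admix_mifgsm(x, pool, eps=0.3, alpha=0.03, steps=10, mu=1.0):
    """Admix plugged into a standard momentum iterative FGSM loop."""
    x_adv, momentum = x.copy(), np.zeros_like(x)
    for _ in range(steps):
        g = admix_gradient(x_adv, pool)
        momentum = mu * momentum + g / (np.abs(g).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(momentum)  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay in the l_inf ball
    return x_adv
```

Because the admixed image keeps the original input dominant (small eta), the label of x is preserved while the gradient sees greater input diversity, which is what the paper credits for the improved transferability.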

Proceedings Paper Computer Science, Artificial Intelligence

Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink

Ranjie Duan et al.

Summary: This study introduces Adversarial Laser Beam (AdvLB), a novel attack that manipulates the physical parameters of a laser beam to deceive DNNs, and demonstrates its effectiveness in both digital and physical settings. The proposed laser beam attack can induce striking prediction errors in state-of-the-art DNNs.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Simulating Unknown Target Models for Query-Efficient Black-box Attacks

Chen Ma et al.

Summary: This study introduces a method to train a generalized substitute model called "Simulator" that can mimic the functionality of any unknown target model. By building training data with multiple tasks and using knowledge distillation loss for meta-learning, it reduces query complexity significantly.

2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 (2021)

Proceedings Paper Computer Science, Artificial Intelligence

DAPAS: Denoising Autoencoder to Prevent Adversarial Attack in Semantic Segmentation

Seungju Cho et al.

2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) (2020)

Proceedings Paper Computer Science, Artificial Intelligence

Towards Transferable Targeted Attack

Maosen Li et al.

2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2020)

Article Computer Science, Artificial Intelligence

Learning Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection

Gong Cheng et al.

IEEE TRANSACTIONS ON IMAGE PROCESSING (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Feature Space Perturbations Yield More Transferable Adversarial Examples

Nathan Inkawhich et al.

2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) (2019)

Proceedings Paper Computer Science, Artificial Intelligence

Densely Connected Convolutional Networks

Gao Huang et al.

30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017) (2017)

Proceedings Paper Computer Science, Information Systems

Towards Evaluating the Robustness of Neural Networks

Nicholas Carlini et al.

2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP) (2017)

Article Computer Science, Artificial Intelligence

ImageNet Large Scale Visual Recognition Challenge

Olga Russakovsky et al.

INTERNATIONAL JOURNAL OF COMPUTER VISION (2015)