Article

Frequency-based methods for improving the imperceptibility and transferability of adversarial examples

Related references

Note: Only a subset of the references is listed.
Article Computer Science, Artificial Intelligence

Frequency domain regularization for iterative adversarial attacks

Tengjiao Li et al.

Summary: Adversarial examples have gained increasing attention, and their transferability is crucial for black-box attacks. To enhance transferability and prevent overfitting to the source model, this study proposes a regularization constraint on the inputs of iterative adversarial attacks. Exploiting the consistency between the outputs of convolutional neural networks and the low-frequency components of their inputs, a frequency domain regularization term is constructed. Experimental results on ImageNet demonstrate the superiority of the proposed method, which achieves significantly higher attack success rates than competing attacks, against both undefended and defended models. A minimal sketch of such a regularized attack follows this entry.

PATTERN RECOGNITION (2023)
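The low-frequency consistency idea can be folded into a standard iterative attack as a penalty term. The following is only a sketch under stated assumptions, not the authors' implementation: the FFT-based low-pass mask, the keep ratio, and the weight `lam` are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def low_pass(x, keep=0.25):
    """Keep only the lowest `keep` fraction of spatial frequencies of x (NCHW)."""
    Xf = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2:]
    mask = torch.zeros_like(Xf.real)
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep / 2), int(w * keep / 2)
    mask[..., ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    return torch.fft.ifft2(torch.fft.ifftshift(Xf * mask, dim=(-2, -1))).real

def fdr_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, lam=1.0):
    """Iterative FGSM-style attack with a frequency-domain penalty (hypothetical form)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        # Regularization: keep the adversarial example's low frequencies
        # close to the clean image's low frequencies.
        reg = F.mse_loss(low_pass(x_adv), low_pass(x))
        grad = torch.autograd.grad(loss - lam * reg, x_adv)[0]
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv
```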

Article Computer Science, Artificial Intelligence

Boosting transferability of targeted adversarial examples with non-robust feature alignment

Hegui Zhu et al.

Summary: Deep networks can be deceived by adding crafted noise to an image. We propose an efficient targeted adversarial attack method based on Non-robust Feature Alignment, called NFAA, which generates targeted adversarial examples by simultaneously filtering out original-class features and adding target-class features. Experimental results show that NFAA effectively transfers targeted adversarial examples to models with various architectures. A hedged sketch of the feature-alignment idea follows this entry.

EXPERT SYSTEMS WITH APPLICATIONS (2023)
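A minimal sketch of the alignment objective the summary describes: decrease similarity to the source image's features while increasing similarity to a target-class image's features at an intermediate layer. The layer choice, the cosine similarity, and the equal weighting are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def nfaa_style_loss(feat_adv, feat_src, feat_tgt):
    """Feature alignment: filter out source-class features, add target-class ones.
    All inputs are intermediate feature maps of shape (N, C, H, W)."""
    to_src = F.cosine_similarity(feat_adv.flatten(1), feat_src.flatten(1)).mean()
    to_tgt = F.cosine_similarity(feat_adv.flatten(1), feat_tgt.flatten(1)).mean()
    return to_src - to_tgt  # minimize: move away from source, toward target
```

In practice the features would be captured with a forward hook on the chosen layer, and the adversarial example updated by descending this loss under an L-infinity budget.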

Article Computer Science, Information Systems

Crafting transferable adversarial examples via contaminating the salient feature variance

Yuchen Ren et al.

Summary: Adversarial attacks matter for the development of deep learning techniques because they evaluate the robustness of deep neural networks and probe their decision mechanisms. Feature-level attacks have been proposed that contaminate internal feature maps to produce transferable adversarial examples. This paper proposes the Salient Feature Variance Attack (SFVA), which contaminates the variance of salient features, addresses problems neglected by prior feature-level attacks, and achieves state-of-the-art performance. Experiments on the ImageNet dataset confirm the superiority of SFVA, highlighting the serious security threats faced by models deployed in the real world.

INFORMATION SCIENCES (2023)

Article Computer Science, Artificial Intelligence

Bayesian evolutionary optimization for crafting high-quality adversarial examples with limited query budget

Chao Li et al.

Summary: Due to the importance of security, adversarial attacks in deep learning have gained popularity, especially black-box attacks, which mimic real-world scenarios. Query-based methods are commonly used for black-box attacks but typically require an excessive number of queries. To overcome this, a Bayesian evolutionary optimization (BEO) based black-box attack built on differential evolution is proposed, employing a Gaussian process model and adaptive acquisition functions. Experimental results show that the method can generate high-quality adversarial examples using only 200 queries. A sketch of surrogate-assisted differential evolution follows this entry.

APPLIED SOFT COMPUTING (2023)
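The following sketches surrogate-assisted differential evolution in the spirit of the summary: a Gaussian process screens DE trial vectors so the expensive black-box model is queried only when an acquisition rule finds a trial promising. The DE/rand/1 operator, the lower-confidence-bound rule, and all hyperparameters here are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def surrogate_de(query_loss, dim, pop=20, gens=200, f=0.5, budget=200, seed=0):
    """Minimize a black-box loss over [-1, 1]^dim with a capped query budget."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (pop, dim))
    y = np.array([query_loss(x) for x in X])   # initial real queries
    used = pop
    gp = GaussianProcessRegressor().fit(X, y)  # cheap surrogate of the loss
    for _ in range(gens):
        if used >= budget:
            break
        # DE/rand/1 mutation (crossover omitted for brevity)
        a, b, c = X[rng.choice(pop, 3, replace=False)]
        trial = np.clip(a + f * (b - c), -1, 1)
        mu, sigma = gp.predict(trial[None], return_std=True)
        # Lower-confidence-bound acquisition: spend a real query only if the
        # surrogate suggests the trial could beat the current best.
        if mu[0] - sigma[0] < y.min():
            fit = query_loss(trial)
            used += 1
            worst = y.argmax()
            if fit < y[worst]:
                X[worst], y[worst] = trial, fit
            gp.fit(X, y)
    return X[y.argmin()], y.min(), used
```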

Article Computer Science, Artificial Intelligence

One evolutionary algorithm deceives humans and ten convolutional neural networks trained on ImageNet at image recognition

Ali Osman Topal et al.

Summary: This paper proposes an evolutionary algorithm (EA)-based adversarial attack against CNNs that, in a black-box setting, generates adversarial images classified into the target category with high confidence (at least 75%) while remaining indistinguishable from the original to the human eye. A minimal evolutionary attack loop is sketched after this entry.

APPLIED SOFT COMPUTING (2023)
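A minimal sketch of a black-box evolutionary attack with the stopping rule the summary describes (at least 75% target-class confidence). The (1+lambda) selection, Gaussian mutation, and step size are illustrative assumptions, and the visual-distortion constraint the paper also enforces is omitted here.

```python
import numpy as np

def evolve_attack(predict_proba, x, target, sigma=0.05, lam=10, max_gens=500, seed=0):
    """predict_proba(img) -> class-probability vector; queries only, no gradients."""
    rng = np.random.default_rng(seed)
    best, best_fit = x.copy(), predict_proba(x)[target]
    for _ in range(max_gens):
        if best_fit >= 0.75:                       # success threshold from the summary
            break
        children = best + rng.normal(0, sigma, (lam,) + x.shape)
        children = children.clip(0, 1)             # keep valid pixel range
        fits = np.array([predict_proba(c)[target] for c in children])
        if fits.max() > best_fit:                  # (1 + lambda) selection
            best, best_fit = children[fits.argmax()], fits.max()
    return best, best_fit
```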

Article Computer Science, Artificial Intelligence

MISPSO-Attack: An efficient adversarial watermarking attack based on multiple initial solution particle swarm optimization

Xianyu Zuo et al.

Summary: This paper proposes a new method for mounting adversarial attacks through watermarking, aimed at protecting individuals' privacy. Experimental results demonstrate the method's high attack success rate and deception performance on computer vision datasets and face recognition models. Furthermore, its study of the natural causes of adversarial examples provides valuable insights for developing more robust deep models.

APPLIED SOFT COMPUTING (2023)

Article Computer Science, Artificial Intelligence

Short-term power load forecasting system based on rough set, information granule and multi-objective optimization

Jianzhou Wang et al.

Summary: Accurately forecasting power load is crucial for utilities to manage resources, reduce costs, and improve customer service. This study proposes a novel combined forecasting system that integrates rough sets, information granulation, deep learning, and multi-objective optimization to improve load prediction accuracy. Simulation experiments demonstrate the effectiveness of the system in predicting load trend changes and fluctuation ranges.

APPLIED SOFT COMPUTING (2023)

Proceedings Paper Computer Science, Artificial Intelligence

Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity

Cheng Luo et al.

Summary: This article introduces a new adversarial attack algorithm that deceives classifiers by attacking semantic similarity in feature representations; it generates misleading and transferable adversarial examples across different datasets and architectures. Additionally, the proposed frequency-driven algorithm generates perturbations that are more imperceptible than those of existing methods. A hedged sketch of the two ingredients follows this entry.

2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022)
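The summary names two ingredients: a similarity attack in feature space and a frequency-based imperceptibility constraint. The sketch below combines the two; the cosine objective, the crude pooling-based low-pass, and the weight `lam` are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def low_freq(delta, k=8):
    """Crude low-pass of the perturbation: blur by k x k average pooling (NCHW)."""
    return F.interpolate(F.avg_pool2d(delta, k), scale_factor=k, mode="nearest")

def semantic_similarity_loss(feat_adv, feat_clean, delta, lam=0.1):
    # Push the adversarial representation away from the clean image's features ...
    sim = F.cosine_similarity(feat_adv.flatten(1), feat_clean.flatten(1)).mean()
    # ... while penalizing perturbation energy in the perceptually salient
    # low-frequency band, so the noise stays in less visible high frequencies.
    return sim + lam * low_freq(delta).abs().mean()
```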

Proceedings Paper Computer Science, Artificial Intelligence

Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability

Yifeng Xiong et al.

Summary: The black-box adversarial attack is a practical tool in deep learning security, used to attack target models without access to their network architecture or internal weights. This paper proposes the Stochastic Variance Reduced Ensemble (SVRE) attack, which reduces the gradient variance of ensemble models to improve attack effectiveness. Empirical results on the ImageNet dataset show the promising performance of the proposed method. A minimal sketch of the variance-reduced gradient step follows this entry.

2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022)
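A minimal sketch of variance-reduced ensemble gradients in the spirit of SVRE: inner updates use one randomly chosen model's gradient, corrected by the full ensemble gradient computed at the outer point, as in SVRG. The step size, loop counts, and omission of momentum are illustrative simplifications.

```python
import random
import torch
import torch.nn.functional as F

def svre_step(models, x_adv, y, alpha=2/255, inner_steps=4):
    """One variance-reduced inner loop starting from the outer point x_adv."""
    def grad(m, x):
        x = x.clone().detach().requires_grad_(True)
        return torch.autograd.grad(F.cross_entropy(m(x), y), x)[0]

    # Full ensemble gradient at the outer point (the expensive anchor).
    g_ens = torch.stack([grad(m, x_adv) for m in models]).mean(0)
    x_in = x_adv.clone()
    for _ in range(inner_steps):
        m = random.choice(models)
        # Variance-reduced estimate: g_m(x_in) - g_m(x_adv) + g_ens
        g = grad(m, x_in) - grad(m, x_adv) + g_ens
        x_in = (x_in + alpha * g.sign()).clamp(0, 1)
    return x_in
```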

Proceedings Paper Computer Science, Artificial Intelligence

Improving Adversarial Transferability via Neuron Attribution-based Attacks

Jianping Zhang et al.

Summary: This paper proposes the Neuron Attribution-based Attack (NAA), which conducts feature-level attacks with more accurate estimates of neuron importance. Extensive experiments confirm the superiority of the approach over state-of-the-art benchmarks.

2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022)

Article Computer Science, Artificial Intelligence

A low-query black-box adversarial attack based on transferability

Kangyi Ding et al.

Summary: The study introduces a low-query black-box adversarial attack that combines optimization-based and transfer-based methods to achieve fewer queries, a higher success rate, and lower distortion. Experimental results demonstrate a black-box attack success rate of over 98.5% on MNIST, CIFAR-10, and ImageNet at a given distortion budget, with fewer queries than other state-of-the-art methods.

KNOWLEDGE-BASED SYSTEMS (2021)

Proceedings Paper Computer Science, Artificial Intelligence

Feature Importance-aware Transferable Adversarial Attacks

Zhibo Wang et al.

Summary: Transferability of adversarial examples is crucial for attacking unknown models, yet existing transferable attacks tend to degrade prediction accuracy on the source model without considering intrinsic object features. The Feature Importance-aware Attack (FIA) disrupts important object-aware features to achieve stronger transferability by introducing aggregate gradient-based feature importance. FIA outperforms state-of-the-art transferable attacks, improving success rates against both normally trained and defended models. The aggregate-gradient computation is sketched after this entry.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
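A hedged sketch of the aggregate gradient the summary mentions: the gradient of an intermediate layer is averaged over randomly masked copies of the input, yielding feature-importance weights that are robust to model-specific noise. The layer choice, keep probability, and number of samples are assumptions.

```python
import torch

def aggregate_gradient(model, layer, x, y_onehot, n=30, keep_prob=0.9):
    """Average the gradient at `layer` over randomly masked copies of x;
    the result weights which features the attack should suppress."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(f=o))
    agg = torch.zeros(1)
    for _ in range(n):
        # Randomly drop pixels to decorrelate the gradient from one forward pass.
        xm = (x * torch.bernoulli(torch.full_like(x, keep_prob))).requires_grad_(True)
        logits = model(xm)
        g = torch.autograd.grad((logits * y_onehot).sum(), feats["f"])[0]
        agg = agg + g
    handle.remove()
    return agg / agg.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
```

The attack then descends `(weights * features).sum()` on the adversarial example, suppressing features with positive importance for the true class.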

Proceedings Paper Computer Science, Artificial Intelligence

AdvDrop: Adversarial Attack to DNNs by Dropping Information

Ranjie Duan et al.

Summary: In the study of adversarial attacks, the authors propose a new method called AdvDrop, which creates adversarial examples by dropping existing information from images rather than adding perturbations, making the resulting examples difficult for current defense systems to counter. A sketch of the underlying information-dropping mechanism follows this entry.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
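A minimal sketch of "dropping information" via blockwise DCT quantization, the mechanism AdvDrop builds on. The fixed quantization step `q` is an assumption: AdvDrop learns the quantization adversarially rather than using a constant table.

```python
import numpy as np
from scipy.fft import dctn, idctn

def drop_information(img, q=32, block=8):
    """Quantize 8x8 DCT blocks of a grayscale image in [0, 255]."""
    h, w = img.shape
    out = img.astype(np.float64).copy()
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coef = dctn(img[i:i + block, j:j + block], norm="ortho")
            coef = np.round(coef / q) * q   # the information drop happens here
            out[i:i + block, j:j + block] = idctn(coef, norm="ortho")
    return out.clip(0, 255)
```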

Proceedings Paper Computer Science, Artificial Intelligence

Admix: Enhancing the Transferability of Adversarial Attacks

Xiaosen Wang et al.

Summary: This study proposes a new input-transformation-based attack called Admix, which generates more transferable adversaries by considering not only the input image but also a set of images randomly sampled from other categories. Empirical evaluations show that Admix outperforms existing input transformations in transferability on the standard ImageNet dataset and can further improve attack performance when combined with existing methods. The transformation itself is sketched after this entry.

2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) (2021)
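A minimal sketch of the Admix input transformation: each gradient step is averaged over copies of the input admixed with foreign images and rescaled. The mixing strength `eta` and copy counts follow the paper's spirit, but the exact values here are illustrative.

```python
import torch

def admix(x, x_other_pool, eta=0.2, num_mix=3, num_scales=5):
    """Return admixed, multi-scale copies of x (N, C, H, W) for gradient averaging.
    x_other_pool is a batch (M, C, H, W) of images from other categories."""
    outs = []
    for _ in range(num_mix):
        idx = torch.randint(len(x_other_pool), (1,)).item()
        x_tilde = x + eta * x_other_pool[idx]      # admix a foreign image
        for i in range(num_scales):
            outs.append(x_tilde / (2 ** i))        # SI-FGSM-style scale copies
    return torch.cat(outs)                         # (num_mix * num_scales * N, C, H, W)
```

Gradients are computed on all copies and averaged before the sign step, as in scale-invariant attack pipelines.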

Article Computer Science, Artificial Intelligence

WaveCNet: Wavelet Integrated CNNs to Suppress Aliasing Effect for Noise-Robust Image Classification

Qiufu Li et al.

Summary: By integrating the discrete wavelet transform (DWT) into convolutional neural networks (CNNs), the proposed approach enhances noise robustness and adversarial robustness, achieving higher accuracy in image classification tasks. Decomposing feature maps with the DWT into low-frequency and high-frequency components, the model builds robust high-level features from the low-frequency part and discards the noise carried in the high-frequency part, leading to improved performance on the ImageNet dataset and under adversarial attacks. A minimal wavelet-pooling sketch follows this entry.

IEEE TRANSACTIONS ON IMAGE PROCESSING (2021)
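A minimal sketch of wavelet-based downsampling in the spirit of WaveCNet: a pooling step is replaced by a DWT that keeps only the low-frequency band. This standalone version operates on a NumPy array with PyWavelets; WaveCNet integrates the transform inside CNN layers.

```python
import numpy as np
import pywt

def wavelet_pool(feature_map, wavelet="haar"):
    """Downsample a 2-D feature map, discarding high-frequency (noisy) bands."""
    low, (_lh, _hl, _hh) = pywt.dwt2(feature_map, wavelet)
    return low  # half resolution, denoised low-frequency component
```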

Proceedings Paper Computer Science, Artificial Intelligence

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

Zhengyu Zhao et al.

2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2020)

Article Computer Science, Artificial Intelligence

SAR image segmentation based on convolutional-wavelet neural network and Markov random field

Yiping Duan et al.

PATTERN RECOGNITION (2017)

Proceedings Paper Computer Science, Information Systems

Towards Evaluating the Robustness of Neural Networks

Nicholas Carlini et al.

2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP) (2017)

Proceedings Paper Computer Science, Artificial Intelligence

Fast R-CNN

Ross Girshick

2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) (2015)

Article Computer Science, Artificial Intelligence

Image information and visual quality

HR Sheikh et al.

IEEE TRANSACTIONS ON IMAGE PROCESSING (2006)

Article Computer Science, Artificial Intelligence

Image quality assessment: From error visibility to structural similarity

Z Wang et al.

IEEE TRANSACTIONS ON IMAGE PROCESSING (2004)

Article Engineering, Electrical & Electronic

A universal image quality index

Z Wang et al.

IEEE SIGNAL PROCESSING LETTERS (2002)