4.7 Article

CGN: Class gradient network for the construction of adversarial samples

Related references

Note: Only a subset of the related references is listed.
Article Computer Science, Artificial Intelligence

Query efficient black-box adversarial attack on deep neural networks

Yang Bai et al.

Summary: This paper proposes a Neural Process based black-box adversarial attack (NP-Attack), which utilizes image structure information and surrogate models to significantly reduce the query counts in black-box settings.

PATTERN RECOGNITION (2023)

Article Computer Science, Artificial Intelligence

Meta-learning-based adversarial training for deep 3D face recognition on point clouds

Cuican Yu et al.

Summary: Deep face recognition from 2D face images has advanced rapidly thanks to the availability of large-scale face data, but deep 3D face recognition (3DFR) on point clouds remains underexplored. This paper proposes a meta-learning-based adversarial training algorithm for deep 3D face recognition on point clouds. The algorithm combines adversarial sample generation with meta-learning-based network training to continuously generate diverse adversarial samples and improve the accuracy of the 3DFR model.

PATTERN RECOGNITION (2023)

Article Computer Science, Artificial Intelligence

Collaborative Learning with Unreliability Adaptation for Semi-Supervised Image Classification

Xiaoyang Huo et al.

Summary: This paper proposes a collaborative learning model in which multiple networks learn collaboratively by adapting their predictions. Introducing adaptation modules and consistency regularization improves training performance and stability across the networks.

PATTERN RECOGNITION (2023)

Article Computer Science, Information Systems

EGA-Net: Edge feature enhancement and global information attention network for RGB-D salient object detection

Longsheng Wei et al.

Summary: This study proposes a novel network, EGA-Net, to improve edge quality and highlight the main features of salient objects in RGB-D salient object detection. The network includes feature interaction and edge feature enhancement modules, as well as a global information guide integration module. Experimental results show that the method outperforms 19 other methods on multiple evaluation metrics.

INFORMATION SCIENCES (2023)

Article Computer Science, Information Systems

Noise-related face image recognition based on double dictionary transform learning

Mengmeng Liao et al.

Summary: A novel noise-related face image recognition method based on double dictionary transform learning (DDTL) is proposed in this paper. The method removes the redundant information and noise in the training images, making the learned dictionary more discriminative. It also introduces a linear regression term to enhance the differences between classes. Experimental results demonstrate that the proposed method outperforms existing methods.

INFORMATION SCIENCES (2023)

Article Computer Science, Information Systems

Sensitive region-aware black-box adversarial attacks

Chenhao Lin et al.

Summary: Recent research has shown that deep neural networks (DNNs) are vulnerable to adversarial perturbations. However, existing approaches generate global perturbations that are visible to the human eye, limiting their effectiveness in real-world scenarios. This paper proposes a new framework, Sensitive Region-Aware Attack (SRA), which generates imperceptible black-box adversarial examples by identifying sensitive regions and applying evolution strategies. Experimental results demonstrate that SRA achieves a high success rate for imperceptible black-box attacks while modifying only a limited number of image pixels.

INFORMATION SCIENCES (2023)

Article Computer Science, Information Systems

Improving the invisibility of adversarial examples with perceptually adaptive perturbation

Yaoyuan Zhang et al.

Summary: This paper proposes the Perceptual Sensitive Attack (PS Attack) to address the vulnerability of deep neural networks to adversarial examples. By incorporating the Just Noticeable Difference (JND) matrix and human perceptual constraints, PS Attack generates imperceptible adversarial perturbations. Furthermore, PS Attack mitigates the trade-off between the imperceptibility and transferability of adversarial images. Experimental results demonstrate that combining PS Attack with state-of-the-art black-box approaches significantly enhances the naturalness of adversarial examples.

INFORMATION SCIENCES (2023)

Article Computer Science, Artificial Intelligence

Deep neural networks-based relevant latent representation learning for hyperspectral image classification

Akrem Sellami et al.

Summary: The study introduces a novel methodology for hyperspectral image classification using multi-view deep neural networks that combines spectral and spatial features to enhance classification performance with limited labeled samples.

PATTERN RECOGNITION (2022)

Article Computer Science, Information Systems

A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions

Teng Long et al.

Summary: This paper reviews classical and recent representative adversarial attacks and analyzes the development of the field using knowledge-graph and visualization techniques. The study shows that deep learning remains vulnerable to adversarial attacks and identifies directions for future research.

COMPUTERS & SECURITY (2022)

Article Computer Science, Information Systems

Compound adversarial examples in deep neural networks

Yanchun Li et al.

Summary: This paper introduces a method for generating compound adversarial examples that combines perturbation and patch attack modes. The experiments demonstrate that compound attack can improve the generative efficiency of adversarial examples and achieve higher attack success rate with fewer iteration steps. The compound adversarial examples also successfully attack defensive mechanisms that were previously able to defend against perturbation or patch attacks.

INFORMATION SCIENCES (2022)

Article Computer Science, Artificial Intelligence

Query-efficient decision-based attack via sampling distribution reshaping

Xuxiang Sun et al.

Summary: This paper introduces SDR, a normal-vector estimation framework for high-dimensional decision-based attacks that reshapes the sampling distribution, and incorporates it into a general geometric attack framework. Experimental evaluations show that SDR achieves competitive ℓp norms, indicating its value in enhancing attack performance.

PATTERN RECOGNITION (2022)

Article Engineering, Electrical & Electronic

Multi-Scale Metric Learning for Few-Shot Learning

Wen Jiang et al.

Summary: This paper proposes a novel few-shot learning method called multi-scale metric learning (MSML) that tackles the few-shot classification problem by extracting multi-scale features and learning multi-scale relationships. The method introduces a feature pyramid structure and a multi-scale relation generation network, and optimizes the deep network with intra-class and inter-class relation losses, achieving superior performance on miniImageNet and tieredImageNet.

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (2021)

Article Engineering, Electrical & Electronic

Multi-Source Adversarial Sample Attack on Autonomous Vehicles

Zuobin Xiong et al.

Summary: Deep learning performs well in object detection and classification for autonomous vehicles but is vulnerable to adversarial samples. This paper proposes two multi-source adversarial sample attack models that can effectively break down the perception systems of autonomous vehicles.

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY (2021)

Article Computer Science, Information Systems

Towards a physical-world adversarial patch for blinding object detection models

Yajie Wang et al.

Summary: The paper introduces a novel adversarial patch attack that makes specific objects invisible to object detection models, demonstrating high transferability across different architectures and datasets. The attack successfully fools several state-of-the-art object detection models, demonstrating their vulnerability in both the digital and physical worlds.

INFORMATION SCIENCES (2021)

Article Engineering, Multidisciplinary

Adversarial Attacks and Defenses in Deep Learning

Kui Ren et al.

ENGINEERING (2020)

Article Geochemistry & Geophysics

Transfer Learning for SAR Image Classification via Deep Joint Distribution Adaptation Networks

Jie Geng et al.

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2020)

Article Computer Science, Hardware & Architecture

Generative Adversarial Networks

Ian Goodfellow et al.

COMMUNICATIONS OF THE ACM (2020)

Article Computer Science, Artificial Intelligence

Adversarial Examples: Attacks and Defenses for Deep Learning

Xiaoyong Yuan et al.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2019)

Article Computer Science, Hardware & Architecture

ImageNet Classification with Deep Convolutional Neural Networks

Alex Krizhevsky et al.

COMMUNICATIONS OF THE ACM (2017)