Article

Adversarial examples generated from sample subspace

Journal

COMPUTER STANDARDS & INTERFACES
Volume 82

Publisher

ELSEVIER
DOI: 10.1016/j.csi.2022.103634

Keywords

Adversarial examples; PCA; Defense; Deep learning

Funding

  1. National Natural Science Foundation of China [61966011, 62002074, 62102107]


This paper studies the nature of adversarial-example attacks from the perspective of the main and minor features identified by PCA, finding that deep learning models mainly learn the main features and proposing a method to generate adversarial samples in the sample subspace.
Attacks and defenses are a central issue in deep learning. Because deep learning models are vulnerable, they are easily fooled by adversarial examples: inputs to which carefully crafted perturbations have been added. Humans classify such samples correctly, while deep learning models tend to produce incorrect outputs with high confidence. The working mechanism of adversarial samples has long been both a focus and a difficulty of research. In this work, the nature of adversarial attacks is studied from the perspective of the main and minor features obtained by PCA (principal component analysis). Firstly, we find that deep learning models mainly learn the main features of the data rather than the minor features. Secondly, we discover that perturbations on the main features are more likely to cause misclassification, while perturbations on the minor features have little effect. Finally, we propose a method that generates adversarial samples in the sample subspace. Experimental results on both MNIST and CIFAR10 show that the proposed method generates smaller adversarial perturbations, which are harder for human eyes to detect and more effective in white-box attacks against deep learning models.
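The core idea described in the abstract, restricting a perturbation to the subspace spanned by the main PCA components, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' exact algorithm: the data matrix `X`, the number of main components `k`, and the raw perturbation `delta` are all hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch: project a perturbation onto the "main feature"
# subspace found by PCA, so the perturbation lies entirely in the span
# of the top-k principal components (the directions the model is
# hypothesized to rely on most).

rng = np.random.default_rng(0)

# Toy data: 200 samples of dimension 50, standing in for flattened images.
X = rng.normal(size=(200, 50))
X_centered = X - X.mean(axis=0)

# PCA via SVD; rows of Vt are principal directions sorted by variance.
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 5                    # assumed number of "main" components (hyperparameter)
V_main = Vt[:k]          # (k, 50) orthonormal basis of the main-feature subspace

delta = rng.normal(size=50)                # raw perturbation
delta_main = V_main.T @ (V_main @ delta)   # component in the main subspace
delta_minor = delta - delta_main           # residual in the minor subspace

# delta_main lies entirely inside the main subspace; its component in the
# minor subspace is numerically zero.
print(np.linalg.norm(V_main @ delta_minor))
```

In a full attack, `delta_main` (rather than `delta`) would be scaled and added to an input, which matches the paper's finding that perturbing the main features is what drives misclassification while minor-feature perturbations have little effect.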

