Article

Generating adversarial examples without specifying a target model

Journal

PEERJ COMPUTER SCIENCE
Volume 7

Publisher

PEERJ INC
DOI: 10.7717/peerj-cs.702

Keywords

Deep learning; Adversarial example; Generative adversarial networks; Adversarial machine learning

Funding

  1. National Natural Science Foundation of China [61572034]
  2. Major Science and Technology Projects in Anhui Province [18030901025]
  3. Anhui Province University Natural Science Fund [KJ2019A0109]
  4. Natural Science Foundation of Anhui Province of China [2008085MF220]
  5. Science and Technology Project of Wuhu City [2020yf48]

Abstract
Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model, and in practice an attacker who issues too many queries is easily detected; this problem is especially acute in the black-box setting. To address it, we propose Attack Without a Target Model (AWTM). Our algorithm does not specify any target model when generating adversarial examples, so it never needs to query the target. Experimental results show a maximum attack success rate of 81.78% on the MNIST dataset and 87.99% on the CIFAR-10 dataset. In addition, because it is a GAN-based method, its time cost is low.
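The core idea in the abstract, that a pre-trained generator can emit adversarial examples without ever querying a target model, can be illustrated with a minimal sketch. This is not the paper's AWTM implementation: the GAN generator is replaced by a hypothetical stand-in (`toy_generator`, a random linear map), and the perturbation bound `epsilon` and noise dimension are illustrative assumptions. The point the sketch makes is structural: at attack time, crafting `x_adv` involves only the generator, with zero calls to any classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z, out_dim=784):
    """Hypothetical stand-in for a trained GAN generator.

    Maps a noise vector to a flat perturbation bounded in (-1, 1);
    the real AWTM generator would be a trained network, not this
    random linear map.
    """
    w = rng.standard_normal((z.size, out_dim)) * 0.01
    return np.tanh(z @ w)

def craft_adversarial(x, epsilon=0.3):
    """Add a generator-produced perturbation and clip to the valid pixel range.

    Note: no target model appears anywhere in this function -- the
    attack-time step requires no queries.
    """
    z = rng.standard_normal(64)                      # illustrative noise dim
    delta = epsilon * toy_generator(z, out_dim=x.size)
    return np.clip(x + delta.reshape(x.shape), 0.0, 1.0)

x = rng.uniform(size=(28, 28))   # dummy MNIST-sized image in [0, 1]
x_adv = craft_adversarial(x)
print(x_adv.shape)
```

Because `x` lies in [0, 1] and the perturbation is scaled by `epsilon` before clipping, the crafted example stays within an L-infinity ball of radius `epsilon` around the original image, the usual constraint in this line of work.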

