Article

A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions

Journal

Computers & Security
Volume 121

Publisher

Elsevier Advanced Technology
DOI: 10.1016/j.cose.2022.102847

Keywords

Deep learning; Adversarial attack; Black-box attack; White-box attack; Robustness; Visualization analysis

Funding

  1. National Natural Science Foundation of China [62002332, 62072443]


This paper reviews classical and recent representative adversarial attacks and analyzes the development of the field using knowledge-graph and visualization techniques. The study shows that deep learning remains vulnerable to adversarial attacks and identifies future research directions.
Deep learning has been widely applied in fields such as computer vision, natural language processing, and data mining. Although deep learning has achieved significant success in solving complex problems, deep neural networks have been shown to be vulnerable to adversarial attacks that cause models to fail at their tasks, which limits the application of deep learning in security-critical areas. In this paper, we first review some of the classical and latest representative adversarial attacks, organized under a reasonable taxonomy. Then, we construct a knowledge graph from citation relationships using the VOSviewer software, and visualize and analyze the development of the field based on 5923 articles from Scopus. Finally, we propose possible research directions for adversarial attacks based on trends identified through keyword-detection analysis. All the data used for visualization are available at: https://github.com/NanyunLengmu/Adversarial-Attack-Visualization. (C) 2022 Elsevier Ltd. All rights reserved.
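The taxonomy surveyed in the paper distinguishes white-box attacks (the attacker has gradient access) from black-box attacks. As a minimal illustration not drawn from the paper itself, the classic white-box Fast Gradient Sign Method (FGSM) can be sketched against a toy logistic-regression model in NumPy; the weights, input, and epsilon below are purely illustrative assumptions:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """FGSM against a logistic-regression model p = sigmoid(w.x + b).

    Perturbs x by eps in the sign of the loss gradient, the direction
    that maximally increases the cross-entropy loss per unit L-inf norm.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction in (0, 1)
    grad_x = (p - y) * w           # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy demo: a clean point correctly classified as class 1 (w.x + b = 1.5 > 0)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, eps=1.0)
print(w @ x + b, w @ x_adv + b)  # decision score before vs. after the attack
```

With these illustrative values, a single FGSM step flips the sign of the decision score, so the perturbed input is misclassified even though it moved by at most eps per coordinate.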
