Article

A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions

Journal

COMPUTERS & SECURITY
Volume 121, Issue -, Pages -

Publisher

ELSEVIER ADVANCED TECHNOLOGY
DOI: 10.1016/j.cose.2022.102847

Keywords

Deep learning; Adversarial attack; Black-box attack; White-box attack; Robustness; Visualization analysis

Funding

  1. National Natural Science Foundation of China [62002332, 62072443]


This paper reviews classical and latest representative adversarial attacks and analyzes the subject development in this field using knowledge graph and visualization techniques. The study shows that deep learning is vulnerable to adversarial attacks, indicating the need for future research directions.
Deep learning has been widely applied in fields such as computer vision, natural language processing, and data mining. Although deep learning has achieved significant success on complex problems, deep neural networks have been shown to be vulnerable to adversarial attacks that cause models to fail at their tasks, which limits the application of deep learning in security-critical areas. In this paper, we first review classical and recent representative adversarial attacks, organized under a principled taxonomy. We then construct a knowledge graph from citation relationships using the VOSviewer software, and visualize and analyze the development of this field based on information from 5923 articles indexed in Scopus. Finally, we propose possible research directions for adversarial attacks based on trends identified through keyword-detection analysis. All the data used for visualization are available at: https://github.com/NanyunLengmu/Adversarial-Attack-Visualization. (C) 2022 Elsevier Ltd. All rights reserved.
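To make the vulnerability concrete, the following is a minimal sketch of one classical white-box attack discussed in such surveys, the Fast Gradient Sign Method (FGSM), which perturbs an input by a small step in the direction of the sign of the loss gradient: x_adv = x + eps * sign(dL/dx). The tiny logistic-regression "classifier", its weights, and the input below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: perturb x by eps in the direction that increases the loss,
    x_adv = x + eps * sign(dL/dx), for binary cross-entropy loss."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Illustrative linear model and a clean input with true label y = 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

clean_pred = sigmoid(w @ x + b)          # confidently class 1 (> 0.5)
x_adv = fgsm(x, y, w, b, eps=0.9)
adv_pred = sigmoid(w @ x_adv + b)        # pushed below 0.5: misclassified
print(clean_pred, adv_pred)
```

Even this two-dimensional toy model flips its prediction under a bounded perturbation; the surveyed attacks apply the same idea, with refinements, to deep networks on images.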

