Article

Adversarial attacks and defenses in deep learning for image recognition: A survey

Journal

NEUROCOMPUTING
Volume 514, Issue -, Pages 162-181

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.09.004

Keywords

Deep neural network; Adversarial attack; Adversarial defense; Robustness

Abstract

In recent years, research on adversarial attacks and defense mechanisms has received much attention. It has been observed that adversarial examples crafted with small malicious perturbations can mislead a deep neural network (DNN) model into outputting wrong predictions, even though these perturbations are imperceptible to humans. The existence of adversarial examples poses a great threat to the robustness of DNN-based models, so it is necessary to study the principles behind them and to develop countermeasures. This paper surveys and summarizes recent advances in attack and defense methods extensively and in detail, and analyzes and compares the pros and cons of the various schemes. Finally, we discuss the main challenges and future research directions in this field. (c) 2022 Published by Elsevier B.V.
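The idea of crafting small perturbations that flip a model's prediction can be sketched with the fast gradient sign method (FGSM), one of the classic attacks covered by surveys of this kind. The sketch below is illustrative only and is not taken from the surveyed paper: it uses a hypothetical two-dimensional logistic "model" with made-up weights, and perturbs the input in the direction that increases the loss, x_adv = x + eps * sign(∂loss/∂x).

```python
import numpy as np

# Illustrative FGSM sketch (not the paper's method): attack a tiny logistic
# model f(x) = sigmoid(w.x + b) by stepping the input in the direction of
# the sign of the loss gradient. Weights and input are hypothetical.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Craft an adversarial example for a logistic model with an L-infinity budget eps."""
    p = sigmoid(w @ x + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w     # gradient of the cross-entropy loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Hypothetical weights and a correctly classified input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])     # w @ x + b = 0.5 > 0, so the model predicts class 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.4)
print(sigmoid(w @ x + b) > 0.5)      # original input: predicted class 1 (True)
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips (False)
```

Each coordinate of the input moves by at most eps = 0.4, yet the prediction flips; in high-dimensional image space the same effect is achieved with perturbations far too small for a human to notice.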

