4.6 Article

Adversarial attacks and defenses in deep learning for image recognition: A survey

Journal

NEUROCOMPUTING
Volume 514, Pages 162-181

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.09.004

Keywords

Deep neural network; Adversarial attack; Adversarial defense; Robustness

This paper provides a comprehensive survey of recent advances in adversarial attack and defense methods. It analyzes and compares the pros and cons of various schemes, and discusses the main challenges and future research directions in this field.
In recent years, research on adversarial attacks and defense mechanisms has attracted much attention. It has been observed that adversarial examples crafted with small malicious perturbations can mislead a deep neural network (DNN) model into producing wrong predictions, even though these perturbations are imperceptible to humans. The existence of adversarial examples poses a great threat to the robustness of DNN-based models, so it is necessary to study the principles behind them and to develop countermeasures. This paper surveys and summarizes recent advances in attack and defense methods extensively and in detail, and analyzes and compares the pros and cons of the various attack and defense schemes. Finally, we discuss the main challenges and future research directions in this field. (c) 2022 Published by Elsevier B.V.
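
For readers new to the topic, the sketch below illustrates the kind of attack such surveys cover: the classic fast gradient sign method (FGSM). This is an illustrative example rather than a method proposed in this paper; the model, labels, and the perturbation budget of 8/255 are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Craft adversarial examples with the fast gradient sign method (FGSM).

    The clean input x is perturbed by epsilon in the sign of the loss
    gradient, pushing the model toward a wrong prediction while keeping
    the change visually imperceptible.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one step that increases the loss, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A single step with such a small budget is typically invisible to a human observer, yet it is often enough to flip the prediction of an undefended image classifier; defenses such as adversarial training retrain the model on examples generated in this way.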
