Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 31, Issue 7, Pages 2578-2593
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2019.2933524
Keywords
Artificial intelligence; Biological neural networks; Neurons; Robots; Perturbation methods; Security; Training; Adversarial examples (AEs); artificial intelligence (AI); deep neural networks (DNNs)
Funding
- National Natural Science Foundation of China [61874042, 61602107]
- Key Research and Development Program of Hunan Province [2019GK2082, 2018RS3041]
- Peng Cheng Laboratory Project of Guangdong Province [PCL2018KP004]
- Fundamental Research Funds for the Central Universities
Abstract
Deep neural networks (DNNs) have shown huge superiority over humans in image recognition, speech processing, autonomous vehicles, and medical diagnosis. However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are crafted by attackers to fool deep learning models. Unlike real examples, AEs can mislead a model into predicting incorrect outputs while remaining barely distinguishable to the human eye, thereby threatening security-critical deep-learning applications. In recent years, the generation of and defense against AEs have become a research hotspot in the field of artificial intelligence (AI) security. This article reviews the latest research progress on AEs. First, we introduce the concept, causes, characteristics, and evaluation metrics of AEs; we then survey state-of-the-art AE generation methods and discuss their advantages and disadvantages. After that, we review existing defenses and discuss their limitations. Finally, we outline future research opportunities and challenges concerning AEs.
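To make the idea of AE generation concrete, the sketch below illustrates the fast gradient sign method (FGSM), one of the classic gradient-based attacks a survey like this typically covers. It is a minimal PyTorch illustration, not the article's own implementation; the model, epsilon value, and [0, 1] pixel range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb the input along the sign of the
    loss gradient so that the model's loss on (x, y) increases.
    `epsilon` bounds the per-pixel perturbation (L-infinity norm)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step of size epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the adversarial image in the valid pixel range (assumed [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded by a small epsilon, the adversarial image looks essentially identical to the original, yet it can flip the model's prediction; this is exactly the property the abstract describes.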