Article

Adversarial Examples: Opportunities and Challenges

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TNNLS.2019.2933524

Keywords

Artificial intelligence; Biological neural networks; Neurons; Robots; Perturbation methods; Security; Training; Adversarial examples (AEs); artificial intelligence (AI); deep neural networks (DNNs)

Funding

  1. National Natural Science Foundation of China [61874042, 61602107]
  2. Key Research and Development Program of Hunan Province [2019GK2082, 2018RS3041]
  3. Peng Cheng Laboratory Project of Guangdong Province [PCL2018KP004]
  4. Fundamental Research Funds for the Central Universities

Abstract

Deep neural networks (DNNs) have demonstrated performance surpassing that of humans in image recognition, speech processing, autonomous vehicles, and medical diagnosis. However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are crafted by attackers to fool deep learning models. Unlike real examples, AEs can mislead a model into producing incorrect outputs while remaining nearly indistinguishable to human eyes, and therefore threaten security-critical deep-learning applications. In recent years, the generation of and defense against AEs have become a research hotspot in the field of artificial intelligence (AI) security. This article reviews the latest research progress on AEs. First, we introduce the concept, causes, characteristics, and evaluation metrics of AEs, and then survey state-of-the-art AE generation methods, discussing their advantages and disadvantages. After that, we review existing defenses and discuss their limitations. Finally, we outline future research opportunities and challenges concerning AEs.
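
To make the notion of AE generation concrete, the sketch below shows the fast gradient sign method (FGSM), one of the earliest and most widely cited generation methods in this line of work. It is a minimal illustration, not code from the article: the model, the input range [0, 1], and the epsilon budget are assumptions chosen for the example.

    # Hypothetical FGSM sketch (assumes a differentiable PyTorch classifier).
    # `x` is a batch of inputs scaled to [0, 1], `y` the ground-truth labels,
    # and `eps` the L-infinity perturbation budget.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Perturb each input element in the direction that increases the loss,
        # bounded by eps, then clip back to the valid input range.
        x_adv = x_adv + eps * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Because the perturbation is bounded by eps per element, the resulting example typically looks unchanged to a human observer while shifting the model's prediction, which is exactly the property of AEs that the article surveys.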
