Article

Wild patterns: Ten years after the rise of adversarial machine learning

Journal

PATTERN RECOGNITION
Volume 84, Issue -, Pages 317-331

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2018.07.023

Keywords

Adversarial machine learning; Evasion attacks; Poisoning attacks; Adversarial examples; Secure learning; Deep learning

Funding

  1. EU H2020 project ALOHA under the European Union's Horizon 2020 research and innovation programme [780788]

Abstract

Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
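
To make the test-time (evasion) setting mentioned in the abstract concrete, the sketch below shows how small, carefully crafted input perturbations can flip the prediction of a linear classifier, in the spirit of gradient-sign evasion attacks. This is a minimal illustration under assumed weights, input, and perturbation budget; it is not code or data from the paper itself.

```python
# Minimal sketch of a test-time evasion attack on a linear classifier.
# All values (weights, input, epsilon) are illustrative assumptions.
import numpy as np

d = 100                          # input dimensionality (assumed)
w = np.full(d, 0.05)             # weights of a hypothetical trained linear model
b = 0.0

def predict(x):
    """Classify as 1 if w.x + b > 0, else 0."""
    return int(x @ w + b > 0)

x_clean = np.full(d, -0.2)       # a prototypical class-0 input
print(predict(x_clean), x_clean @ w + b)   # -> 0, margin -1.0

# For a linear model with the logistic loss on a class-0 input, the sign of
# the loss gradient w.r.t. the input is simply sign(w), so a gradient-sign
# perturbation x + eps * sign(grad_x loss) reduces to:
eps = 0.25                       # per-feature perturbation budget (assumed)
x_adv = x_clean + eps * np.sign(w)

print(predict(x_adv), x_adv @ w + b)       # -> 1, margin +0.25
# Each feature moved by only 0.25, yet the tiny per-feature weight (0.05)
# accumulates over 100 features, shifting the score by 1.25 and flipping
# the prediction: the core intuition behind evasion attacks.
```

Poisoning attacks, also covered by the survey, instead manipulate the training data so that the learned model itself is corrupted; the same accumulation-of-small-effects intuition applies at training time.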
