Proceedings Paper

Generation of Low Distortion Adversarial Attacks via Convex Programming

Publisher

IEEE
DOI: 10.1109/ICDM.2019.00195

Keywords

Deep neural networks; adversarial attack; convex programming

Funding

  1. National Science Foundation [CAREER CMMI-1750531, ECCS-1609916, CNS-1739748, CNS-1704662]

Abstract

As deep neural networks (DNNs) achieve extraordinary performance in a wide range of tasks, testing their robustness under adversarial attacks becomes paramount. Adversarial attacks, also known as adversarial examples, are used to measure the robustness of DNNs and are generated by adding imperceptible perturbations to the input data with the intention of altering a DNN's classification. Most prior optimization-based methods employ gradient descent to find adversarial examples. In this paper, we present a novel method that generates adversarial examples via convex programming. Our experimental results demonstrate that we can generate adversarial examples with lower distortion and higher transferability than the C&W attack, the current state-of-the-art adversarial attack method for DNNs. We achieve a 100% attack success rate on both the original undefended models and the adversarially trained models. The distortions of our L∞ attack are 31% and 18% lower than those of the C&W attack in the best case and average case, respectively, on the CIFAR-10 data set.
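
The paper's exact formulation is not reproduced in this record. As a hypothetical illustration of the general idea only, the sketch below casts a targeted attack on a toy *linear* classifier as a convex program with cvxpy: minimize the L∞ distortion of the perturbation subject to an affine misclassification constraint, which is exactly convex for a linear model. All names and values (W, b, kappa, the choice of target class) are made up for the example and are not taken from the paper.

```python
# Hypothetical sketch: a convex-programming adversarial attack on a toy
# linear classifier. NOT the authors' method for deep networks; for a
# linear model the misclassification constraint is affine, hence convex.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
d, k = 32, 10                     # input dimension, number of classes
W = rng.normal(size=(k, d))       # weights of a toy linear classifier
b = rng.normal(size=k)            # biases
x = rng.normal(size=d)            # clean input
y = int(np.argmax(W @ x + b))     # its current predicted label
t = (y + 1) % k                   # an arbitrary target class
kappa = 0.1                       # confidence margin (assumed value)

delta = cp.Variable(d)            # the perturbation we solve for
# Require the target logit to beat the current logit by a margin;
# affine in delta, so the feasible set is convex.
constraints = [(W[t] - W[y]) @ (x + delta) + (b[t] - b[y]) >= kappa]
# Minimize the L-infinity distortion, matching the metric in the abstract.
prob = cp.Problem(cp.Minimize(cp.norm(delta, "inf")), constraints)
prob.solve()

x_adv = x + delta.value
print("new label:", int(np.argmax(W @ x_adv + b)))
print("L_inf distortion:", np.abs(delta.value).max())
```

For a DNN the classification constraint is nonconvex, so an approach along these lines would need some convex surrogate or local approximation of the network; the sketch only shows why the linear case admits an exact convex program.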
