Venue
2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019)
Pages: 1486-1491
Publisher: IEEE
DOI: 10.1109/ICDM.2019.00195
Keywords
Deep neural networks; adversarial attack; convex programming
Funding
- National Science Foundation [CAREER CMMI-1750531, ECCS1609916, CNS-1739748, CNS-1704662]
As deep neural networks (DNNs) achieve extraordinary performance in a wide range of tasks, testing their robustness under adversarial attacks becomes paramount. Adversarial attacks, also known as adversarial examples, are used to measure the robustness of DNNs and are generated by incorporating imperceptible perturbations into the input data with the intention of altering a DNN's classification. Most prior optimization-based methods in this area employ gradient descent to find adversarial examples. In this paper, we present an innovative method which generates adversarial examples via convex programming. Our experimental results demonstrate that we can generate adversarial examples with lower distortion and higher transferability than the C&W attack, which is the current state-of-the-art adversarial attack method for DNNs. We achieve a 100% attack success rate on both the original undefended models and the adversarially-trained models. The distortions of our L∞ attack are respectively 31% and 18% lower than the C&W attack for the best case and average case on the CIFAR-10 data set.
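The gradient-descent baseline the abstract contrasts with can be illustrated with a minimal sketch. The snippet below is not the paper's convex-programming method; it is a hypothetical one-step FGSM-style perturbation on a toy linear softmax classifier (all names and parameters are illustrative), showing how a gradient-based attack nudges the input in the direction that increases the classification loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, y_true, W, b, eps=0.5):
    """One FGSM-style step on a linear softmax classifier:
    x_adv = x + eps * sign(grad_x cross_entropy)."""
    logits = W @ x + b
    p = softmax(logits)
    # gradient of cross-entropy w.r.t. the logits is (p - one_hot(y_true))
    grad_logits = p.copy()
    grad_logits[y_true] -= 1.0
    grad_x = W.T @ grad_logits  # chain rule through logits = W x + b
    return x + eps * np.sign(grad_x)

# toy 2-class example with random weights (illustrative only)
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
b = np.zeros(2)
x = rng.normal(size=4)
y = int(np.argmax(W @ x + b))       # class currently predicted for x
x_adv = fgsm_perturb(x, y, W, b)    # perturbed input with higher loss on y
```

Optimization-based attacks such as C&W, and the convex-programming approach of this paper, instead solve for a minimum-distortion perturbation subject to a misclassification constraint, rather than taking fixed-size gradient sign steps.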