Article

Relationship Between Nonsmoothness in Adversarial Training, Constraints of Attacks, and Flatness in the Input Space

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2023.3244172

Keywords

Adversarial robustness; adversarial training (AT); deep neural network (DNN); optimization

Abstract

Adversarial training (AT) is a promising method to improve the robustness against adversarial attacks. However, its performance in practice is still not satisfactory compared with standard training. To reveal the cause of this difficulty, we analyze the smoothness of the loss function in AT, which determines the training performance. We reveal that nonsmoothness is caused by the constraint of adversarial attacks and depends on the type of constraint; specifically, the L-infinity constraint can cause more nonsmoothness than the L-2 constraint. In addition, we find an interesting property of AT: a flatter loss surface in the input space tends to correspond to a less smooth adversarial loss surface in the parameter space. To confirm that nonsmoothness causes the poor performance of AT, we theoretically and experimentally show that smoothing the adversarial loss with EntropySGD (EnSGD) improves the performance of AT.
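
The abstract contrasts the L-infinity and L-2 attack constraints in the inner maximization of AT. Below is a minimal sketch, not the authors' implementation, of a PGD-based adversarial training step in PyTorch that shows where the constraint type enters; `model`, `eps`, `alpha`, and `steps` are illustrative assumptions, and inputs are assumed to be image batches of shape (B, C, H, W).

```python
# Minimal sketch of PGD-based adversarial training (illustrative only).
# Input-range clamping and random initialization of the perturbation are omitted.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, norm="linf"):
    """Inner maximization: iterative gradient ascent projected onto the eps-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            if norm == "linf":
                # Sign-gradient step, then clamp back into the L-infinity ball.
                delta += alpha * grad.sign()
                delta.clamp_(-eps, eps)
            else:
                # Normalized-gradient step, then project onto the L-2 ball.
                g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
                delta += alpha * g
                norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                delta *= (eps / (norms + 1e-12)).clamp(max=1.0)
    return (x + delta).detach()


def adversarial_training_step(model, optimizer, x, y, norm="linf"):
    """Outer minimization: one optimizer step on the adversarial loss."""
    x_adv = pgd_attack(model, x, y, norm=norm)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In terms of this sketch, the paper's claim is that the adversarial loss obtained by maximizing over the L-infinity ball is less smooth in the parameter space than the one obtained over the L-2 ball, and that a smoothing optimizer such as EnSGD, which would take the place of the plain optimizer step in the outer minimization, improves AT.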

