Article

Relationship Between Nonsmoothness in Adversarial Training, Constraints of Attacks, and Flatness in the Input Space

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2023.3244172

Keywords

Adversarial robustness; adversarial training (AT); deep neural network (DNN); optimization


Adversarial training (AT) is a promising method for improving robustness against adversarial attacks. However, its performance is still not satisfactory in practice compared with standard training. To reveal the cause of the difficulty of AT, we analyze the smoothness of the loss function in AT, which determines training performance. We reveal that nonsmoothness is caused by the constraint of adversarial attacks and depends on the type of constraint. Specifically, the L-infinity constraint can cause more nonsmoothness than the L-2 constraint. In addition, we found an interesting property of AT: a flatter loss surface in the input space tends to correspond to a less smooth adversarial loss surface in the parameter space. To confirm that nonsmoothness causes the poor performance of AT, we theoretically and experimentally show that smoothing the adversarial loss with EntropySGD (EnSGD) improves the performance of AT.
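To make the constraint distinction concrete, here is a minimal sketch (not from the paper; all names and values are illustrative) of one inner-maximization step of adversarial training on a toy logistic-regression loss. Under the L-infinity constraint the steepest-ascent direction is sign(gradient), which changes discontinuously as the parameters vary — the kind of constraint-induced nonsmoothness the abstract describes — whereas the L-2 direction is the normalized gradient, which varies continuously.

```python
import numpy as np

def loss(w, x, y):
    """Toy logistic loss log(1 + exp(-y * w.x)) for a single example."""
    return np.log1p(np.exp(-y * np.dot(w, x)))

def loss_grad_x(w, x, y):
    """Gradient of the logistic loss with respect to the input x."""
    return -y * w / (1.0 + np.exp(y * np.dot(w, x)))

def linf_attack_step(w, x, y, eps):
    """FGSM-style L-infinity step: move eps along sign(grad).
    The sign() makes the attack direction piecewise constant in w."""
    return x + eps * np.sign(loss_grad_x(w, x, y))

def l2_attack_step(w, x, y, eps):
    """L-2 step: move eps along the normalized gradient (smooth in w)."""
    g = loss_grad_x(w, x, y)
    norm = np.linalg.norm(g)
    return x + eps * g / norm if norm > 0 else x

# Illustrative data; both steps stay inside their eps-balls and raise the loss.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
y = 1.0
x_inf = linf_attack_step(w, x, y, eps=0.1)
x_l2 = l2_attack_step(w, x, y, eps=0.1)
```

In full PGD-style adversarial training, this ascent step would be repeated several times with projection back onto the eps-ball, and the resulting worst-case loss minimized over w; this sketch shows only the single step needed to contrast the two constraints.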

