Article

Adversarial Learning With Cost-Sensitive Classes

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 53, Issue 8, Pages 4855-4866

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2022.3146388

Keywords

Training; Robustness; Costs; Adversarial machine learning; Perturbation methods; Task analysis; Neural networks; Adversarial examples; adversarial training; attack and defense; cost sensitive; robustness

Abstract

This article proposes a framework that combines cost-sensitive classification and adversarial learning to protect special classes from attacks. A new defense model is built on the Min-Max property and a random-distribution analysis, and it performs better when attacks occur.
It is necessary in adversarial learning to improve the performance of some special classes or to particularly protect them from attacks. This article proposes a framework that combines cost-sensitive classification and adversarial learning to train a model that distinguishes between protected and unprotected classes, such that the protected classes are less vulnerable to adversarial examples. Within this framework, we find an interesting phenomenon during the training of deep neural networks, called the Min-Max property: the absolute values of most parameters in the convolutional layers approach 0, while the absolute values of a few parameters are significantly larger and keep growing. Based on this Min-Max property, which is formulated and analyzed from the perspective of random distributions, we further build a new defense model against adversarial examples to improve adversarial robustness. An advantage of the built model is that it performs better than the standard one and can be combined with adversarial training for further improvement. Experiments confirm that, in terms of the average accuracy over all classes, our model is almost the same as the existing models when no attack occurs and better than the existing models when an attack occurs. In particular, in terms of the accuracy of the protected classes, the proposed model is much better than the existing models when an attack occurs.
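The abstract describes two ingredients: class-dependent costs that prioritize protected classes during adversarial training, and the Min-Max property of convolutional-layer weights. The sketch below illustrates one plausible way to set this up in PyTorch; the PGD settings, the per-class weighting scheme, and the conv_weight_stats helper are illustrative assumptions for exposition, not the paper's exact formulation or defense model.

```python
# Minimal sketch (assumptions, not the paper's exact method): cost-sensitive
# adversarial training, plus a rough inspection of the Min-Max property of
# convolutional-layer weights.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: ascend the loss and project back into the eps-ball."""
    x0 = x.clone().detach()
    x_adv = (x0 + torch.empty_like(x0).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x0 + (x_adv - x0).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def train_step(model, optimizer, x, y, protected, protected_cost=5.0,
               num_classes=10):
    """One cost-sensitive adversarial training step: classes listed in
    `protected` receive a larger loss weight, so errors on them cost more."""
    weights = torch.ones(num_classes, device=x.device)
    weights[list(protected)] = protected_cost

    model.eval()
    x_adv = pgd_attack(model, x, y)   # attack the current model
    model.train()

    # Weighted loss on both clean and adversarial examples.
    loss = (F.cross_entropy(model(x), y, weight=weights)
            + F.cross_entropy(model(x_adv), y, weight=weights))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def conv_weight_stats(model, tiny=1e-3):
    """Rough check of the Min-Max property: report the fraction of near-zero
    convolutional weights and the largest weight magnitude per layer."""
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            w = module.weight.detach().abs().flatten()
            frac_small = (w < tiny).float().mean().item()
            print(f"{name}: {frac_small:.1%} of |w| < {tiny}, "
                  f"max |w| = {w.max().item():.4f}")
```

In this sketch, increasing protected_cost biases training toward keeping the protected classes correct under attack, and conv_weight_stats only visualizes whether most convolutional weights shrink toward zero while a few grow large; the paper's actual cost assignment and defense construction follow its own Min-Max analysis.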
