Proceedings Paper

Adversarial Robustness under Long-Tailed Distribution

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00855

Keywords

-

Funding

  1. GRF [14203518, ITS/431/18FX]
  2. CUHK [TS1712093]
  3. NTU NAP
  4. Shanghai Committee of Science and Technology, China [20DZ1100800]
  5. A*STAR through the Industry Alignment Fund - Industry Collaboration Projects Grant


This study investigates adversarial vulnerability and defense mechanisms under long-tailed distributions, revealing the negative impacts of imbalanced data and proposing a new framework, RoBal, with two dedicated modules to enhance adversarial robustness. Experimental results show the superiority of the proposed approach over state-of-the-art defense methods, making it a significant step towards real-world robustness.
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks. However, existing works on adversarial robustness mainly focus on balanced datasets, while real-world data usually exhibits a long-tailed distribution. To push adversarial robustness towards more realistic scenarios, in this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions. In particular, we first reveal the negative impacts induced by imbalanced data on both recognition performance and adversarial robustness, uncovering the intrinsic challenges of this problem. We then perform a systematic study on existing long-tailed recognition methods in conjunction with the adversarial training framework. Several valuable observations are obtained: 1) natural accuracy is relatively easy to improve, 2) fake gain of robust accuracy exists under unreliable evaluation, and 3) boundary error limits the promotion of robustness. Inspired by these observations, we propose a clean yet effective framework, RoBal, which consists of two dedicated modules, a scale-invariant classifier and data re-balancing via both margin engineering at training stage and boundary adjustment during inference. Extensive experiments demonstrate the superiority of our approach over other state-of-the-art defense methods. To our best knowledge, we are the first to tackle adversarial robustness under long-tailed distributions, which we believe would be a significant step towards real-world robustness. Our code is available at: https://github.com/wutong16/Adversarial_Long-Tail.
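The two ideas named in the abstract can be illustrated in a minimal NumPy sketch. This is a hedged illustration of the general techniques, not the paper's exact implementation: the function names, the scale constant, and the temperature `tau` are assumptions introduced here for clarity. A scale-invariant classifier scores inputs by the cosine of the angle between features and class weights, so logit magnitudes cannot be inflated by simply growing feature norms; inference-time boundary adjustment shifts each class logit by the log of its training prior, moving decision boundaries away from tail classes.

```python
import numpy as np

def cosine_logits(features, weights, scale=16.0):
    # Scale-invariant (cosine) classifier sketch: logits depend only on
    # the angle between feature and class-weight vectors, not on their
    # magnitudes. `scale` is a hypothetical temperature constant.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T

def boundary_adjusted_logits(logits, class_prior, tau=1.0):
    # Inference-time re-balancing sketch: subtracting tau * log(prior)
    # penalizes head classes (large prior) and favors tail classes
    # (small prior), shifting the decision boundary accordingly.
    return logits - tau * np.log(class_prior)

# Scale invariance: rescaling the features leaves the logits unchanged.
x = np.random.randn(4, 8)
W = np.random.randn(10, 8)
assert np.allclose(cosine_logits(x, W), cosine_logits(3.0 * x, W))
```

The invariance check at the end is the key property: under a cosine classifier, an adversary (or an imbalanced training signal) cannot raise a class score by scaling feature magnitudes alone.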

