4.7 Article

Stochastic Gradient Perturbation: An Implicit Regularizer for Person Re-Identification

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCSVT.2023.3261333

Keywords

Perturbation methods; Stochastic processes; Data models; Training; Optimization; Semantics; Regularization; person ReID; generalization; adversarial robustness

This paper presents a regularizer for person re-identification models inspired by Adversarial Training. A novel implicit regularizer, named Stochastic Gradient Perturbation (SGP), is proposed to improve the diversity of perturbations, reduce computational cost, and overcome the optimization dilemma between adversarial robustness and accuracy. Experiments demonstrate that SGP achieves strong performance in both generalization and adversarial robustness for person re-identification.
Generalization of person re-identification (ReID) models plays an important role in practical applications, and we propose a simple yet effective regularizer to improve it, inspired by Adversarial Training (AT). AT has been shown to act as a strong regularizer owing to its adversarial mechanism, its ability to mine hard samples, and its nature as data augmentation. However, as an augmentation-based regularizer, AT suffers from low diversity of perturbations, excessive computational cost, and an optimization dilemma between adversarial robustness and accuracy on the ReID task, and is thus suboptimal. To tackle these limitations and obtain a more effective regularizer for ReID, we rethink the nature of AT and reveal that adversarial data augmentation is essentially reflected in the gradients. Based on this, a novel implicit regularizer, named Stochastic Gradient Perturbation (SGP), is proposed, which naturally brings three merits: 1) better diversity of perturbations, since it uses non-directional stochastic perturbations rather than directional adversarial perturbations; 2) lower computational cost, since it performs implicit gradient augmentation rather than explicitly generating additional data; 3) the optimization dilemma between adversarial robustness and generalization is naturally overcome, since SGP subsumes the adversarial gradient perturbation. Further, we put forward the perspective that generalization and adversarial robustness may share an inherent unity. Experiments on baseline and state-of-the-art models demonstrate the strong performance of the plug-and-play SGP, with both generalization and adversarial robustness guaranteed.
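
To make the idea of gradient-level (rather than input-level) perturbation concrete, below is a minimal sketch of one training step, assuming SGP amounts to injecting zero-mean random noise into the parameter gradients before the optimizer step instead of crafting adversarial examples on the input data. The PyTorch setup, the Gaussian noise distribution, and the noise_std scale are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


def sgp_training_step(model: nn.Module, optimizer, images, labels, criterion,
                      noise_std: float = 0.01):
    """One training step with a stochastic perturbation added to the gradients.

    Illustrative sketch only: it assumes SGP reduces to adding zero-mean
    Gaussian noise to each parameter gradient before the optimizer step.
    The noise distribution and the `noise_std` scale are assumptions,
    not the paper's exact formulation.
    """
    model.train()
    optimizer.zero_grad()

    loss = criterion(model(images), labels)  # e.g. an ID classification loss for ReID
    loss.backward()

    # Perturb gradients with non-directional (random) noise instead of
    # generating directional adversarial examples as explicit extra data.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(noise_std * p.grad.norm() * torch.randn_like(p.grad))

    optimizer.step()
    return loss.item()
```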
