3.8 Proceedings Paper

Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00393

Keywords

-

Funding

  1. National Natural Science Foundation of China [61806039, 62073059, 61832001]
  2. Sichuan Science and Technology Program [2020YFG0080]


This paper introduces a cross-domain gradient discrepancy minimization method that improves the accuracy on target samples by reducing the gradient discrepancy between source samples and target samples. Experimental results show that this method outperforms many previous state-of-the-art approaches on three widely used UDA datasets.
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain. Recently, adversarial domain adaptation with two distinct classifiers (bi-classifier) has been introduced into UDA and is effective at aligning distributions between different domains. Previous bi-classifier adversarial learning methods focus only on the similarity between the outputs of the two distinct classifiers. However, output similarity alone cannot guarantee the accuracy of target samples, i.e., target samples may be matched to wrong categories even if the discrepancy between the two classifiers is small. To address this issue, in this paper, we propose a cross-domain gradient discrepancy minimization (CGDM) method which explicitly minimizes the discrepancy between the gradients generated by source samples and target samples. Specifically, the gradient gives a cue to the semantic information of target samples, so it can serve as a good supervision signal to improve the accuracy of target samples. In order to compute the gradient signal of target samples, we further obtain target pseudo labels through clustering-based self-supervised learning. Extensive experiments on three widely used UDA datasets show that our method surpasses many previous state-of-the-art approaches.
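The core idea, minimizing the discrepancy between source-sample and target-sample gradients, can be sketched in a minimal form. The example below assumes a plain linear softmax classifier, cosine distance as the discrepancy measure, and already-computed target pseudo labels; all function names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over class dimension
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_gradient(W, X, y_onehot):
    """Gradient of mean cross-entropy loss w.r.t. linear classifier weights W."""
    probs = softmax(X @ W)                      # (n, C) class probabilities
    return X.T @ (probs - y_onehot) / len(X)    # (d, C) gradient matrix

def gradient_discrepancy(W, Xs, ys, Xt, yt_pseudo):
    """Cosine distance between flattened source and target gradients.

    Returns 0 when the gradients point in the same direction,
    up to 2 when they are directly opposed.
    """
    gs = ce_gradient(W, Xs, ys).ravel()         # gradient from labeled source batch
    gt = ce_gradient(W, Xt, yt_pseudo).ravel()  # gradient from pseudo-labeled target batch
    cos = gs @ gt / (np.linalg.norm(gs) * np.linalg.norm(gt) + 1e-12)
    return 1.0 - cos

# toy example with random features and labels
rng = np.random.default_rng(0)
d, C = 4, 3
W = rng.normal(size=(d, C))
Xs = rng.normal(size=(8, d)); ys = np.eye(C)[rng.integers(0, C, 8)]
Xt = rng.normal(size=(8, d)); yt = np.eye(C)[rng.integers(0, C, 8)]
print(gradient_discrepancy(W, Xs, ys, Xt, yt))
```

In training, this discrepancy would be added to the usual classification and adversarial losses, so that updating on target pseudo labels pushes the model in a direction consistent with the source supervision.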

Authors


Reviews

Primary rating

3.8
Insufficient ratings

Secondary ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommendations

No data available