Article

Aggregating Randomized Clustering-Promoting Invariant Projections for Domain Adaptation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2018.2832198

Keywords

Unsupervised domain adaptation; domain-invariant projection; class clustering; sampling-and-fusion

Funding

  1. State Key Development Program [2016YFB1001001]
  2. National Natural Science Foundation of China [61622310, 61473289]

Abstract

Unsupervised domain adaptation aims to leverage labeled source data to learn from unlabeled target data. Previous transductive methods tackle it by iteratively seeking a low-dimensional projection to extract invariant features and obtaining pseudo target labels via a classifier built on the source data. However, they merely concentrate on minimizing the cross-domain distribution divergence, while ignoring the intra-domain structure, especially in the target domain. Even after projection, risk factors such as imbalanced data distribution may still hinder target label inference. In this paper, we propose a simple yet effective domain-invariant projection ensemble approach that tackles these two issues together. Specifically, we seek the optimal projection via a novel relaxed domain-irrelevant clustering-promoting term that jointly bridges the cross-domain semantic gap and increases the intra-class compactness in both domains. To further enhance target label inference, we develop a 'sampling-and-fusion' framework, under which multiple projections are independently learned from various randomized coupled domain subsets. Aggregating models such as majority voting are then used to combine the projections and classify the unlabeled target data. Extensive experimental results on six visual benchmarks, including object, face, and digit images, demonstrate that the proposed methods gain remarkable margins over state-of-the-art unsupervised domain adaptation methods.
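The sampling-and-fusion idea described above can be sketched in a few lines: learn one projection per randomized coupled subset of the two domains, classify the target data under each projection, and fuse the predictions by majority vote. The sketch below is illustrative only and does not implement the paper's clustering-promoting objective; the projection learner is replaced by a plain PCA stand-in, and the names `learn_projection` and `sampling_and_fusion` are hypothetical.

```python
import numpy as np

def learn_projection(Xs, Xt, dim):
    # Stand-in for the paper's clustering-promoting invariant projection:
    # here, plain PCA on the pooled source + target subset (an assumption,
    # not the authors' objective).
    X = np.vstack([Xs, Xt])
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim].T  # projection matrix of shape (d, dim)

def sampling_and_fusion(Xs, ys, Xt, n_models=5, ratio=0.8, dim=2, seed=0):
    # Learn multiple projections on randomized coupled domain subsets,
    # classify the full target set under each, and majority-vote the labels.
    rng = np.random.default_rng(seed)
    classes = np.unique(ys)
    votes = np.zeros((len(Xt), len(classes)), dtype=int)
    for _ in range(n_models):
        # Randomized coupled subsets of both domains.
        si = rng.choice(len(Xs), int(ratio * len(Xs)), replace=False)
        ti = rng.choice(len(Xt), int(ratio * len(Xt)), replace=False)
        P = learn_projection(Xs[si], Xt[ti], dim)
        Zs, Zt = Xs[si] @ P, Xt @ P
        # Nearest source-class centroid in the projected space
        # (assumes every class appears in each subset).
        centroids = [Zs[ys[si] == c].mean(axis=0) for c in classes]
        dists = np.stack([np.linalg.norm(Zt - c, axis=1) for c in centroids],
                         axis=1)
        votes[np.arange(len(Xt)), dists.argmin(axis=1)] += 1
    # Fuse: each target sample takes the label with the most votes.
    return classes[votes.argmax(axis=1)]
```

Any per-model classifier and any aggregation rule (e.g., weighted voting) could be swapped in; majority voting over nearest-centroid predictions is just the simplest instance of the fusion step.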

