Article

Unsupervised domain adaptation via distilled discriminative clustering

Journal

PATTERN RECOGNITION
Volume 127, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.108638

Keywords

Deep learning; Unsupervised domain adaptation; Image classification; Knowledge distillation; Deep discriminative clustering; Implicit domain alignment

Funding

  1. National Natural Science Foundation of China [61771201]
  2. Program for Guangdong Introducing Innovative and Entrepreneurial Teams [2017ZT07X183]
  3. Guangdong R&D key project of China [2019B010155001]

Abstract

Unsupervised domain adaptation tackles the issue of classifying data in an unlabeled target domain while having labeled source domain data. This paper introduces a novel method called DisClusterDA, which formulates the domain adaptation problem as discriminative clustering and utilizes source data for joint training. Experimental results demonstrate that DisClusterDA outperforms existing methods on several benchmark datasets.
Unsupervised domain adaptation addresses the problem of classifying data in an unlabeled target domain, given labeled source domain data that share a common label space but follow a different distribution. Most of the recent methods take the approach of explicitly aligning feature distributions between the two domains. Differently, motivated by the fundamental assumption for domain adaptability, we recast the domain adaptation problem as discriminative clustering of target data, given strong privileged information provided by the closely related, labeled source data. Technically, we use clustering objectives based on a robust variant of entropy minimization that adaptively filters target data, a soft Fisher-like criterion, and additionally the cluster ordering via centroid classification. To distill discriminative source information for target clustering, we propose to jointly train the network using parallel, supervised learning objectives over labeled source data. We term our method of distilled discriminative clustering for domain adaptation as DisClusterDA. We also give geometric intuition that illustrates how constituent objectives of DisClusterDA help learn class-wisely pure, compact feature distributions. We conduct careful ablation studies and extensive experiments on five popular benchmark datasets, including a multi-source domain adaptation one. Based on commonly used backbone networks, DisClusterDA outperforms existing methods on these benchmarks. It is also interesting to observe that in our DisClusterDA framework, adding an additional loss term that explicitly learns to align class-level feature distributions across domains does harm to the adaptation performance, though more careful studies in different algorithmic frameworks are to be conducted. (c) 2022 Elsevier Ltd. All rights reserved.
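To make the abstract's "robust variant of entropy minimization that adaptively filters target data" concrete, the sketch below shows one plausible form of such an objective in NumPy: per-sample prediction entropy is minimized only over target samples whose predictions are sufficiently confident. The confidence-based filter and its threshold are illustrative assumptions, not DisClusterDA's published formulation.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def filtered_entropy_loss(logits, conf_threshold=0.5):
    """Entropy-minimization objective with adaptive sample filtering.

    Illustrative sketch only: the max-probability filter and the
    threshold value are assumptions, not the paper's exact rule.
    """
    p = softmax(logits)                              # (N, C) class posteriors
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)   # per-sample entropy
    keep = p.max(axis=1) >= conf_threshold           # drop ambiguous targets
    if not keep.any():
        return 0.0                                   # nothing confident enough
    return float(entropy[keep].mean())
```

Minimizing such a term pushes the network toward confident, low-entropy predictions on target data, which is the discriminative-clustering effect the abstract describes; ambiguous samples are excluded so they do not get forced into arbitrary clusters early in training.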
