Article

Unsupervised domain adaptation via distilled discriminative clustering

Journal

PATTERN RECOGNITION
Volume 127, Article 108638

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.108638

Keywords

Deep learning; Unsupervised domain adaptation; Image classification; Knowledge distillation; Deep discriminative clustering; Implicit domain alignment

Funding

  1. National Natural Science Foundation of China [61771201]
  2. Program for Guangdong Introducing Innovative and Entrepreneurial Teams [2017ZT07X183]
  3. Guangdong R&D key project of China [2019B010155001]

Summary

Unsupervised domain adaptation tackles the problem of classifying data in an unlabeled target domain given labeled source-domain data. This paper introduces a method called DisClusterDA, which formulates domain adaptation as discriminative clustering of target data and distills discriminative information from the labeled source data via joint training. Experimental results demonstrate that DisClusterDA outperforms existing methods on several benchmark datasets.

Abstract

Unsupervised domain adaptation addresses the problem of classifying data in an unlabeled target domain, given labeled source domain data that share a common label space but follow a different distribution. Most recent methods take the approach of explicitly aligning feature distributions between the two domains. In contrast, motivated by the fundamental assumption underlying domain adaptability, we recast the domain adaptation problem as discriminative clustering of target data, given strong privileged information provided by the closely related, labeled source data. Technically, we use clustering objectives based on a robust variant of entropy minimization that adaptively filters target data, a soft Fisher-like criterion, and, additionally, cluster ordering via centroid classification. To distill discriminative source information for target clustering, we propose to jointly train the network using parallel, supervised learning objectives over labeled source data. We term our method distilled discriminative clustering for domain adaptation, or DisClusterDA. We also give geometric intuition illustrating how the constituent objectives of DisClusterDA help learn class-wise pure, compact feature distributions. We conduct careful ablation studies and extensive experiments on five popular benchmark datasets, including a multi-source domain adaptation benchmark. Based on commonly used backbone networks, DisClusterDA outperforms existing methods on these benchmarks. Interestingly, we observe that within our DisClusterDA framework, adding a loss term that explicitly learns to align class-level feature distributions across domains harms adaptation performance, though more careful studies in different algorithmic frameworks remain to be conducted. © 2022 Elsevier Ltd. All rights reserved.
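
To make the abstract's components concrete, below is a minimal PyTorch sketch of what a DisClusterDA-style joint training step might look like. This is only our illustrative reading of the abstract, not the authors' published implementation: every function name, the confidence threshold, the temperature, the centroid handling, and the loss weights (w_ent, w_fis, w_cen) are assumptions.

# Hedged sketch of a DisClusterDA-style training step (PyTorch).
# Illustrative reading of the abstract only; all names, thresholds, and
# loss weights below are our assumptions, not the authors' formulation.
import torch
import torch.nn.functional as F


def robust_entropy_loss(tgt_logits, conf_threshold=0.9):
    # Entropy minimization over target predictions, adaptively filtering
    # low-confidence samples (one plausible reading of the "robust variant").
    probs = F.softmax(tgt_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    keep = (probs.max(dim=1).values >= conf_threshold).float()
    return (entropy * keep).sum() / keep.sum().clamp_min(1.0)


def soft_fisher_loss(tgt_feat, tgt_logits, centroids, sep_weight=0.1):
    # Soft Fisher-like criterion: shrink within-cluster scatter (features
    # pulled toward their softly assigned centroids) relative to the
    # spread between class centroids.
    probs = F.softmax(tgt_logits, dim=1)          # soft cluster assignments
    within = (probs * torch.cdist(tgt_feat, centroids) ** 2).sum(1).mean()
    c = centroids.size(0)
    between = (torch.cdist(centroids, centroids) ** 2).sum() / (c * (c - 1))
    return within - sep_weight * between


def centroid_classification_loss(tgt_feat, tgt_logits, centroids, temp=1.0):
    # "Cluster ordering via centroid classification": score target features
    # against per-class centroids and match the network's own predictions,
    # tying cluster indices to class labels.
    cen_logits = -torch.cdist(tgt_feat, centroids) / temp
    return F.kl_div(F.log_softmax(cen_logits, dim=1),
                    F.softmax(tgt_logits, dim=1), reduction="batchmean")


def disclusterda_step(model, src_x, src_y, tgt_x, centroids, opt,
                      w_ent=1.0, w_fis=0.1, w_cen=0.1):
    # Joint step: the parallel supervised source objective "distills"
    # discriminative information into the shared network while the
    # clustering objectives act on the unlabeled target batch.
    opt.zero_grad()
    _, src_logits = model(src_x)
    tgt_feat, tgt_logits = model(tgt_x)
    loss = (F.cross_entropy(src_logits, src_y)
            + w_ent * robust_entropy_loss(tgt_logits)
            + w_fis * soft_fisher_loss(tgt_feat, tgt_logits, centroids)
            + w_cen * centroid_classification_loss(tgt_feat, tgt_logits,
                                                   centroids))
    loss.backward()
    opt.step()
    return loss.item()


class ToyNet(torch.nn.Module):
    # Stand-in for a backbone + classifier head returning (features, logits).
    def __init__(self, d_in=32, d_feat=16, n_cls=5):
        super().__init__()
        self.backbone = torch.nn.Linear(d_in, d_feat)
        self.head = torch.nn.Linear(d_feat, n_cls)

    def forward(self, x):
        feat = torch.relu(self.backbone(x))
        return feat, self.head(feat)


model = ToyNet()
centroids = torch.randn(5, 16)                    # per-class feature centroids
opt = torch.optim.SGD(model.parameters(), lr=0.01)
step_loss = disclusterda_step(model, torch.randn(8, 32),
                              torch.randint(0, 5, (8,)),
                              torch.randn(8, 32), centroids, opt)
print(f"joint loss: {step_loss:.4f}")

In a full implementation the class centroids would themselves be updated during training (for example, from soft-assigned target features or source class means); the sketch keeps them fixed for brevity.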
