Article

Joint Clustering and Discriminative Feature Alignment for Unsupervised Domain Adaptation

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 30, Pages 7842-7855

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TIP.2021.3109530

Keywords

Feature extraction; Task analysis; Image reconstruction; Training; Image coding; Deep learning; Data mining; Domain adaptation; transfer learning; unsupervised learning; semi-supervised learning

Funding

  1. Academy of Finland [331883]
  2. National Natural Science Foundation of China [61872379, 62022091, 71701205]


The Joint Clustering and Discriminative Feature Alignment (JCDFA) approach proposed in this paper simultaneously mines discriminative features of target data and aligns cross-domain discriminative features to enhance performance in Unsupervised Domain Adaptation (UDA). The method integrates supervised classification of labeled source data with discriminative clustering of unlabeled target data, and aligns features across domains by optimizing a semi-supervised contrastive loss and a conditional Maximum Mean Discrepancy (MMD). Experimental results on real-world benchmarks demonstrate the superiority of JCDFA over state-of-the-art domain adaptation methods.
Unsupervised Domain Adaptation (UDA) aims to learn a classifier for the unlabeled target domain by leveraging knowledge from a labeled source domain with a different but related distribution. Many existing approaches learn a domain-invariant representation space by directly matching the marginal distributions of the two domains. However, they neglect to explore the underlying discriminative features of the target data and to align the cross-domain discriminative features, which may lead to suboptimal performance. To tackle these two issues simultaneously, this paper presents a Joint Clustering and Discriminative Feature Alignment (JCDFA) approach for UDA, which naturally unifies the mining of discriminative features and the alignment of class-discriminative features in a single framework. Specifically, in order to mine the intrinsic discriminative information of the unlabeled target data, JCDFA jointly learns a shared encoding representation for two tasks: supervised classification of labeled source data, and discriminative clustering of unlabeled target data, where the classification of the source domain can guide the clustering learning of the target domain to locate the object category. We then conduct the cross-domain discriminative feature alignment by separately optimizing two new metrics: 1) an extended supervised contrastive learning, i.e., semi-supervised contrastive learning, and 2) an extended Maximum Mean Discrepancy (MMD), i.e., conditional MMD, which explicitly minimizes the intra-class dispersion and maximizes the inter-class separability. When these two procedures, i.e., discriminative feature mining and alignment, are integrated into one framework, they tend to benefit from each other to enhance the final performance from a cooperative learning perspective. Experiments are conducted on four real-world benchmarks (Office-31, ImageCLEF-DA, Office-Home and VisDA-C).
All the results demonstrate that JCDFA obtains remarkable margins over state-of-the-art domain adaptation methods. Comprehensive ablation studies also verify the importance of each key component of the proposed algorithm and the effectiveness of combining the two learning strategies into one framework.
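The conditional MMD mentioned in the abstract compares source and target features class by class, pairing source samples with target samples of the same (pseudo-)label rather than matching the marginal distributions. A minimal sketch of this idea is shown below; it is an illustration under assumed choices (an RBF kernel and a simple per-class average), not the authors' implementation, and the function names are hypothetical.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows, then Gaussian kernel.
    sq = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2.0 * x @ y.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2(xs, xt, sigma=1.0):
    # Biased (V-statistic) estimate of squared MMD; non-negative by construction.
    return (rbf_kernel(xs, xs, sigma).mean()
            + rbf_kernel(xt, xt, sigma).mean()
            - 2.0 * rbf_kernel(xs, xt, sigma).mean())

def conditional_mmd(src_feats, src_labels, tgt_feats, tgt_pseudo_labels,
                    num_classes, sigma=1.0):
    # Per-class MMD: source samples of class c vs. target samples whose
    # pseudo-label is c, averaged over the classes present in both domains.
    total, matched = 0.0, 0
    for c in range(num_classes):
        xs = src_feats[src_labels == c]
        xt = tgt_feats[tgt_pseudo_labels == c]
        if len(xs) and len(xt):
            total += mmd2(xs, xt, sigma)
            matched += 1
    return total / max(matched, 1)
```

Minimizing such a class-conditional discrepancy pulls same-class features from the two domains together, which is one way to realize the intra-class alignment the abstract describes; the target pseudo-labels would come from the clustering branch.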

