Article

Attention-Based Multi-Source Domain Adaptation

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 30, Pages 3793-3803

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TIP.2021.3065254

Keywords

Correlation; Adaptation models; Feature extraction; Target recognition; Data models; Transfer learning; Visualization; Multi-source domain adaptation; attention-based multi-source domain adaptation; weighted moment distance

Funding

  1. National Key Research and Development Program of China [2018AAA0102205]
  2. National Natural Science Foundation of China [61902399, 61721004, U1836220, U1705262, 61832002, 61720106006]
  3. Beijing Natural Science Foundation [L201001]
  4. Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-JSC039]

Abstract

ABMSDA mitigates the negative effects caused by dissimilar domains by utilizing an attention mechanism and domain correlations, leading to improved performance in the target domain.
Multi-source domain adaptation (MSDA) aims to transfer knowledge from multiple source domains to one target domain. Inspired by single-source domain adaptation, existing methods solve MSDA by aligning the data distributions between the target domain and each source domain. However, aligning the target domain with a dissimilar source domain can harm representation learning. To address this issue, an intuitive strategy for MSDA is to use an attention mechanism to enhance the positive effects of similar domains and suppress the negative effects of dissimilar domains. We therefore propose Attention-Based Multi-Source Domain Adaptation (ABMSDA), which exploits domain correlations to alleviate the effects caused by dissimilar domains. To obtain the correlations between the source and target domains, ABMSDA first trains a domain recognition model to estimate the probability that each target image belongs to each source domain. Based on these domain correlations, a Weighted Moment Distance (WMD) is proposed to pay more attention to the source domains with higher similarities. Furthermore, an Attentive Classification Loss (ACL) is developed to constrain the feature extractor to generate aligned and discriminative visual representations. Evaluations on two benchmarks demonstrate the effectiveness of the proposed model, e.g., an average improvement of 6.1% on the challenging DomainNet dataset.
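The abstract describes the core mechanism: a domain recognition model yields correlations between the target and each source domain, which then weight a moment-matching alignment loss. A minimal PyTorch-style sketch of that idea follows; the function names, the softmax-averaged domain weights, and the use of batch-level first- and second-order feature moments are illustrative assumptions, not the authors' exact formulation of WMD or ACL.

import torch
import torch.nn.functional as F

def domain_weights(domain_logits):
    # Average per-image source-domain probabilities over a target batch.
    # domain_logits: (batch, num_sources) outputs of a domain recognition
    # model applied to target images; returns a (num_sources,) weight vector.
    return F.softmax(domain_logits, dim=1).mean(dim=0)

def weighted_moment_distance(source_feats, target_feats, weights):
    # source_feats: list of (n_i, d) feature tensors, one per source domain
    # target_feats: (m, d) target feature tensor
    # weights: (num_sources,) tensor from domain_weights()
    t_mean = target_feats.mean(dim=0)
    t_var = target_feats.var(dim=0, unbiased=False)
    dist = target_feats.new_zeros(())
    for w, s in zip(weights, source_feats):
        s_mean = s.mean(dim=0)
        s_var = s.var(dim=0, unbiased=False)
        # First- and second-moment gaps, down-weighted for dissimilar sources.
        dist = dist + w * ((s_mean - t_mean).pow(2).sum()
                           + (s_var - t_var).pow(2).sum())
    return dist

In this sketch, a source domain whose images the recognition model rarely associates with the target batch receives a small weight, so its moment gap contributes little to the alignment objective.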
