Article

Multi-Source Contribution Learning for Domain Adaptation

Journal

IEEE Transactions on Neural Networks and Learning Systems
Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2021.3069982

Keywords

Feature extraction; Task analysis; Transfer learning; Learning systems; Training; Adaptation models; Visualization; Classification; Deep learning; Domain adaptation

Funding

  1. Australian Research Council [FL190100149]


Transfer learning techniques leverage knowledge from similar domains to tackle tasks in a target domain. The method proposed in this article simultaneously learns the similarities and diversities of domains to improve the transferability of latent features, aiming to enhance the performance of the final target predictor.
Transfer learning has become an attractive technique for tackling a task in a target domain by leveraging knowledge previously acquired from a similar (source) domain. Many existing transfer learning methods focus on learning one discriminator from a single source domain. Sometimes, knowledge from a single source domain is not enough to predict the target task, so multiple source domains carrying richer transferable information are considered. Although some previous studies deal with multi-source domain adaptation, they commonly combine source predictions by averaging source performances. However, different source domains contain different transferable information and may therefore contribute differently to a target domain. Hence, the contribution of each source should be taken into account when predicting a target task.

In this article, we propose a novel multi-source contribution learning method for domain adaptation (MSCLDA). The similarities and diversities of domains are learned simultaneously by extracting multi-view features: one view represents the features common to all domains (their similarities), while the remaining views represent characteristics of the target domain expressed by features extracted in individual source domains (their diversities). Multi-level distribution matching is then employed to improve the transferability of the latent features, reducing the misclassification of boundary samples by maximizing the discrepancy between different classes and minimizing the discrepancy within the same class.

When completing a target task by combining source predictions, instead of averaging the source predictions or weighting sources by normalized similarities, the weights obtained by normalizing the similarities between source and target domains are adjusted using pseudo target labels to increase the disparity among the weight values. This is intended to improve the performance of the final target predictor when the sources' predictions differ significantly. Experiments on real-world visual data sets demonstrate the superiority of the proposed method.
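The abstract's weighted combination of source predictions can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the fixed similarity scores, and the temperature-based sharpening (standing in for the paper's pseudo-label-based weight adjustment) are all assumptions for illustration only.

```python
import numpy as np

def combine_source_predictions(probs, sims, temp=0.5):
    """Fuse per-source class probabilities with similarity-based weights.

    probs: (S, N, C) array -- softmax outputs of S source classifiers
           on N target samples over C classes.
    sims:  (S,) array -- similarity of each source domain to the target
           (assumed precomputed; the paper learns these).
    temp:  sharpening temperature; values < 1 increase the disparity
           among weights (a stand-in for the pseudo-label adjustment).
    """
    w = np.asarray(sims, dtype=float)
    w = w / w.sum()            # normalized similarity weights
    w = w ** (1.0 / temp)      # sharpen: amplify weight disparities
    w = w / w.sum()            # renormalize to a convex combination
    # weighted average of the source predictions, shape (N, C)
    return np.tensordot(w, probs, axes=1)

# toy example: 2 sources, 3 target samples, 2 classes
p1 = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
p2 = np.array([[0.4, 0.6], [0.5, 0.5], [0.2, 0.8]])
fused = combine_source_predictions(np.stack([p1, p2]), sims=[0.7, 0.3])
pred = fused.argmax(axis=1)    # final target predictions
```

With `temp=0.5` the normalized weights [0.7, 0.3] are squared and renormalized, so the more similar source dominates the fusion more strongly than plain similarity weighting would allow, which is the disparity-increasing effect the abstract describes.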


