Article

Deep Subdomain Adaptation Network for Image Classification

Journal

IEEE Transactions on Neural Networks and Learning Systems
Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2020.2988928

Keywords

Task analysis; Adaptation models; Kernel; Feature extraction; Learning systems; Semantics; Training; Domain adaptation; fine grained; subdomain

Funding

  1. National Key Research and Development Program of China [2018YFB1004300]
  2. National Natural Science Foundation of China [U1836206, U1811461, 61773361]
  3. Project of Youth Innovation Promotion Association CAS [2017146]

This study introduces a deep subdomain adaptation network (DSAN) that aligns relevant subdomain distributions across different domains based on the local maximum mean discrepancy (LMMD). DSAN is simple but effective, does not require adversarial training, and converges quickly. It can be easily integrated into feedforward network models to achieve efficient adaptation via backpropagation.
For a target task where labeled data are unavailable, domain adaptation can transfer a learner from a different source domain. Previous deep domain adaptation methods mainly learn a global domain shift, i.e., they align the global source and target distributions without considering the relationships between subdomains that share the same category across domains, leading to unsatisfactory transfer performance because fine-grained information is not captured. Recently, increasing attention has been paid to subdomain adaptation, which focuses on accurately aligning the distributions of the relevant subdomains. However, most such methods are adversarial, involve several loss functions, and converge slowly. Motivated by this, we present a deep subdomain adaptation network (DSAN) that learns a transfer network by aligning the relevant subdomain distributions of domain-specific layer activations across domains based on a local maximum mean discrepancy (LMMD). DSAN is simple but effective: it requires no adversarial training and converges quickly. Adaptation can be achieved with most feedforward network models by extending them with the LMMD loss, which can be trained efficiently via backpropagation. Experiments demonstrate that DSAN achieves remarkable results on both object recognition and digit classification tasks. Our code will be available at https://github.com/easezyc/deep-transfer-learning.
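To make the LMMD idea concrete, the sketch below shows one common way such a loss is computed: a class-weighted MMD in which each source sample is weighted by its one-hot label and each target sample by its soft (pseudo-label) prediction, so that only samples of the same category are aligned. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the Gaussian kernel, the single bandwidth `sigma`, and the function names are assumptions for exposition.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and rows of B.
    sq_dists = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq_dists / (2.0 * sigma**2))

def lmmd(Xs, Ys, Xt, Yt_prob, sigma=1.0):
    """Local MMD: class-weighted MMD between source and target features.

    Xs, Xt  : (ns, d) and (nt, d) feature matrices.
    Ys      : (ns, C) one-hot source labels.
    Yt_prob : (nt, C) soft target predictions (pseudo-labels).
    """
    num_classes = Ys.shape[1]
    # Normalize weights so each class column sums to 1 (guard against empty classes).
    Ws = Ys / np.maximum(Ys.sum(0, keepdims=True), 1e-12)            # (ns, C)
    Wt = Yt_prob / np.maximum(Yt_prob.sum(0, keepdims=True), 1e-12)  # (nt, C)
    Kss = gaussian_kernel(Xs, Xs, sigma)
    Ktt = gaussian_kernel(Xt, Xt, sigma)
    Kst = gaussian_kernel(Xs, Xt, sigma)
    loss = 0.0
    for c in range(num_classes):
        ws, wt = Ws[:, c], Wt[:, c]
        # Squared MMD between the class-c subdomains, in kernel form.
        loss += ws @ Kss @ ws + wt @ Ktt @ wt - 2.0 * ws @ Kst @ wt
    return loss / num_classes
```

When source and target subdomains coincide the loss is zero, and it grows as the per-class distributions drift apart; in a deep model this scalar would simply be added to the classification loss and backpropagated.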

Authors


Reviews

Primary Rating

4.7

