Article

Deep Ladder-Suppression Network for Unsupervised Domain Adaptation

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 52, Issue 10, Pages 10735-10749

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2021.3065247

Keywords

Measurement; Task analysis; Image reconstruction; Decoding; Feature extraction; Neural networks; Electronic mail; Autoencoder; deep convolutional neural network (DCNN); deep learning; domain adaptation (DA); transfer learning; unsupervised learning

Funding

  1. National Natural Science Foundation of China [61872379]
  2. Hunan Science and Technology Plan Project [2019GK2131]
  3. Hunan Provincial Natural Science Foundation of China [2018JJ3613]
  4. Academy of Finland [331883]

Abstract

Unsupervised domain adaptation (UDA) aims at learning a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. Most existing approaches learn domain-invariant features by adapting all of the information in the images. However, forcing adaptation of domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel yet elegant module, called the deep ladder-suppression network (DLSN), which is designed to better learn the cross-domain shared content by suppressing domain-specific variations. Our proposed DLSN is an autoencoder with lateral connections from the encoder to the decoder. With this design, the domain-specific details, which are only necessary for reconstructing the unlabeled target data, are fed directly to the decoder to complete the reconstruction task, relieving the pressure of learning domain-specific variations at the later layers of the shared encoder. As a result, DLSN allows the shared encoder to focus on learning cross-domain shared content and to ignore domain-specific variations. Notably, the proposed DLSN can be used as a standard module and integrated with various existing UDA frameworks to further boost performance. Without bells and whistles, extensive experimental results on four gold-standard domain adaptation datasets, namely: 1) Digits; 2) Office31; 3) Office-Home; and 4) VisDA-C, demonstrate that the proposed DLSN can consistently and significantly improve the performance of various popular UDA frameworks.
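
To make the architectural idea concrete, the following is a minimal PyTorch sketch of an autoencoder with a lateral connection from an early encoder layer to the decoder, in the spirit of the ladder-suppression design described in the abstract. The class name LadderSuppressionAE, the fully connected layout, and the layer sizes are illustrative assumptions and do not reproduce the authors' implementation.

# Hedged sketch: autoencoder with a lateral (skip) connection from the
# encoder to the decoder, so reconstruction detail bypasses the later,
# shared encoder layers. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class LadderSuppressionAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # early, detail-rich features
        self.enc2 = nn.Sequential(nn.Linear(hidden, latent), nn.ReLU())  # shared (domain-invariant) code
        self.dec1 = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU())
        # The last decoder layer also consumes the lateral copy of enc1's output,
        # so domain-specific detail need not be carried through the shared code.
        self.dec2 = nn.Linear(hidden * 2, in_dim)

    def forward(self, x):
        h1 = self.enc1(x)
        z = self.enc2(h1)
        d1 = self.dec1(z)
        recon = self.dec2(torch.cat([d1, h1], dim=1))  # lateral connection
        return z, recon


# Toy usage: reconstruction loss on (synthetic) unlabeled target data.
if __name__ == "__main__":
    model = LadderSuppressionAE()
    x_target = torch.randn(8, 784)
    z, recon = model(x_target)
    recon_loss = nn.functional.mse_loss(recon, x_target)
    print(z.shape, recon.shape, recon_loss.item())

In a full UDA pipeline one would additionally attach a classifier head and an alignment objective to the shared code z for the labeled source data; the sketch above only illustrates how a lateral connection lets the reconstruction path absorb domain-specific detail.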
