4.6 Article

Self-corrected unsupervised domain adaptation

Journal

FRONTIERS OF COMPUTER SCIENCE
Volume 16, Issue 5, Pages -

Publisher

HIGHER EDUCATION PRESS
DOI: 10.1007/s11704-021-1010-8

Keywords

unsupervised domain adaptation; adversarial learning; deep neural network; pseudo-labels; label corrector

Funding

  1. National Natural Science Foundation of China [61876091, 61772284]
  2. China Postdoctoral Science Foundation [2019M651918]
  3. Open Foundation of MIIT Key Laboratory of Pattern Analysis and Machine Intelligence

Abstract

This paper introduces a self-corrected unsupervised domain adaptation method, SCUDA, which uses a probabilistic label corrector to learn and correct the target labels directly. Unlike traditional UDA methods, SCUDA attempts to learn the target predictions end to end.
Unsupervised domain adaptation (UDA), which aims to use knowledge from a label-rich source domain to help learn an unlabeled target domain, has recently attracted much attention. UDA methods mainly concentrate on source classification and cross-domain distribution alignment in the expectation of obtaining correct target predictions. In this paper, we instead attempt to learn the target predictions directly in an end-to-end manner, and develop a Self-corrected unsupervised domain adaptation (SCUDA) method with probabilistic label correction. SCUDA adopts a probabilistic label corrector to learn and correct the target labels directly. Specifically, besides the model parameters, the target pseudo-labels are also updated during learning and corrected by an anchor variable, which preserves the class candidates for the samples. Experiments on real datasets show the competitiveness of SCUDA.
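
The abstract describes the corrector only at a high level. As a rough illustration, below is a minimal PyTorch sketch of one plausible pseudo-label correction step, assuming soft pseudo-labels per target sample and a binary anchor mask listing each sample's candidate classes; the function name, the masking scheme, and the momentum blend are illustrative assumptions, not the exact update rule used by SCUDA in the paper.

```python
import torch
import torch.nn.functional as F

def correct_pseudo_labels(logits, pseudo_labels, anchor_mask, momentum=0.9):
    """One hypothetical correction step for soft target pseudo-labels.

    All tensors have shape (N, C) for N target samples and C classes:
        logits:        current model predictions on the target batch
        pseudo_labels: soft pseudo-labels carried across training iterations
        anchor_mask:   binary mask of class candidates kept by the anchor variable
    """
    probs = F.softmax(logits, dim=1)
    # Keep only the candidate classes preserved by the anchor variable,
    # then renormalise so each row is a valid distribution again.
    masked = probs * anchor_mask
    masked = masked / masked.sum(dim=1, keepdim=True).clamp_min(1e-8)
    # Blend the previous pseudo-labels with the masked predictions
    # (an EMA-style update; the paper's exact rule may differ).
    updated = momentum * pseudo_labels + (1.0 - momentum) * masked
    return updated / updated.sum(dim=1, keepdim=True).clamp_min(1e-8)

# Toy usage: 4 target samples, 3 classes.
logits = torch.randn(4, 3)
pseudo = torch.full((4, 3), 1.0 / 3)           # start from uniform pseudo-labels
anchor = torch.tensor([[1., 1., 0.],
                       [1., 0., 1.],
                       [0., 1., 1.],
                       [1., 1., 1.]])           # candidate classes per sample
pseudo = correct_pseudo_labels(logits, pseudo, anchor)
```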

