Journal
FRONTIERS OF COMPUTER SCIENCE
Volume 16, Issue 5, Pages -
Publisher
HIGHER EDUCATION PRESS
DOI: 10.1007/s11704-021-1010-8
Keywords
unsupervised domain adaptation; adversarial learning; deep neural network; pseudo-labels; label corrector
Funding
- National Natural Science Foundation of China [61876091, 61772284]
- China Postdoctoral Science Foundation [2019M651918]
- Open Foundation of MIIT Key Laboratory of Pattern Analysis and Machine Intelligence
This paper introduces a self-corrected unsupervised domain adaptation method called SCUDA, which uses a probabilistic label corrector to learn and correct target labels directly. Unlike traditional UDA methods, SCUDA attempts to learn the target prediction end to end.
Unsupervised domain adaptation (UDA), which aims to use knowledge from a label-rich source domain to help learn an unlabeled target domain, has recently attracted much attention. Existing UDA methods mainly concentrate on source classification and cross-domain distribution alignment, expecting correct target predictions to follow. In this paper, by contrast, we attempt to learn the target prediction directly and end to end, and develop a Self-corrected unsupervised domain adaptation (SCUDA) method with probabilistic label correction. SCUDA adopts a probabilistic label corrector to learn and correct the target labels directly. Specifically, besides the model parameters, the target pseudo-labels are also updated during learning and corrected by an anchor variable, which preserves the candidate classes for each sample. Experiments on real-world datasets demonstrate the competitiveness of SCUDA.
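The abstract's idea of updating pseudo-labels while an anchor variable preserves each sample's candidate classes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the function name `correct_pseudo_labels`, the top-k anchor construction, and the momentum blend are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def correct_pseudo_labels(probs, pseudo, anchor_k=3, momentum=0.7):
    """Hypothetical sketch of probabilistic pseudo-label correction.

    probs    : (n, c) current model predictions (softmax probabilities).
    pseudo   : (n, c) running pseudo-label distributions from earlier steps.
    anchor_k : the anchor keeps each sample's top-k candidate classes;
               probability mass outside the anchor set is zeroed out.
    momentum : fraction of the old pseudo-label distribution to retain.
    """
    n, c = probs.shape
    # Anchor variable: boolean mask preserving the top-k class candidates
    # of the current pseudo-label distribution for every sample.
    topk = np.argsort(pseudo, axis=1)[:, -anchor_k:]
    anchor = np.zeros_like(pseudo, dtype=bool)
    anchor[np.arange(n)[:, None], topk] = True
    # Blend old pseudo-labels with new predictions, restrict the result
    # to the anchor's candidate classes, then renormalize to a distribution.
    blended = momentum * pseudo + (1.0 - momentum) * probs
    blended = np.where(anchor, blended, 0.0)
    return blended / blended.sum(axis=1, keepdims=True)
```

In this sketch the pseudo-labels are updated alongside the model (here, once per call) rather than fixed up front, while the anchor prevents the correction from drifting to classes the sample was never a plausible candidate for.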
Authors