Article

Robust Local Preserving and Global Aligning Network for Adversarial Domain Adaptation

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2021.3112815

Keywords

Noise measurement; Feature extraction; Training; Robustness; Adaptation models; Loss measurement; Adversarial machine learning; Wasserstein distance; unsupervised domain adaptation; noisy label; representation learning; adversarial learning


This paper proposes a new method, RLPGA, for unsupervised domain adaptation. RLPGA improves robustness to label noise by learning a robust classifier and by constructing adjacency weight matrices. Empirical studies demonstrate the effectiveness of RLPGA.
Unsupervised domain adaptation (UDA) requires source domain samples with clean ground-truth labels during training. Accurately labeling a large number of source domain samples is time-consuming and laborious, so an alternative is to train on samples with noisy labels. However, training with noisy labels can greatly reduce the performance of UDA. In this paper, we address the problem of learning UDA models with access only to noisy labels and propose a novel method called the robust local preserving and global aligning network (RLPGA). RLPGA improves robustness to label noise in two ways. One is learning a classifier with a robust information-theoretic loss function. The other is constructing two adjacency weight matrices and two negative weight matrices via the proposed local preserving module to preserve the local topology structure of the input data. We conduct a theoretical analysis of the robustness of RLPGA and prove that the robust information-theoretic loss and the local preserving module help reduce the empirical risk on the target domain. A series of empirical studies shows the effectiveness of the proposed RLPGA.
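The local preserving idea above rests on adjacency weight matrices: a positive matrix that weights nearby sample pairs and a negative matrix that marks non-neighbor pairs. A minimal NumPy sketch of this construction follows; the k-nearest-neighbor rule, the Gaussian kernel, and the function name `knn_adjacency` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_adjacency(X, k=2, sigma=1.0):
    """Build a positive adjacency weight matrix over k nearest neighbours
    and a complementary negative weight matrix.

    Illustrative sketch only: the Gaussian kernel and the choice of k are
    assumptions, not RLPGA's exact recipe.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances between all samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W_pos = np.zeros((n, n))
    for i in range(n):
        # Indices of the k nearest neighbours of sample i (excluding itself).
        nn = np.argsort(d2[i])[1:k + 1]
        # Weight neighbours with a Gaussian kernel on their distance.
        W_pos[i, nn] = np.exp(-d2[i, nn] / (2.0 * sigma ** 2))
    W_pos = np.maximum(W_pos, W_pos.T)      # symmetrise the neighbour graph
    W_neg = (W_pos == 0).astype(float)      # mark non-neighbour pairs
    np.fill_diagonal(W_neg, 0.0)            # a sample is not its own negative
    return W_pos, W_neg
```

In a local preserving loss, `W_pos` would pull the representations of neighboring samples together while `W_neg` pushes non-neighbors apart, preserving the input-space topology in the learned feature space.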

