Article

Divergence-Agnostic Unsupervised Domain Adaptation by Adversarial Attacks

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3109287

Keywords

Adaptation models; Training; Feature extraction; Measurement; Data models; Neural networks; Semantics; Unsupervised domain adaptation; transfer learning; adversarial attacks; domain generalization; model adaptation

Funding

  1. National Natural Science Foundation of China [61806039, 62176042, 62073059]
  2. Sichuan Science and Technology Program [2020YFG0080, 2020YFG0481]
  3. CCF-Tencent Open Fund [RAGR20210107]
  4. CCF-Baidu Open Fund [2021PP15002000]

Abstract

This paper addresses the failure of conventional machine learning models to generalize to data drawn from distributions other than the training distribution, and proposes an unsupervised domain adaptation (UDA) method for the setting where either the source domain data or the target domain data is unavailable. Because one domain is missing, the divergence between the domains cannot be measured directly; the authors handle this divergence-agnostic adaptation problem from the perspective of adversarial attacks. Experiments show that the proposed method outperforms previous approaches.
Conventional machine learning algorithms suffer from the problem that a model trained on existing data fails to generalize well to data sampled from other distributions. To tackle this issue, unsupervised domain adaptation (UDA) transfers the knowledge learned from a well-labeled source domain to a different but related target domain where labeled data is unavailable. The majority of existing UDA methods assume that data from both the source domain and the target domain are available and complete during training, so that the divergence between the two domains can be formulated and minimized. In this paper, we consider a more practical yet challenging UDA setting where either the source domain data or the target domain data is unknown. Conventional UDA methods fail in this setting because the domain divergence is agnostic in the absence of the source data or the target data. Technically, we investigate UDA from a novel perspective, that of adversarial attacks, and tackle the divergence-agnostic adaptive learning problem in a unified framework. Specifically, we first motivate our approach by investigating the inherent relationship between UDA and adversarial attacks. We then elaborately design adversarial examples to attack the training model and harness these adversarial examples during training. We argue that if the model can defend against our attack, its generalization ability is significantly improved, which in turn improves performance on the target domain. Theoretically, we analyze the generalization bound of our method based on domain adaptation theories. Extensive experimental results on multiple UDA benchmarks under the conventional, source-absent, and target-absent UDA settings verify that our method achieves favorable performance compared with previous ones. Notably, this work extends the scope of both domain adaptation and adversarial attacks, and is expected to inspire more ideas in the community.
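For context, the "domain adaptation theories" the abstract alludes to typically build on the classic bound of Ben-David et al., in which the target error of a hypothesis $h$ is controlled by its source error, a divergence term, and the error $\lambda$ of the ideal joint hypothesis:

$$\epsilon_T(h) \le \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda.$$

This is the standard result, not necessarily the exact bound derived in the paper, but it makes the obstacle concrete: when either $\mathcal{D}_S$ or $\mathcal{D}_T$ is unobserved, the divergence term cannot be estimated, so it cannot be minimized directly.

The attack-and-defend idea can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration, not the paper's algorithm: it uses a generic FGSM-style perturbation as the attack and prediction consistency as the defense, and all names (`fgsm_perturb`, `adapt_step`, `eps`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, eps):
    # Hypothetical attack: perturb x in the direction that most changes the
    # model's prediction (the paper designs its own adversarial examples).
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                    reduction="batchmean")
    loss.backward()  # gradient w.r.t. the input, not the weights
    return (x + eps * x_adv.grad.sign()).detach()

def adapt_step(model, optimizer, x_batch, eps=0.03):
    # Hypothetical defense: train the model to predict consistently on clean
    # and adversarial inputs, one way to realize "defending against the
    # attack improves generalization" without measuring domain divergence.
    x_adv = fgsm_perturb(model, x_batch, eps)
    optimizer.zero_grad()  # clear gradients accumulated by the attack
    p_clean = F.softmax(model(x_batch), dim=1).detach()
    loss = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                    reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under a source-absent reading, `adapt_step` would run over unlabeled target batches only; under a target-absent reading, the same attack would be mounted on source batches to harden the model before deployment. Either way, no cross-domain divergence is ever computed, which is the point of the divergence-agnostic setting.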
