4.7 Review

A Review of Single-Source Deep Unsupervised Visual Domain Adaptation

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2020.3028503

Keywords

Task analysis; Data models; Adaptation models; Visualization; Training; Loss measurement; Learning systems; Adversarial learning; discrepancy-based methods; domain adaptation (DA); self-supervised learning (SSL); transfer learning

Funding

  1. Berkeley DeepDrive

Abstract

This article reviews the latest single-source deep unsupervised domain adaptation (DA) methods for visual tasks and discusses new perspectives for future research. The article starts with the definitions of different DA strategies and descriptions of existing benchmark datasets, then summarizes and compares different categories of methods, and finally discusses future research directions.
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks. However, in many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data. To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain. Unfortunately, direct transfer across domains often performs poorly due to the presence of domain shift or dataset bias. Domain adaptation (DA) is a machine learning paradigm that aims to learn a model from a source domain that can perform well on a different (but related) target domain. In this article, we review the latest single-source deep unsupervised DA methods focused on visual tasks and discuss new perspectives for future research. We begin with the definitions of different DA strategies and the descriptions of existing benchmark datasets. We then summarize and compare different categories of single-source unsupervised DA methods, including discrepancy-based methods, adversarial discriminative methods, adversarial generative methods, and self-supervision-based methods. Finally, we discuss future research directions with challenges and possible solutions.
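
The abstract groups the surveyed methods into discrepancy-based, adversarial discriminative, adversarial generative, and self-supervision-based families. As a concrete illustration of the first family, the sketch below computes a Gaussian-kernel maximum mean discrepancy (MMD) between source and target feature batches, the kind of discrepancy loss such methods minimize alongside the source classification loss. This is a minimal sketch for orientation only; the function name, bandwidth value, and batch shapes are illustrative assumptions and are not taken from the reviewed paper.

```python
import torch


def mmd_rbf(source_feats: torch.Tensor, target_feats: torch.Tensor,
            bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between source and target feature batches."""

    def rbf_kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Pairwise squared Euclidean distances -> Gaussian RBF kernel values.
        sq_dists = torch.cdist(a, b) ** 2
        return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

    k_ss = rbf_kernel(source_feats, source_feats).mean()
    k_tt = rbf_kernel(target_feats, target_feats).mean()
    k_st = rbf_kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2.0 * k_st


if __name__ == "__main__":
    # Hypothetical feature batches from a shared backbone (not from the paper).
    f_src = torch.randn(32, 128)  # labeled source batch features
    f_tgt = torch.randn(32, 128)  # unlabeled target batch features
    print(mmd_rbf(f_src, f_tgt).item())
    # In training, this term is typically added to the source classification loss:
    # total_loss = ce_loss(src_logits, src_labels) + lambda_mmd * mmd_rbf(f_src, f_tgt)
```

Minimizing such a discrepancy term encourages the backbone to produce feature distributions that match across domains, so a classifier trained only on labeled source data can transfer to the unlabeled target domain.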
