Article

Distributionally robust unsupervised domain adaptation

Journal

Journal of Computational and Applied Mathematics

Publisher

Elsevier
DOI: 10.1016/j.cam.2023.115369

Keywords

Domain adaptation; Distributionally robust learning; Unsupervised learning; Deep learning

Obtaining ground-truth label information from real-world data along with uncertainty quantification can be difficult or even infeasible. In the absence of labeled data for a given task, unsupervised domain adaptation (UDA) techniques have achieved great success by learning transferable knowledge from labeled source domain data and applying it to unlabeled target domain data, yet few studies consider addressing uncertainties under domain shifts to improve model robustness. Distributionally robust learning (DRL) is emerging as a high-potential technique for building reliable learning systems that are robust to distribution shifts. In this study, we propose a distributionally robust unsupervised domain adaptation (DRUDA) method to enhance the generalization ability of machine learning models under input space perturbations. The DRL-based UDA learning scheme is formulated as a min-max optimization problem that optimizes worst-case perturbations of the training source data. Our Wasserstein distributionally robust framework can reduce the shifts in the joint distributions across domains. The proposed DRUDA has been tested on digit datasets and the Office-31 dataset and compared with other state-of-the-art domain adaptation techniques. Our experimental results show that the proposed DRUDA leads to improved domain adaptation accuracy on target domains. © 2023 Elsevier B.V. All rights reserved.
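The min-max scheme described in the abstract can be illustrated with a small sketch: an inner loop performs gradient ascent on the inputs to find worst-case perturbations penalized by their distance from the original data (the standard Lagrangian surrogate for a Wasserstein ball), and an outer loop updates the model on those perturbed inputs. This is a minimal toy example on a linear regression model, not the authors' DRUDA architecture; the data, the penalty weight `gamma`, and the step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled "source domain" data (illustrative, not from the paper)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=64)

def worst_case_perturb(X0, y, w, gamma=10.0, steps=15, lr=0.05):
    """Inner maximization: gradient ascent on loss(x', w) - gamma * ||x' - x0||^2.
    This approximates the Wasserstein-robust surrogate objective; gamma trades
    off adversarial strength against transport cost."""
    Xp = X0.copy()
    for _ in range(steps):
        resid = Xp @ w - y                                   # per-sample residual
        grad = 2.0 * resid[:, None] * w[None, :] \
               - 2.0 * gamma * (Xp - X0)                     # ascent direction
        Xp = Xp + lr * grad
    return Xp

def train_drl(X, y, gamma=10.0, epochs=100, lr=0.01):
    """Outer minimization: gradient descent on the loss at the perturbed inputs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        Xp = worst_case_perturb(X, y, w, gamma)              # worst-case inputs
        resid = Xp @ w - y
        grad_w = 2.0 * Xp.T @ resid / len(y)                 # descent direction
        w = w - lr * grad_w
    return w

w_robust = train_drl(X, y)
```

With a large `gamma` the perturbations stay small and the solution stays close to ordinary least squares; shrinking `gamma` enlarges the effective Wasserstein ball and yields a more conservative model.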

