Article

Riemannian representation learning for multi-source domain adaptation

Journal

PATTERN RECOGNITION
Volume 137

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.109271

Keywords

Convex optimization; Hellinger distance; Multi-source domain adaptation; Representation learning; Riemannian manifold

Multi-Source Domain Adaptation (MSDA) aims at training a classification model that achieves small target error, by leveraging labeled data from multiple source domains and unlabeled data from a target domain. The source and target domains are described by related but different joint distributions, which lie on a Riemannian manifold named the statistical manifold. In this paper, we characterize the joint distribution difference by the Hellinger distance, which bears strong connection to the Riemannian metric defined on the statistical manifold. We show that the target error of a neural network classification model is upper bounded by the average source error of the model and the average Hellinger distance, i.e., the average of multiple Hellinger distances between the source and target joint distributions in the network representation space. Motivated by the error bound, we introduce Riemannian Representation Learning (RRL): An approach that trains the network model by minimizing (i) the average empirical Hellinger distance with respect to the representation function, and (ii) the average empirical source error with respect to the network model. Specifically, we derive the average empirical Hellinger distance by constructing and solving unconstrained convex optimization problems whose global optimal solutions are easy to find. With the network model trained, we expect it to achieve small error in the target domain. Our experimental results on several image datasets demonstrate that the proposed RRL approach is statistically better than the comparison methods. (c) 2022 Elsevier Ltd. All rights reserved.
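The abstract's central quantity is the Hellinger distance between source and target joint distributions. As a minimal illustrative sketch only (the paper itself estimates this distance in a learned network representation space via convex optimization, which is not reproduced here), the Hellinger distance between two discrete probability distributions, and its average over several sources against one target, can be computed as:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions:
    H(P, Q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, bounded in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2))

def average_hellinger(sources, target):
    """Average of the Hellinger distances from each source distribution
    to the target distribution, as used in the error bound's second term."""
    return float(np.mean([hellinger(s, target) for s in sources]))

# Identical distributions have distance 0; disjoint supports give distance 1.
print(hellinger([0.5, 0.5], [0.5, 0.5]))  # -> 0.0
print(hellinger([1.0, 0.0], [0.0, 1.0]))  # -> 1.0
```

The `average_hellinger` helper is a hypothetical name introduced here to mirror the "average Hellinger distance" in the bound; the paper minimizes an empirical estimate of this quantity with respect to the representation function.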

