Article

Joint-product representation learning for domain generalization in classification and regression

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 35, Issue 22, Pages 16509-16526

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-023-08520-1

Keywords

Domain generalization; Domain alignment; Representation learning; Variational characterization

Abstract

In this work, we study the problem of generalizing a prediction (classification or regression) model trained on a set of source domains to an unseen target domain, where the source and target domains are different but related, i.e., the domain generalization problem. The challenge in this problem lies in the domain difference, which could degrade the generalization ability of the prediction model. To tackle this challenge, we propose to learn a neural network representation function to align a joint distribution and a product distribution in the representation space, and show that such joint-product distribution alignment conveniently leads to the alignment of multiple domains. In particular, we align the joint distribution and the product distribution under the L2-distance, and show that this distance can be analytically estimated by exploiting its variational characterization and a linear variational function. This allows us to conveniently align the two distributions by minimizing the estimated distance with respect to the network representation function. Our experiments on synthetic and real-world datasets for classification and regression demonstrate the effectiveness of the proposed solution. For example, it achieves the best average classification accuracy of 82.26% on the text dataset Amazon Reviews, and the best average regression error of 0.114 on the WiFi dataset UJIIndoorLoc.
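The analytic estimate mentioned in the abstract can be sketched as follows. Assuming a linear variational function over a joint feature map phi(z, d) = z ⊗ onehot(d) (a choice made here purely for illustration; the paper's exact feature map and estimator may differ), the optimal variational function is proportional to the difference of feature means between joint and product samples, which yields a closed-form plug-in distance estimate:

```python
import numpy as np

def joint_product_l2(z, d, num_domains, rng=None):
    """Plug-in estimate of a squared L2-type distance between the joint
    distribution p(z, d) and the product p(z)p(d), using a linear
    variational function over phi(z, d) = z (outer) onehot(d).
    Illustrative sketch only; not the paper's exact estimator."""
    rng = np.random.default_rng() if rng is None else rng
    n = z.shape[0]
    onehot = np.eye(num_domains)[d]                       # (n, D)
    # Features of joint samples: z_i (outer) onehot(d_i), flattened.
    joint_feat = (z[:, :, None] * onehot[:, None, :]).reshape(n, -1)
    # Product samples: pair each z_i with an independently shuffled label.
    onehot_perm = np.eye(num_domains)[rng.permutation(d)]
    prod_feat = (z[:, :, None] * onehot_perm[:, None, :]).reshape(n, -1)
    # With a linear variational function, the optimum is the mean
    # difference, so the distance estimate is its squared norm.
    diff = joint_feat.mean(axis=0) - prod_feat.mean(axis=0)
    return float(diff @ diff)
```

When z is independent of the domain label d the estimate is near zero, and it grows as the representation becomes domain-dependent; minimizing it with respect to the representation network would therefore encourage domain alignment.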

