Journal
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Volume 35, Issue 8, Pages 8052-8072
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2022.3178128
Keywords
Domain generalization; domain adaptation; transfer learning; out-of-distribution generalization
This paper provides the first review of recent advances in domain generalization, discussing the formal definition, related fields, theories, algorithms, datasets, applications, and potential research topics. It categorizes algorithms into data manipulation, representation learning, and learning strategy, and presents popular algorithms in each category. It also introduces a codebase for fair evaluation.
Machine learning systems generally assume that the training and testing data follow the same distribution, an assumption that often fails in practice; a key requirement is therefore to develop models that can generalize to unseen distributions. Domain generalization (DG), i.e., out-of-distribution generalization, has attracted increasing interest in recent years. Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain. Great progress has been made in this area in recent years, and this paper presents the first review of those advances. First, we provide a formal definition of domain generalization and discuss several related fields. Second, we thoroughly review the theories related to domain generalization and carefully analyze the theory behind generalization. We categorize recent algorithms into three classes: data manipulation, representation learning, and learning strategy, and present several popular algorithms in detail for each category. Third, we introduce the commonly used datasets, applications, and our open-sourced codebase for fair evaluation. Finally, we summarize the existing literature and present some potential research topics for the future.
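The DG setting described in the abstract can be sketched concretely: a model is trained on several related source domains and then evaluated on a target domain it never saw. The snippet below is a minimal illustration, not the paper's method; the synthetic data, the domain "shift" parameter, and the nearest-centroid classifier are all assumptions chosen only to make the train/test-domain split visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=200):
    """Synthetic two-class domain; `shift` models a domain-specific bias."""
    X0 = rng.normal(loc=[0.0 + shift, 0.0], scale=0.5, size=(n, 2))
    X1 = rng.normal(loc=[2.0 + shift, 2.0], scale=0.5, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Several source domains with different shifts; the target shift is unseen.
sources = [make_domain(s) for s in (-0.4, 0.0, 0.4)]
X_train = np.vstack([X for X, _ in sources])
y_train = np.concatenate([y for _, y in sources])

# Simplest baseline: pool all source data and fit one model (ERM-style).
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each point to the class with the nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

X_test, y_test = make_domain(0.8)  # unseen target domain
acc = (predict(X_test) == y_test).mean()
print(f"target-domain accuracy: {acc:.2f}")
```

The survey's three algorithm families (data manipulation, representation learning, learning strategy) can all be read as ways to improve on this naive pooled-training baseline when the target shift is larger.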