Article

MixStyle Neural Networks for Domain Generalization and Adaptation

Journal

International Journal of Computer Vision

Publisher

Springer

DOI: 10.1007/s11263-023-01913-8

Keywords

-

The researchers propose MixStyle, a module that tackles the long-standing failure of neural networks to generalize to unseen data under domain shift. MixStyle mixes the feature statistics of random instance pairs during training, synthesizing new domains in the feature space and thereby improving domain generalization performance.
Neural networks do not generalize well to unseen data with domain shifts, a longstanding problem in machine learning and AI. To overcome this problem, we propose MixStyle, a simple, plug-and-play, parameter-free module that improves domain generalization performance without the need to collect more data or increase model capacity. The design of MixStyle is simple: it mixes the feature statistics of two random instances in a single forward pass during training. The idea is grounded in the finding from recent style transfer research that feature statistics capture image style information, which essentially defines visual domains. Therefore, mixing feature statistics can be seen as an efficient way to synthesize new domains in the feature space, thus achieving data augmentation. MixStyle is easy to implement with a few lines of code, does not require modification to training objectives, and fits a variety of learning paradigms including supervised domain generalization, semi-supervised domain generalization, and unsupervised domain adaptation. Our experiments show that MixStyle can significantly boost out-of-distribution generalization performance across a wide range of tasks including image recognition, instance retrieval, and reinforcement learning. The source code is released at https://github.com/KaiyangZhou/mixstyle-release.
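The mechanism the abstract describes, mixing the channel-wise feature statistics of two random instances during a forward pass, maps directly onto a few lines of code. Below is a minimal PyTorch sketch of such a module; the default hyperparameters (a Beta(0.1, 0.1) mixing distribution and a 0.5 application probability) and the exact structure are illustrative assumptions, and the authors' reference implementation is available at the repository linked above.

```python
import torch
import torch.nn as nn


class MixStyle(nn.Module):
    """Mixes per-instance feature statistics across a batch during training.

    A sketch of the idea described in the abstract; hyperparameter defaults
    here are illustrative assumptions, not values taken from this page.
    """

    def __init__(self, p=0.5, alpha=0.1, eps=1e-6):
        super().__init__()
        self.p = p                                           # chance of applying the mix
        self.beta = torch.distributions.Beta(alpha, alpha)   # mixing-weight sampler
        self.eps = eps                                       # numerical stability

    def forward(self, x):
        # x: feature map of shape (B, C, H, W); pass through at test time.
        if not self.training or torch.rand(1).item() > self.p:
            return x

        B = x.size(0)
        # Channel-wise mean/std per instance: the "style" statistics.
        mu = x.mean(dim=[2, 3], keepdim=True)
        sig = (x.var(dim=[2, 3], keepdim=True) + self.eps).sqrt()
        x_normed = (x - mu) / sig

        # Pair each instance with a random other instance and interpolate
        # their statistics, synthesizing a new "domain" in feature space.
        lam = self.beta.sample((B, 1, 1, 1)).to(x.device)
        perm = torch.randperm(B, device=x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]

        # Re-style the normalized content with the mixed statistics.
        return x_normed * sig_mix + mu_mix
```

A module like this would typically be inserted after the shallower blocks of a CNN backbone, where feature statistics are most style-related. Note that it introduces no learnable parameters and leaves the training objective untouched, consistent with the plug-and-play, parameter-free claim in the abstract.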

