Article

Self-Supervised Learning Across Domains

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3070791

Keywords

Task analysis; Visualization; Indexes; Adaptation models; Data models; Training; Image recognition; Self-supervision; domain generalization; domain adaptation; multi-task learning

Funding

  European Research Council (ERC) [637076]

Abstract

Human adaptability relies crucially on learning and merging knowledge from both supervised and unsupervised tasks: parents point out a few important concepts, and the children then fill in the gaps on their own. This is particularly effective because supervised learning can never be exhaustive, and learning autonomously therefore allows the learner to discover invariances and regularities that help generalization. In this paper we propose to apply a similar approach to the problem of object recognition across domains: our model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals on the same images. This secondary task helps the network focus on object shapes, learning concepts such as spatial orientation and part correlation, while acting as a regularizer for the classification task over multiple visual domains. Extensive experiments confirm our intuition and show that our multi-task method, combining supervised and self-supervised knowledge, provides competitive results with respect to more complex domain generalization and adaptation solutions. It also proves its potential in the novel and challenging predictive and partial domain adaptation scenarios.
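The multi-task setup described in the abstract amounts to a shared feature extractor with two heads trained jointly: one on the supervised classification loss and one on a self-supervised auxiliary loss that acts as a regularizer. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only; the ResNet-18 backbone, the rotation-prediction auxiliary task, and the weighting hyperparameter alpha are assumptions made for illustration and are not taken from the paper.

    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    class MultiTaskNet(nn.Module):
        # Shared backbone with two heads: supervised object classification
        # and a self-supervised auxiliary head (here: rotation prediction).
        def __init__(self, num_classes, num_aux_classes=4):
            super().__init__()
            backbone = models.resnet18(weights=None)
            feat_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()              # keep only the feature extractor
            self.backbone = backbone
            self.cls_head = nn.Linear(feat_dim, num_classes)      # semantic labels
            self.aux_head = nn.Linear(feat_dim, num_aux_classes)  # e.g. 0/90/180/270 degrees

        def forward(self, x):
            feats = self.backbone(x)
            return self.cls_head(feats), self.aux_head(feats)

    def training_step(model, images, labels, aux_images, aux_labels, alpha=0.7):
        # One multi-task step: supervised cross-entropy on the original images
        # plus a weighted self-supervised loss on their transformed copies.
        cls_logits, _ = model(images)
        _, aux_logits = model(aux_images)
        loss_cls = F.cross_entropy(cls_logits, labels)
        loss_aux = F.cross_entropy(aux_logits, aux_labels)
        return loss_cls + alpha * loss_aux

At test time only the classification head is used; the auxiliary head and the transformed image copies serve purely to regularize training, which is what lets the same recipe carry over to the domain generalization and adaptation settings discussed above.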

