4.7 Article

Self-Supervised Learning: Generative or Contrastive

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2021.3090866

Keywords

Self-supervised learning; generative model; contrastive learning; deep learning

Abstract
Deep supervised learning has achieved great success in the last decade. However, its heavy dependence on manual labels and its vulnerability to adversarial attacks have driven researchers to explore alternative paradigms. Self-supervised learning (SSL) has attracted many researchers in recent years because of its soaring performance on representation learning. Self-supervised representation learning leverages the input data itself as supervision and benefits almost all types of downstream tasks. In this survey, we review recent self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning. We comprehensively review the existing empirical methods and summarize them into three main categories according to their objectives: generative, contrastive, and generative-contrastive (adversarial). We further collect related theoretical analyses of self-supervised learning to provide deeper insight into why self-supervised learning works. Finally, we briefly discuss open problems and future directions for self-supervised learning. An outline slide for the survey is also provided.
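The survey's three categories refer to the form of the pre-training objective. As a concrete illustration only (not taken from the paper), the sketch below implements the InfoNCE loss used by contrastive methods such as SimCLR: two augmented views of the same input form a positive pair, and the remaining samples in the batch act as negatives. The function name info_nce_loss, the tensor shapes, and the temperature value are illustrative assumptions.

    # Minimal sketch of a contrastive SSL objective (illustrative; not from the paper).
    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        """InfoNCE loss.
        z1, z2: (batch, dim) embeddings of two augmented views of the same inputs;
        row i of z1 and row i of z2 form a positive pair, all other rows are negatives."""
        z1 = F.normalize(z1, dim=1)               # project embeddings onto the unit sphere
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature        # (batch, batch) cosine-similarity logits
        targets = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
        return F.cross_entropy(logits, targets)   # classify each view's true partner

    # Usage: encode two augmentations of the same batch, then compute the loss.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # stand-ins for encoder outputs
    loss = info_nce_loss(z1, z2)

By contrast, a generative objective scores reconstruction of (masked) input, and a generative-contrastive (adversarial) objective trains a discriminator to distinguish real inputs from generated ones.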

Authors

Xiao Liu; Fanjin Zhang; Zhenyu Hou; Li Mian; Zhaoyu Wang; Jing Zhang; Jie Tang
