Article

Self-Supervised Learning by Estimating Twin Class Distribution

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 32, Pages 2228-2236

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2023.3266169

Keywords

Task analysis; Mutual information; Entropy; Probability distribution; Self-supervised learning; Classification algorithms; Neural networks; Unsupervised learning; image classification


This paper presents Twist, a self-supervised representation learning method that classifies large-scale unlabeled datasets in an end-to-end manner. The authors use a siamese network terminated by a softmax operation to generate twin class distributions for augmented views of each image. By maximizing the mutual information between input images and output class predictions, Twist avoids collapsed solutions and achieves state-of-the-art performance on various tasks. On the semi-supervised classification task, Twist reaches 61.2% top-1 accuracy with 1% of the ImageNet labels and a ResNet-50 backbone, a 6.2% improvement over the previous best result. Code and pre-trained models are available at https://github.com/bytedance/TWIST.
We present Twist, a simple and theoretically explainable self-supervised representation learning method that classifies large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions for two augmented views of an image. Without supervision, we enforce the class distributions of different augmentations to be consistent. However, simply minimizing the divergence between augmentations generates collapsed solutions, i.e., the same class distribution is output for all images, and little information about the input images is preserved. To solve this problem, we propose to maximize the mutual information between the input image and the output class predictions. Specifically, we minimize the entropy of the distribution for each sample to make the class prediction assertive, and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, Twist naturally avoids collapsed solutions without specific designs such as an asymmetric network, a stop-gradient operation, or a momentum encoder. As a result, Twist outperforms previous state-of-the-art methods on a wide range of tasks. On the semi-supervised classification task in particular, Twist achieves 61.2% top-1 accuracy with 1% of the ImageNet labels using a ResNet-50 backbone, surpassing the previous best result by 6.2%. Code and pre-trained models are available at https://github.com/bytedance/TWIST.
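
To make the objective concrete, the following is a minimal PyTorch-style sketch of a Twist-like loss, written directly from the description in the abstract: a consistency term between the twin class distributions, a per-sample entropy term that sharpens each prediction, and a batch-mean entropy term that diversifies predictions across samples. The function names, the epsilon constant, and the equal weighting of the three terms are our illustrative assumptions, not the authors' exact implementation; see the GitHub link above for the official code.

    import torch
    import torch.nn.functional as F

    def entropy(p, eps=1e-8):
        # Shannon entropy along the last dimension.
        return -(p * torch.log(p + eps)).sum(dim=-1)

    def twist_loss(logits_a, logits_b, eps=1e-8):
        # logits_a, logits_b: (N, K) class logits produced by the
        # shared-weight (siamese) network for two augmentations of
        # the same N images.
        p_a = F.softmax(logits_a, dim=-1)
        p_b = F.softmax(logits_b, dim=-1)

        # 1) Consistency: the twin class distributions should agree
        #    (symmetric KL divergence between the two views).
        consistency = 0.5 * (
            F.kl_div(torch.log(p_b + eps), p_a, reduction="batchmean")
            + F.kl_div(torch.log(p_a + eps), p_b, reduction="batchmean")
        )

        # 2) Sharpness: minimize per-sample entropy so each class
        #    prediction is assertive.
        sharpness = 0.5 * (entropy(p_a).mean() + entropy(p_b).mean())

        # 3) Diversity: maximize the entropy of the batch-mean
        #    distribution so different samples spread over classes.
        diversity = entropy(0.5 * (p_a.mean(dim=0) + p_b.mean(dim=0)))

        # Minimizing (sharpness - diversity) corresponds to maximizing
        # the mutual information between inputs and class predictions;
        # no asymmetric network, stop-gradient, or momentum encoder is
        # needed to avoid collapse.
        return consistency + sharpness - diversity

In a training loop, the two logit tensors would come from feeding two augmentations of the same batch through the shared-weight network, e.g. loss = twist_loss(model(aug1(x)), model(aug2(x))).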
