Proceedings Paper

SELF-SUPERVISED LEARNING FOR FEW-SHOT IMAGE CLASSIFICATION

Publisher

IEEE
DOI: 10.1109/ICASSP39728.2021.9413783

Keywords

Few-shot learning; Self-supervised learning; Metric learning; Cross-domain

Funding

  1. Alibaba Group

Abstract

This paper proposes a method to train a more generalized embedding network using self-supervised learning, which provides robust representations for downstream tasks. Extensive comparisons on two few-shot classification datasets show better performance than previous baselines, and state-of-the-art results are achieved on cross-domain few-shot classification tasks.
Few-shot image classification aims to classify unseen classes from only a few labelled samples. Recent works benefit from meta-learning over episodic tasks and can adapt quickly to new classes from training to testing. Because each task contains only a limited number of samples, the initial embedding network for meta-learning becomes an essential component and largely affects performance in practice. To this end, most existing methods rely heavily on an effective embedding network. However, with limited labelled data, the scale of an embedding network trained in a supervised learning (SL) manner is constrained, which becomes a bottleneck for few-shot learning methods. In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which provides robust representations for downstream tasks by learning from the data itself. We evaluate our work through extensive comparisons with previous baseline methods on two few-shot classification datasets (i.e., MiniImageNet and CUB) and achieve better performance than the baselines. Tests on four datasets for cross-domain few-shot classification show that the proposed method achieves state-of-the-art results and further demonstrate the robustness of the proposed model. Our code is available at https://github.com/phecy/SSL-FEW-SHOT.
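To make the episodic, metric-based setup described in the abstract concrete, the sketch below evaluates one few-shot episode by classifying query images according to their distance to class prototypes computed with a frozen embedding network (for example, one pretrained with self-supervised learning). This is a minimal illustration, not the authors' released code (see the linked repository for that); the function name, `embedding_net`, and the episode shapes are assumptions for illustration.

```python
# Minimal sketch of prototypical (metric-based) few-shot evaluation with a
# frozen, pretrained embedding network. All names here are illustrative.
import torch
import torch.nn.functional as F


def fewshot_episode_accuracy(embedding_net, support_x, support_y,
                             query_x, query_y, n_way):
    """Classify query samples by distance to class prototypes in embedding space.

    support_x: [n_way * k_shot, C, H, W]  labelled support images
    support_y: [n_way * k_shot]           labels in {0, ..., n_way - 1}
    query_x:   [n_query, C, H, W]         query images to classify
    query_y:   [n_query]                  ground-truth labels for evaluation
    """
    embedding_net.eval()
    with torch.no_grad():
        z_support = embedding_net(support_x)   # [n_support, d]
        z_query = embedding_net(query_x)       # [n_query, d]

        # Class prototype = mean embedding of that class's support samples.
        prototypes = torch.stack(
            [z_support[support_y == c].mean(dim=0) for c in range(n_way)]
        )                                      # [n_way, d]

        # Negative squared Euclidean distance serves as the classification logit.
        logits = -torch.cdist(z_query, prototypes) ** 2   # [n_query, n_way]
        preds = logits.argmax(dim=1)
        return (preds == query_y).float().mean().item()
```

In this setup the embedding network is the component the paper focuses on: a stronger, more generalized backbone (e.g., SSL-pretrained rather than trained only with supervised labels) directly improves the quality of the prototypes and hence episode accuracy.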
