Article

Explore pretraining for few-shot learning

Journal

MULTIMEDIA TOOLS AND APPLICATIONS

Publisher

SPRINGER
DOI: 10.1007/s11042-023-15223-1

Keywords

Computer vision; Deep learning; Image classification; Few-shot learning


Few-shot learning aims to classify new categories from only a few samples. Pretraining the model on the base classes can improve its performance on new categories. To further improve pretraining on the base classes, we propose a two-stage model pretraining method. In the first stage, we perform SimSiam contrastive pretraining, which helps the model learn invariant knowledge. In the second stage, we perform multi-task pretraining on a general classification task and a rotation prediction task, which helps the model learn equivariant knowledge. Together, the two pretraining stages significantly enhance the model's capacity to learn new categories and improve few-shot classification accuracy. Experiments show that our method achieves state-of-the-art few-shot classification performance on the mini-ImageNet and FC100 datasets for both 1-shot and 5-shot tasks.
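The two pretraining objectives described in the abstract can be sketched as loss functions. This is a minimal illustrative sketch, not the authors' implementation: the SimSiam loss below follows the standard stop-gradient formulation, and the stage-two multi-task loss combines cross-entropy classification with 4-way rotation prediction; the weighting factor `lam` is a hypothetical hyperparameter, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def simsiam_loss(p1, z2, p2, z1):
    """Stage one: SimSiam negative cosine similarity with stop-gradient.

    p1, p2: predictor outputs for the two augmented views.
    z1, z2: projector outputs for the two views (detached below).
    """
    def d(p, z):
        # stop-gradient on z, as in SimSiam
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

def multitask_loss(class_logits, class_labels, rot_logits, rot_labels, lam=1.0):
    """Stage two: joint classification + rotation prediction.

    rot_labels index the four rotations (0/90/180/270 degrees);
    `lam` is an assumed weighting between the two terms.
    """
    ce_cls = F.cross_entropy(class_logits, class_labels)
    ce_rot = F.cross_entropy(rot_logits, rot_labels)
    return ce_cls + lam * ce_rot
```

Both functions return scalar losses that can be backpropagated through a shared backbone, so the same encoder is first trained with `simsiam_loss` and then fine-tuned with `multitask_loss` before few-shot evaluation.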
