Article

A Survey on Semi-, Self- and Unsupervised Learning for Image Classification

Journal

IEEE Access
Volume 9, Pages 82146-82168

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/ACCESS.2021.3084358

Keywords

Semi-supervised learning; self-supervised learning; unsupervised learning; supervised learning; deep learning; image classification; taxonomy; training; task analysis; measurement; survey

Funding

  1. Land Schleswig-Holstein through the Open Access Publikationsfonds Funding Program


Summary

Current deep learning strategies in computer vision depend heavily on labeled data, which is not feasible for many real-world problems. Incorporating unlabeled data, and addressing issues such as class imbalance and robustness, is therefore crucial. Identified future research trends include scalability to real-world settings, decreasing supervision requirements, and combining ideas from different methods for improved performance.

Abstract

While deep learning strategies achieve outstanding results in computer vision tasks, one issue remains: the current strategies rely heavily on large amounts of labeled data. In many real-world problems, creating this much labeled training data is not feasible. It is therefore common to incorporate unlabeled data into the training process to reach comparable results with fewer labels. Due to the large volume of concurrent research, it is difficult to keep track of recent developments. In this survey, we provide an overview of commonly used ideas and methods in image classification with fewer labels. We compare 34 methods in detail based on their performance and their commonly used ideas rather than a fine-grained taxonomy. In our analysis, we identify three major trends that lead to future research opportunities:

  1. State-of-the-art methods are scalable to real-world applications in theory, but issues such as class imbalance, robustness, and fuzzy labels are not considered.
  2. The degree of supervision needed to achieve results comparable to using all labels is decreasing, so methods need to be extended to settings with a variable number of classes.
  3. All methods share some common ideas, but we identify clusters of methods that do not share many ideas. We show that combining ideas from different clusters can lead to better performance.
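One common semi-supervised idea in this area is self-training with pseudo-labels: a model fit on the few labeled examples assigns labels to the unlabeled data it is most confident about, and those examples are folded back into the training set. A minimal sketch using a nearest-centroid classifier on toy data (the classifier, the confidence threshold, and the data are illustrative assumptions, not a specific method from the survey's comparison):

```python
import numpy as np

def fit_centroids(X, y):
    # Class centroids estimated from the currently labeled data.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to its nearest centroid; distance acts as
    # an (inverse) confidence score in this toy setting.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

# Toy data: two Gaussian blobs, but only 4 points carry labels.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(50, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(50, 2))
X_lab = np.array([[-2.0, 0.0], [-1.8, 0.2], [2.0, 0.0], [1.9, -0.1]])
y_lab = np.array([0, 0, 1, 1])
X_unlab = np.vstack([X0, X1])  # treated as unlabeled

for _ in range(3):  # a few self-training rounds
    centroids = fit_centroids(X_lab, y_lab)
    pred, dist = predict(centroids, X_unlab)
    confident = dist < 1.0  # hypothetical confidence threshold
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, pred[confident]])
    X_unlab = X_unlab[~confident]
```

In the deep-learning methods the survey covers, the base model would be a neural network and the confidence measure would typically come from its predicted class probabilities, often combined with data augmentation or consistency regularization.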

