Article

GAN-Based Image Colorization for Self-Supervised Visual Feature Learning

Journal

SENSORS
Volume 22, Issue 4

Publisher

MDPI
DOI: 10.3390/s22041599

Keywords

self-supervised learning; transfer learning; image colorization; convolutional neural network; generative adversarial network

Funding

  1. Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University in Skopje, North Macedonia
  2. FCT/MEC
  3. FEDER-PT2020 partnership agreement [UIDB/50008/2020]
  4. COST (European Cooperation in Science and Technology) [CCA19121]


This paper presents a self-supervised learning method that uses generative adversarial networks (GANs) for image colorization. Transferring the knowledge gained from colorization to downstream tasks such as classification and segmentation improves performance without requiring manual annotation.
Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. To avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual features automatically. In this paper, we first focus on image colorization with generative adversarial networks (GANs) because of their ability to generate the most realistic colorization results. Then, via transfer learning, we use this as a proxy task for visual understanding. In particular, we propose to use conditional GANs (cGANs) for image colorization and transfer the gained knowledge to two other downstream tasks, namely, multilabel image classification and semantic segmentation. This is the first time that GANs have been used for self-supervised feature learning through image colorization. Through extensive experiments with the COCO and Pascal datasets, we show an improvement of 5% on the classification task and 2.5% on the segmentation task. This demonstrates that image colorization with conditional GANs can boost the performance of other downstream tasks without the need for manual annotation.
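The pretext setup the abstract describes, predicting color from a grayscale input with a cGAN trained under an adversarial objective, can be sketched numerically. Note the details below are illustrative assumptions, not the paper's exact formulation: the pix2pix-style combined loss (adversarial term plus an L1 term weighted by lambda = 100) and the simple luma/chroma split standing in for the Lab decomposition commonly used in colorization work.

```python
import numpy as np

def luma_chroma_split(rgb):
    """Split an RGB image into a grayscale input and chroma targets.

    A simplified stand-in (assumption) for the Lab split typically used
    in colorization: the generator would see only the luminance channel
    and learn to predict the two chroma channels.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b       # ITU-R BT.601 luma
    chroma = np.stack([r - luma, b - luma], axis=-1)
    return luma, chroma

def generator_loss(d_fake, chroma_fake, chroma_real, lam=100.0):
    """pix2pix-style generator objective: adversarial term + lam * L1."""
    adv = -np.mean(np.log(d_fake + 1e-8))          # non-saturating GAN loss
    l1 = np.mean(np.abs(chroma_fake - chroma_real))
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy on real vs. generated (gray, color) pairs."""
    return (-np.mean(np.log(d_real + 1e-8))
            - np.mean(np.log(1.0 - d_fake + 1e-8)))

# Toy check: a perfect colorization with an undecided discriminator
rgb = np.random.rand(8, 8, 3)
luma, chroma = luma_chroma_split(rgb)
g_loss = generator_loss(np.full(4, 0.5), chroma, chroma)
print(round(g_loss, 4))  # → 0.6931, the adversarial term -log(0.5) alone
```

Once such a generator is trained, its encoder features (not sketched here) would be what transfers to the classification and segmentation tasks the abstract reports on.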

