Article

Dehaze on small-scale datasets via self-supervised learning

Journal

VISUAL COMPUTER
Volume -, Issue -, Pages -

Publisher

SPRINGER
DOI: 10.1007/s00371-023-03079-3

Keywords

Pretext task; Single image dehazing; Self-supervised learning


This paper proposes a simple yet effective self-supervised learning method to improve networks' performance on small-scale real-world dehazing datasets. By generating numerous denser hazy images from each real-world hazy image, networks learn the key capability of contrast enhancement, consistently surpassing direct supervised learning.
Real-world dehazing datasets are usually small because collection costs are high. Networks trained on such insufficient data suffer not only low scores on objective metrics but also visually inadequate contrast enhancement. Self-supervised learning helps networks learn useful knowledge from unlabeled data and thus perform better on small-scale data, and it has achieved great success on high-level vision tasks. However, few works have explored self-supervised learning for low-level vision tasks such as dehazing. In this paper, we propose a simple but effective self-supervised learning method for dehazing that improves networks' performance on small-scale real-world datasets. Our method rests on two observations. First, generating visually pleasing haze-free images from real-world hazy images is very difficult, but generating visually pleasing denser hazy images is much easier. Second, forcing networks to reduce dense haze strengthens their contrast enhancement capability, which benefits subsequent dehazing. We therefore generate numerous denser hazy images, termed rehazy, from each real-world hazy image. By pretraining on image pairs [rehazy, hazy], networks learn the key capability of contrast enhancement. Experiments show that our method consistently outperforms direct supervised learning by a considerable margin, at only a modest extra pretraining cost.
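
The abstract describes pretraining on pairs [rehazy, hazy], where a rehazy image is a denser-haze version of a real-world hazy image. Below is a minimal sketch of how such pairs might be synthesized, assuming the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)); the function names, parameter ranges, and the single global airlight are illustrative assumptions, not the paper's actual procedure.

    import numpy as np

    def make_rehazy(hazy, airlight=0.9, extra_transmission=0.7):
        """Synthesize a denser hazy image ("rehazy") from a real-world hazy image.

        Hypothetical sketch: applies one more pass of the atmospheric
        scattering model, treating the hazy image as the "scene" and
        adding haze with transmission t' = extra_transmission and
        global airlight A = airlight. Smaller t' means denser haze.
        The paper's exact rehazing procedure is not specified here.

        hazy: float array in [0, 1], shape (H, W, 3)
        """
        return hazy * extra_transmission + airlight * (1.0 - extra_transmission)

    def rehazy_pairs(hazy, n=8, rng=None):
        """Generate n (rehazy, hazy) pairs with randomly sampled haze
        density for self-supervised pretraining; the sampling ranges
        below are assumptions for illustration."""
        if rng is None:
            rng = np.random.default_rng()
        pairs = []
        for _ in range(n):
            t = rng.uniform(0.5, 0.9)   # assumed transmission range
            a = rng.uniform(0.8, 1.0)   # assumed airlight range
            pairs.append((make_rehazy(hazy, airlight=a, extra_transmission=t), hazy))
        return pairs

Pretraining would then treat rehazy as input and the original hazy image as target, so the network practices removing dense haze (i.e., enhancing contrast) before being fine-tuned on the small labeled dataset.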
