Article

Cartoon-texture guided network for low-light image enhancement

Journal

DIGITAL SIGNAL PROCESSING
Volume 144

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.dsp.2023.104271

Keywords

Low-light image enhancement; Cartoon and texture components; Image decomposition; Normalizing flow; Frequency domain network


This paper investigates the task of recovering normal-exposure images from low-light images and proposes a cartoon-texture guided network called CatNet. By combining a cartoon-guided normalizing flow with an elaborated frequency-domain attention mechanism, CatNet enhances images while preserving more details and richer colors.
Recovering normal-exposure images from low-light images is a challenging task. Recent works have proposed many deep learning methods to address it. Nevertheless, most of them treat cartoon and texture components in the same way, resulting in a loss of details. A recent effort, the unfolding total variation network (UTVNet), recovers the normal-light image by roughly decomposing the input into a noise-free smoothing layer and a detail layer using total variation (TV) regularization, and then processing the two components in different ways. However, its enhanced images exhibit color distortion owing to the limited representation ability of the TV model. To address this limitation, we design a cartoon-texture guided network named CatNet for low-light image enhancement. CatNet uses a cartoon-guided normalizing flow to retain cartoon information and a U-Net equipped with an elaborated frequency-domain attention mechanism, denoted FAU-Net, to recover texture information. Concretely, the ground-truth image is decomposed into cartoon and texture components, which guide the training of the corresponding recovery modules, respectively. We also design a hybrid loss in the spatial and frequency domains to train CatNet. Compared to state-of-the-art methods, our method achieves better results, producing richer colors and more details. The source code and datasets have been made publicly available at https://github.com/shibaoshun/CatNet.
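The two ingredients the abstract names, a cartoon/texture decomposition and a hybrid spatial-plus-frequency loss, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: a frequency-domain low-pass filter stands in for the TV-based decomposition, the L1 formulation of the hybrid loss and the weight `lam` are assumptions, and all function names are hypothetical.

```python
import numpy as np

def decompose(img, cutoff=4):
    """Hypothetical stand-in for a TV-style cartoon/texture split:
    a frequency-domain low-pass gives the smooth 'cartoon' layer,
    and the residual is the 'texture' (detail) layer."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(F)
    cy, cx = h // 2, w // 2
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 1  # keep low freqs
    cartoon = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    texture = img - cartoon  # exact residual, so cartoon + texture == img
    return cartoon, texture

def hybrid_loss(pred, target, lam=0.1):
    """Assumed form of a spatial + frequency hybrid loss: mean L1
    distance in the pixel domain plus a weighted mean L1 distance
    between the 2D Fourier transforms of the two images."""
    spatial = np.mean(np.abs(pred - target))
    freq = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return spatial + lam * freq
```

Supervising the frequency term alongside the pixel term penalizes errors in global structure and fine texture jointly, which is the motivation the abstract gives for training in both domains.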

