Journal
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 28, Issue 9, Pages 4364-4375
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2019.2910412
Keywords
Low-light image enhancement; convolutional neural network; recurrent neural network
Funding
- National Natural Science Foundation of China [U1736219, U1605252, U1803264, 61532006, 61772083, 61802403]
- National Key R&D Program of China [2018YFB0803701]
- Beijing Natural Science Foundation [L182057]
- CCF-Tencent Open Fund
Abstract
Camera sensors often fail to capture clear images or videos in poorly lit environments. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams that simultaneously learn the global content and the salient structures of the clear image within a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose structural details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, guided by another auto-encoder. The experimental results show that the proposed network performs favorably against state-of-the-art low-light image enhancement algorithms.
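The abstract does not spell out the recurrence behind the spatially variant RNN. A minimal NumPy sketch, assuming the common one-directional form h[i] = (1 - p[i]) * x[i] + p[i] * h[i-1], where the per-pixel weight map p (here hand-set; in the paper it would come from the guidance auto-encoder) controls how far information propagates, stopping at edges:

```python
import numpy as np

def spatially_variant_rnn_1d(x, p):
    """Left-to-right recurrent propagation with a per-pixel weight map.

    h[:, j] = (1 - p[:, j]) * x[:, j] + p[:, j] * h[:, j-1]

    x : (H, W) input feature map
    p : (H, W) propagation weights in [0, 1]; in the paper's setting these
        would be predicted by a guidance network, not fixed by hand.
    """
    h = np.empty_like(x, dtype=float)
    h[:, 0] = x[:, 0]
    for j in range(1, x.shape[1]):
        h[:, j] = (1.0 - p[:, j]) * x[:, j] + p[:, j] * h[:, j - 1]
    return h

# Toy comparison: uniform weights smear a step edge, while a spatially
# variant map that drops to 0 at the edge preserves it exactly.
x = np.zeros((4, 8))
x[:, 4:] = 1.0                      # sharp vertical step edge
p_uniform = np.full_like(x, 0.8)    # same weight everywhere -> blurs the edge
p_variant = np.full_like(x, 0.8)
p_variant[:, 4] = 0.0               # cut propagation across the edge
h_uniform = spatially_variant_rnn_1d(x, p_uniform)
h_variant = spatially_variant_rnn_1d(x, p_variant)
print(h_uniform[0])                 # smeared transition after column 4
print(h_variant[0])                 # step edge preserved
```

Full bidirectional, two-axis variants (left/right and up/down passes) would be stacked to cover the whole image; this one-pass version only illustrates why a spatially variant weight map can keep structure that a uniform filter would blur.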