Article

Generative adversarial network for low-light image enhancement

Journal

IET Image Processing
Volume 15, Issue 7, Pages 1542-1552

Publisher

Wiley
DOI: 10.1049/ipr2.12124

Keywords

-

Funding

  1. Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University [CX201959]
  2. Synergy Innovation Foundation of the University and Enterprise for Graduate Students at Northwestern Polytechnical University [XQ201910]
  3. National Natural Science Foundation of China [61972321]

Abstract

This paper proposes an effective generative adversarial network structure for enhancing low-light image quality. The method uses residual blocks and enhancing blocks to improve feature diversity and employs a loss function designed to recover contextual and local details. Experimental results demonstrate that the method performs well in low-light scenarios.

Low-light image enhancement is rapidly gaining research attention due to the increasing demands of extreme visual tasks in various applications. Although numerous methods exist to enhance image quality in low light, it remains unclear how to trade off between human observation and computer vision processing. In this work, an effective generative adversarial network structure is proposed, comprising both a densely residual block (DRB) and an enhancing block (EB), for low-light image enhancement. Specifically, the proposed end-to-end image enhancement method, consisting of a generator and a discriminator, is trained using the hyper loss function. The DRB adopts residual and dense skip connections to connect and enhance features extracted at different depths in the network, while the EB receives unique multi-scale features to ensure feature diversity. Additionally, increasing the feature sizes allows the discriminator to further distinguish between fake and real images at the patch level. The merits of the loss function are also studied to recover both contextual and local details. Extensive experimental results show that our method is capable of dealing with extremely low-light scenes, and the realistic feature generator outperforms several state-of-the-art methods in a number of qualitative and quantitative evaluation tests.
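The densely residual block described in the abstract combines dense skip connections (each layer sees all earlier feature maps) with a residual skip around the whole block. The following PyTorch sketch illustrates that connectivity pattern only; the class name, channel widths, layer count, and activation choices are illustrative assumptions and are not details taken from the paper.

```python
import torch
import torch.nn as nn


class DenselyResidualBlock(nn.Module):
    """Minimal sketch of a densely residual block: dense connections
    inside the block, plus a residual connection around it."""

    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # dense connectivity widens the next layer's input
        # 1x1 conv fuses the concatenated features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            # each layer receives the concatenation of all earlier features
            feats.append(layer(torch.cat(feats, dim=1)))
        # residual skip: add the block input back to the fused output
        return x + self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    block = DenselyResidualBlock()
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

In this sketch the input and output shapes match, so several such blocks could be stacked inside a generator; the paper's actual generator, enhancing block, discriminator, and hyper loss are not reproduced here.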

