Article

LightingNet: An Integrated Learning Method for Low-Light Image Enhancement

Journal

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TCI.2023.3240087

Keywords

Histograms; Task analysis; Lighting; Deep learning; Reflection; Image enhancement; Performance evaluation; Generative adversarial network; low-light enhancement; vision transformer; learning transfer


Abstract

Images captured in low-light environments suffer from severe degradation due to insufficient light, which degrades the performance of industrial and civilian imaging devices. To address the noise, chromatic aberration, and detail distortion that existing enhancement methods exhibit on low-light images, this paper proposes an integrated learning approach, LightingNet, for low-light image enhancement. LightingNet consists of two core components: 1) a complementary learning sub-network and 2) a vision transformer (ViT) low-light enhancement sub-network. The ViT sub-network learns to fit the current data and provides local high-level features through a full-scale architecture, while the complementary learning sub-network provides global fine-tuned features through learning transfer. Extensive experiments confirm the effectiveness of the proposed LightingNet.
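The abstract describes a two-branch design: a local branch (the ViT enhancement sub-network) and a global branch (the complementary, transfer-learned sub-network), whose features are combined to produce the enhanced image. The sketch below illustrates only that fusion idea with stand-in operations; the branch functions, the weighted-sum fusion, and the `alpha` parameter are illustrative assumptions, not the paper's actual networks.

```python
import numpy as np

def local_branch(img):
    # Hypothetical stand-in for the ViT low-light enhancement sub-network:
    # a per-pixel gamma curve that brightens dark regions more than bright ones.
    return np.power(img, 0.5)

def global_branch(img):
    # Hypothetical stand-in for the complementary (transfer-learned) sub-network:
    # global contrast stretching toward the full [0, 1] range.
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, 1e-6)

def two_branch_enhance(img, alpha=0.7):
    # Fuse local and global features; a weighted sum is the simplest
    # possible fusion and is an assumption of this sketch.
    out = alpha * local_branch(img) + (1 - alpha) * global_branch(img)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
dark = rng.random((8, 8, 3)) * 0.2          # synthetic low-light image in [0, 0.2]
enhanced = two_branch_enhance(dark)          # brighter image, same shape
```

Both stand-in branches monotonically brighten the underexposed input, so the fused output has a higher mean intensity while staying in the valid [0, 1] range; in the actual method, both branches are learned networks rather than fixed curves.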

