Article

UIEGAN: Adversarial Learning-Based Photorealistic Image Enhancement for Intelligent Underwater Environment Perception

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/TGRS.2023.3281741

Keywords

Image enhancement; image color analysis; vision sensors; imaging; visualization; image sensors; image restoration; attention mechanism; generative adversarial networks (GANs); underwater image enhancement (UIE)

This article proposes a lightweight encoder-decoder architecture (UIENet) for underwater image enhancement and embeds it into an adversarial framework (UIEGAN). The method surpasses state-of-the-art baselines on benchmark datasets and demonstrates its generalization ability on additional datasets without reference images.
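The gains reported in the abstract below are measured in peak signal-to-noise ratio (PSNR). As a quick point of reference, the minimal sketch below shows how PSNR is typically computed between an enhanced image and its ground-truth reference; it is a generic illustration, not code from the paper, and the default max_val of 255 assumes 8-bit images.

```python
import numpy as np

def psnr(enhanced: np.ndarray, reference: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between an enhanced image and its reference.

    Generic illustration; assumes 8-bit images by default (max_val = 255).
    """
    enhanced = enhanced.astype(np.float64)
    reference = reference.astype(np.float64)
    mse = np.mean((enhanced - reference) ** 2)  # mean squared error over all pixels and channels
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```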
Underwater image enhancement (UIE) is an essential task for intelligent environment perception in underwater remote visual sensing scenarios. However, the limited computing power of mobile platforms restricts the use of large-scale models. In this article, we propose a lightweight encoder-decoder architecture [UIE network (UIENet)] to enhance underwater images from visual sensors. We further embed this architecture into a generative adversarial network (UIEGAN) trained against a supervised discriminator to strengthen its corrective capability and produce photorealistic images with improved global appearance and local details. Multiresolution counterparts are embedded into the generator to diversify the feature representation of the original inputs. In addition, UIEGAN employs a spatial attention module (SAM) and a channel attention module (CAM) jointly to enhance the global-local connections of the image. We evaluate the proposed method on the UIEB and UFO-120 benchmark datasets and report better performance than state-of-the-art (SOTA) schemes, exceeding the baselines on these datasets by 15.43% and 12.85% in peak signal-to-noise ratio (PSNR), respectively. Moreover, on the UIEB challenge set, URPC, and SQUID datasets, which have no reference images, our scheme outperforms the other methods on the evaluation metrics, validating its generalization performance, and a series of ablation studies demonstrates the effectiveness of the functional modules.
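The abstract states that UIEGAN uses a spatial attention module (SAM) and a channel attention module (CAM) jointly to strengthen the global-local connections of the image, but the module designs are not given here. The following PyTorch sketch shows one common way such channel and spatial attention blocks are built (in the CBAM style); it is an illustrative assumption, not the authors' implementation, and the class names (ChannelAttention, SpatialAttention) and the reduction parameter are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel attention: reweights feature channels using global pooling statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))   # global average descriptor
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))    # global max descriptor
        return x * torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """Spatial attention: reweights spatial positions using channel-pooled maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)        # average over channels
        mx, _ = x.max(dim=1, keepdim=True)       # max over channels
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

# Example: apply both modules to an encoder feature map.
feat = torch.randn(1, 64, 128, 128)
feat = ChannelAttention(64)(feat)
feat = SpatialAttention()(feat)
print(feat.shape)  # torch.Size([1, 64, 128, 128])
```

Applying channel attention before spatial attention, as above, is a common ordering; where exactly the paper places these modules inside the encoder-decoder is not described in the abstract.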
