Article

Multi-view underwater image enhancement method via embedded fusion mechanism

Journal

Engineering Applications of Artificial Intelligence
Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.engappai.2023.105946

Keywords

Underwater image; Multi-view input; Fusion mechanism; Deep learning

Abstract

Due to wavelength-dependent light absorption and scattering, underwater images often suffer from color cast and blurry details. Most existing deep learning methods use a single-input end-to-end network structure, which limits the form and content of the extracted features. To address these problems, we present a novel multi-feature underwater image enhancement method via an embedded fusion mechanism (MFEF). Since the quality of the reconstruction result depends in part on the quality of the input image, we apply pre-processing to obtain high-quality inputs, which improves the final reconstruction. We introduce the white balance (WB) algorithm and the contrast-limited adaptive histogram equalization (CLAHE) algorithm as multiple path inputs to extract rich features of different forms from multiple views. To fully exchange information across views, we design a multi-feature fusion (MFF) module that fuses the derived image features. We further propose a pixel-weighted channel attention module (PCAM) that calibrates the detailed features of degraded images by using a weight matrix to assign diverse weights to the encoded features. Finally, our network employs a fusion-mechanism-based encoder and decoder that can restore various underwater scenes. On the UIEB dataset, our PSNR is 10.2% higher than that of Ucolor. Extensive experimental results demonstrate that MFEF outperforms other state-of-the-art underwater image enhancement methods on various real-world datasets.
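The abstract's pre-processing stage feeds WB- and CLAHE-enhanced views into the network. As a minimal illustration of the WB step only, the sketch below implements gray-world white balance in NumPy; this is one common WB variant, assumed here for illustration since the abstract does not specify which WB algorithm the authors use. It corrects the red-channel attenuation typical of underwater scenes by scaling each channel so its mean matches the global mean intensity.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance (illustrative sketch).

    img: float array in [0, 1] with shape (H, W, 3).
    Each channel is scaled so its mean equals the global mean,
    which counteracts the color cast left by wavelength-dependent
    absorption (red light is absorbed fastest underwater).
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)      # per-channel means
    gray_mean = channel_means.mean()                     # neutral target
    gains = gray_mean / np.maximum(channel_means, 1e-6)  # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# Tiny usage example: a uniform bluish-green "underwater" patch.
img = np.zeros((4, 4, 3))
img[..., 0] = 0.2  # weak red channel
img[..., 1] = 0.5
img[..., 2] = 0.6
balanced = gray_world_white_balance(img)
```

After balancing, all three channel means coincide at the original gray mean, removing the blue-green bias; in the full MFEF pipeline this WB view would be one of the multiple path inputs alongside a CLAHE-enhanced view.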


