Article

Multi-Scale Hybrid Fusion Network for Single Image Deraining

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers)

DOI: 10.1109/TNNLS.2021.3112235

Keywords

Rain; Image color analysis; Correlation; Task analysis; Image restoration; Distortion; Coherence; Attention mechanism; image deraining; multi-scale fusion; non-local network


This study addresses the problem of generating rain-free images under complex rain conditions using deep learning models. By designing a multi-level pyramid structure, a non-local fusion module, an attention fusion module, and a residual learning branch to handle different challenges, our method achieves superior performance in generating rain-free images.
Deep learning models have been able to generate rain-free images effectively, but extending these methods to complex rain conditions, where rain streaks show various blurring degrees, shapes, and densities, has remained an open problem. The major challenges include encoding the rain streaks and learning multi-scale context features that preserve both global color coherence and fine detail. To address the former challenge, we design a non-local fusion module (NFM) and an attention fusion module (AFM), and construct a multi-level pyramid architecture to explore the local and global correlations of rain information from the rain image pyramid. More specifically, we apply the non-local operation to fully exploit the self-similarity of rain streaks and fuse multi-scale features along the image pyramid. To address the latter challenge, we additionally design a residual learning branch that adaptively bridges the gaps (e.g., texture and color information) between the predicted rain-free image and the clean background via a hybrid embedding representation. Extensive experiments demonstrate that our proposed method generates much better rain-free images on several benchmark datasets than state-of-the-art algorithms. Moreover, we conduct joint evaluations of deraining performance and detection/segmentation accuracy to further verify the effectiveness of our deraining method for downstream vision tasks and applications. The source code is available at https://github.com/kuihua/MSHFN.
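As a rough illustration of the ideas described in the abstract, the sketch below combines a standard embedded-Gaussian non-local block (to exploit the self-similarity of rain streaks) with coarse-to-fine fusion over a downsampled image pyramid and a global residual that subtracts the predicted rain layer. This is a minimal PyTorch sketch under stated assumptions, not the authors' MSHFN implementation (see the linked repository for that); the module names, channel widths, pooling-based pyramid, and concatenation-based fusion rule are illustrative, and it omits the attention fusion module and the hybrid-embedding residual branch.

```python
# Illustrative sketch only: pyramid-based non-local fusion for deraining.
# Not the MSHFN code; all names, sizes, and the fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block: every spatial position attends to
    every other position, so repeated rain-streak patterns reinforce each
    other (self-similarity)."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inter = max(channels // reduction, 1)
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)        # (B, HW, C')
        k = self.phi(x).flatten(2)                          # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)            # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)                 # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w) # (B, C', H, W)
        return x + self.out(y)                              # residual connection


class PyramidFusion(nn.Module):
    """Coarse-to-fine fusion over a rain-image pyramid: non-local features from
    a coarser level are upsampled and merged with the next finer level."""

    def __init__(self, channels: int = 32, levels: int = 3):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList([NonLocalBlock(channels) for _ in range(levels)])
        self.fuse = nn.ModuleList([
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
            for _ in range(levels - 1)
        ])
        self.head = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, rain: torch.Tensor) -> torch.Tensor:
        # Build a downsampled pyramid of the rainy input (coarsest last).
        pyramid = [rain]
        for _ in range(len(self.blocks) - 1):
            pyramid.append(F.avg_pool2d(pyramid[-1], kernel_size=2))

        # Process the coarsest level, then fuse upward level by level.
        feat = self.blocks[-1](self.stem(pyramid[-1]))
        for level in reversed(range(len(self.blocks) - 1)):
            up = F.interpolate(feat, size=pyramid[level].shape[-2:],
                               mode="bilinear", align_corners=False)
            cur = self.blocks[level](self.stem(pyramid[level]))
            feat = self.fuse[level](torch.cat([cur, up], dim=1))

        # Predict the rain layer and subtract it (global residual learning).
        return rain - self.head(feat)


if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)       # dummy rainy image
    print(PyramidFusion()(x).shape)     # torch.Size([1, 3, 64, 64])
```

The subtraction in the last line reflects the common deraining formulation in which the network estimates the rain layer and removes it from the input; whether MSHFN uses exactly this decomposition is an assumption here.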

