Article

SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 129, Issue 10, Pages 2761-2785

Publisher

SPRINGER
DOI: 10.1007/s11263-021-01501-8

Keywords

Image fusion; Real time; Adaptive; Proportion; Squeeze decomposition

Abstract

This paper introduces a squeeze-and-decomposition network (SDNet) for real-time multi-modal and digital photography image fusion. By transforming fusion problems into the extraction and reconstruction of gradient and intensity information, and by introducing the idea of squeeze and decomposition into image fusion, the method outperforms state-of-the-art techniques in subjective visual quality and quantitative metrics across a variety of fusion tasks, while also being fast enough for real-time fusion.
In this paper, a squeeze-and-decomposition network (SDNet) is proposed to realize multi-modal and digital photography image fusion in real time. First, we transform multiple fusion problems into the extraction and reconstruction of gradient and intensity information, and accordingly design a universal loss function composed of an intensity term and a gradient term. For the gradient term, we introduce an adaptive decision block that chooses the optimization target of the gradient distribution according to the texture richness at the pixel scale, guiding the fused image to contain richer texture details. For the intensity term, we adjust the weight of each intensity loss to control the proportion of intensity information taken from the different source images, so that the loss adapts to multiple fusion tasks. Second, we introduce the idea of squeeze and decomposition into image fusion: we consider not only the squeeze process from the source images to the fused result, but also the decomposition process from the fused result back to the source images. Because the quality of the decomposed images depends directly on the fused result, this constraint forces the fused result to retain more scene details. Experimental results demonstrate the superiority of our method over state-of-the-art approaches in terms of subjective visual quality and quantitative metrics on a variety of fusion tasks. Moreover, our method is much faster than competing approaches and can handle real-time fusion tasks.
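To make the loss design in the abstract concrete, the following PyTorch sketch renders it under stated assumptions: an intensity term with per-source weights, a gradient term whose per-pixel target is chosen adaptively from whichever source shows richer texture, and a decomposition term that asks the sources to be recoverable from the fused result. The Sobel-based gradient operator, the weights w1, w2, lambda_grad, lambda_dec, and all function names are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of the SDNet-style loss described in the abstract (assumptions noted above).
# Images are assumed to be single-channel tensors of shape (N, 1, H, W).
import torch
import torch.nn.functional as F

def sobel_gradient(img):
    """Per-pixel gradient magnitude, used here as a proxy for texture richness."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def fusion_loss(fused, src1, src2, dec1, dec2,
                w1=0.5, w2=0.5, lambda_grad=1.0, lambda_dec=1.0):
    """Intensity term + adaptive gradient term + decomposition term.

    fused      : output of the squeeze (fusion) branch
    src1, src2 : the two source images
    dec1, dec2 : reconstructions of src1/src2 decomposed from `fused`
    """
    # Intensity term: the weights w1/w2 set the proportion of intensity
    # information taken from each source (chosen per fusion task).
    loss_int = w1 * F.mse_loss(fused, src1) + w2 * F.mse_loss(fused, src2)

    # Gradient term with a pixel-scale adaptive decision: the target gradient
    # at each pixel comes from whichever source has the larger gradient
    # magnitude (richer texture) at that location.
    g1, g2 = sobel_gradient(src1), sobel_gradient(src2)
    target_grad = torch.max(g1, g2)
    loss_grad = F.l1_loss(sobel_gradient(fused), target_grad)

    # Decomposition term: the sources decomposed back from the fused result
    # must match the originals, which forces the fused image to keep scene
    # details from both inputs.
    loss_dec = F.mse_loss(dec1, src1) + F.mse_loss(dec2, src2)

    return loss_int + lambda_grad * loss_grad + lambda_dec * loss_dec

A typical training step under these assumptions would compute fused = fuse_net(src1, src2) and dec1, dec2 = decompose_net(fused) (both networks hypothetical here) and minimize this combined loss end to end.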

Authors

Hao Zhang; Jiayi Ma
