Article

Multi-modal image fusion based on saliency guided in NSCT domain

Journal

IET IMAGE PROCESSING
Volume 14, Issue 13, Pages 3188-3201

Publisher

WILEY
DOI: 10.1049/iet-ipr.2019.1319

Keywords

quadtrees; transforms; image fusion; feature extraction; interpolation; NSCT domain; complementary information; multiple original images; robust features; discriminant model; saliency information; image fusion algorithm; invariant knowledge; crucial infrared features; saliency advertising phase congruency-based rule; local Laplacian energy-based rule; high-pass sub-bands fusion; fusion image; local features; global features; source image; multimodal image fusion; nonsubsampled contourlet transform; Bezier interpolation; quadtree decomposition

Funding

  1. National Natural Science Foundation of China [61702032, 61573057, 61771042, 61404130316, 61400010302]
  2. Fundamental Research Funds for the Central Universities [2017JBZ002]

Abstract

Image fusion aims to aggregate the redundant and complementary information of multiple source images; the most challenging aspect is to design robust features and a discriminant model that enhance saliency information in the fused image. To address this issue, the authors develop a novel image fusion algorithm that preserves the invariant knowledge of the multimodal images. Specifically, they formulate a unified architecture based on the non-subsampled contourlet transform (NSCT). Their method introduces quadtree decomposition and Bezier interpolation to extract crucial infrared features. Furthermore, they propose a saliency advertising phase congruency-based rule and a local Laplacian energy-based rule for low- and high-pass sub-band fusion, respectively. In this approach, the fused image not only combines the local and global features of the source images, avoiding smoothing of target edges, but also retains minor-scale details and resists the interference noise of the multi-modal images. Experimental results indicate that the proposed algorithm performs competitively in both objective evaluation criteria and visual quality.
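The local Laplacian energy-based rule for high-pass sub-bands can be sketched as follows. This is a minimal NumPy illustration of a generic "select the coefficient with larger local Laplacian energy" rule, not the authors' exact implementation; the 4-neighbour Laplacian, the 3x3 window, and the function names are illustrative assumptions.

```python
import numpy as np

def box_sum3(x):
    # 3x3 windowed sum via edge padding (window size is an assumption)
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def local_laplacian_energy(band):
    # Squared 4-neighbour discrete Laplacian, summed over a local window
    p = np.pad(band.astype(float), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return box_sum3(lap ** 2)

def fuse_highpass(band_a, band_b):
    # Keep, per coefficient, the sub-band with larger local Laplacian energy
    ea = local_laplacian_energy(band_a)
    eb = local_laplacian_energy(band_b)
    return np.where(ea >= eb, band_a, band_b)
```

In an NSCT-based pipeline, a rule of this kind would be applied to each pair of corresponding high-pass directional sub-bands before the inverse transform; the low-pass sub-bands would be fused by the phase congruency-based rule instead.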
