Article

Multimodal image fusion via coupled feature learning

Journal

SIGNAL PROCESSING
Volume 200, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.sigpro.2022.108637

Keywords

Multimodal image fusion; Coupled dictionary learning; Joint sparse representation; Multimodal medical imaging; Infrared images

Abstract

This paper presents a multimodal image fusion method using a novel decomposition model based on coupled dictionary learning. The proposed method is general and can be used for a variety of imaging modalities. In particular, the images to be fused are decomposed into correlated and uncorrelated components using sparse representations with identical supports and a Pearson correlation constraint, respectively. The resulting optimization problem is solved by an alternating minimization algorithm. Unlike other learning-based fusion methods, the proposed approach does not require any training data; the correlated features are extracted online from the data itself. By preserving the uncorrelated components in the fused images, the proposed fusion method significantly improves on current fusion approaches in terms of maintaining texture details and modality-specific information. The maximum-absolute-value rule is used for the fusion of the correlated components only. This leads to enhanced contrast resolution without causing intensity attenuation or loss of important information. Experimental results show that the proposed method achieves superior performance in both visual and objective evaluations compared to state-of-the-art image fusion methods. © 2022 The Author(s). Published by Elsevier B.V.
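As a rough illustration of the fusion step described in the abstract, the NumPy sketch below applies the maximum-absolute-value rule to the sparse codes of the correlated components and reconstructs a fused patch set. Everything here is an assumption for illustration: the variable names (Dc, Xc1, U1, ...), the patch-based layout, and in particular the averaging of the uncorrelated components (the abstract only states that these components are preserved; the paper's exact rule may differ). This is not the authors' implementation.

```python
import numpy as np

def fuse_coefficients(X1, X2):
    """Maximum-absolute-value rule: for each coefficient position,
    keep the entry with the larger magnitude (correlated parts only)."""
    return np.where(np.abs(X1) >= np.abs(X2), X1, X2)

def fuse_patches(Dc, Xc1, Xc2, U1, U2):
    """Reconstruct fused patches from:
      Dc        -- coupled (correlated-component) dictionary, shape (n, k)
      Xc1, Xc2  -- sparse codes of the correlated components, shape (k, m)
      U1, U2    -- uncorrelated components of each modality, shape (n, m)
    The correlated codes are fused with the max-abs rule; the uncorrelated
    residuals are carried over (averaged here, a placeholder choice) so that
    modality-specific detail is retained.
    """
    Xc_fused = fuse_coefficients(Xc1, Xc2)  # max-abs rule on correlated codes
    U_fused = 0.5 * (U1 + U2)               # assumption, not the paper's rule
    return Dc @ Xc_fused + U_fused

# Toy usage with random data standing in for learned dictionaries and codes
rng = np.random.default_rng(0)
n, k, m = 64, 128, 10                       # patch dim, atoms, patch count
Dc = rng.standard_normal((n, k))
Xc1, Xc2 = rng.standard_normal((k, m)), rng.standard_normal((k, m))
U1, U2 = rng.standard_normal((n, m)), rng.standard_normal((n, m))
print(fuse_patches(Dc, Xc1, Xc2, U1, U2).shape)  # (64, 10)
```

In the method itself, Dc and the sparse codes would come from the coupled dictionary learning and alternating-minimization decomposition described above, rather than from random data.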
