Article

Infrared and Visible Image Fusion via Decoupling Network

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIM.2022.3216413

Keywords

Force; Lead; Loss measurement; Hybrid power systems; Image fusion; Information exchange; Optimization; Decoupling network; hybrid loss; image fusion; information exchange; saliency map

Funding

  1. National Natural Science Foundation of China [61761045]
  2. Postgraduate Science Foundation of Yunnan University [2021Y257]


This article proposes a decoupling network-based method for infrared and visible image fusion that retains the texture details and luminance information of the source images while preserving high-contrast regions. Extensive experiments demonstrate that the proposed method generates fused images with salient objects and clear details, outperforming other state-of-the-art methods.
In general, the goal of existing infrared and visible image fusion (IVIF) methods is to make the fused image contain both the high-contrast regions of the infrared image and the texture details of the visible image. However, this definition causes the fused image to lose information from the visible image in high-contrast areas. To address this problem, this article proposes a decoupling network-based IVIF method (DNFusion), which uses decoupled maps to impose additional constraints on the network, forcing it to retain the saliency information of the source images effectively. The method satisfies the current definition of image fusion while effectively preserving the salient objects of the source images. Specifically, the feature interaction module (FIM) facilitates information exchange within the encoder and improves the utilization of complementary information. In addition, a hybrid loss function constructed from weight fidelity loss, gradient loss, and decoupling loss ensures that the generated fused image preserves the texture details and luminance information of the source images. Qualitative and quantitative comparisons in extensive experiments demonstrate that our model generates fused images containing salient objects and clear details of the source images, and that the proposed method outperforms other state-of-the-art (SOTA) methods.
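The three-term hybrid loss named in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed definitions, not the paper's actual formulation: the concrete forms of the fidelity, gradient, and decoupling terms, the use of a saliency map as the decoupled weighting, and the term weights are all assumptions made here for illustration.

```python
import numpy as np

def gradient_magnitude(img):
    # Simple finite-difference edge response (a stand-in for the Sobel
    # operator commonly used in fusion gradient losses).
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return gx + gy

def hybrid_loss(fused, ir, vis, sal, w_fid=1.0, w_grad=1.0, w_dec=1.0):
    """Hypothetical hybrid loss: weighted fidelity + gradient + decoupling.

    fused, ir, vis: HxW intensity arrays in [0, 1].
    sal: HxW saliency map in [0, 1], used here as the decoupled weighting.
    All weights and term definitions are assumptions, not the paper's.
    """
    # Weighted fidelity: keep intensities close to a saliency-weighted
    # mix of the infrared and visible inputs.
    target = sal * ir + (1.0 - sal) * vis
    fidelity = np.mean((fused - target) ** 2)
    # Gradient loss: preserve the stronger edge response of the two sources.
    grad_target = np.maximum(gradient_magnitude(ir), gradient_magnitude(vis))
    grad = np.mean(np.abs(gradient_magnitude(fused) - grad_target))
    # Decoupling loss: penalize losing visible-image information inside
    # high-saliency (high-contrast) regions -- the failure mode the
    # abstract describes.
    dec = np.mean(sal * np.abs(fused - vis))
    return w_fid * fidelity + w_grad * grad + w_dec * dec
```

Under these assumed terms, a fused image that matches the saliency-weighted target scores a lower loss than one that discards both sources, which is the behavior the additional decoupling constraint is meant to enforce.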

