Article

RFN-Nest: An end-to-end residual fusion network for infrared and visible images

Journal

INFORMATION FUSION
Volume 73, Issue -, Pages 72-86

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2021.02.023

Keywords

Image fusion; End-to-end network; Nest connection; Residual network; Infrared image; Visible image

Funding

  1. National Natural Science Foundation of China [62020106012, U1836218, 61672265]
  2. 111 Project of Ministry of Education of China [B12018]
  3. Engineering and Physical Sciences Research Council (EPSRC) [EP/N007743/1, EP/R018456/1]

Abstract

In the field of image fusion, designing deep learning-based fusion methods is challenging because an appropriate fusion strategy must be chosen for each specific task. The study introduces a novel end-to-end fusion network architecture that replaces the traditional fusion strategy with a residual network, proposes loss functions and a two-stage training strategy for learning it, and achieves performance superior to existing methods.
In the image fusion field, the design of deep learning-based fusion methods is far from routine. It is invariably fusion-task specific and requires careful consideration. The most difficult part of the design is choosing an appropriate strategy to generate the fused image for the specific task at hand. Thus, devising a learnable fusion strategy is a very challenging problem in the image fusion community. To address this problem, a novel end-to-end fusion network architecture (RFN-Nest) is developed for infrared and visible image fusion. We propose a residual fusion network (RFN), based on a residual architecture, to replace the traditional fusion approach. A novel detail-preserving loss function and a feature-enhancing loss function are proposed to train the RFN. The fusion model is learned with a novel two-stage training strategy. In the first stage, we train an auto-encoder based on an innovative nest connection (Nest) concept. In the second stage, the RFN is trained using the proposed loss functions. Experimental results on public-domain data sets show that, in both subjective and objective evaluation, our end-to-end fusion network outperforms the state-of-the-art methods. The code of our fusion method is available at https://github.com/hli1221/imagefusion-rfn-nest.
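The two-stage strategy described in the abstract can be illustrated with a minimal numpy sketch: stage 1 fits an auto-encoder for reconstruction, and stage 2 freezes the encoder/decoder and trains only a residual fusion step on the encoded features. Everything here is a toy assumption for illustration — the linear "encoder"/"decoder", the element-wise-maximum surrogate for the feature-enhancing target, and the learning rate are not the paper's actual Nest architecture, loss functions, or hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins (illustrative shapes, not the paper's architecture).
W_enc = rng.normal(scale=0.1, size=(16, 8))   # "encoder": 16-d image -> 8-d features
W_rfn = np.zeros((8, 8))                      # residual fusion weights (trained in stage 2)

def encode(x):
    return x @ W_enc

def fuse(f_ir, f_vis, W):
    # Residual fusion: element-wise mean of the two feature maps,
    # plus a learned residual correction.
    base = 0.5 * (f_ir + f_vis)
    return base + base @ W

# ---- Stage 1: fit the decoder so encode -> decode reconstructs images ----
X = rng.normal(size=(200, 16))                        # surrogate training images
W_dec = np.linalg.lstsq(encode(X), X, rcond=None)[0]  # least-squares "decoder"

def decode(f):
    return f @ W_dec

# ---- Stage 2: freeze encoder/decoder, train only the fusion step ----
ir = rng.normal(size=(1, 16))                 # surrogate infrared image
vis = rng.normal(size=(1, 16))                # surrogate visible image
f_ir, f_vis = encode(ir), encode(vis)
target = np.maximum(f_ir, f_vis)              # surrogate "feature enhancing" target
lr = 0.1
losses = []
for _ in range(100):
    f_fused = fuse(f_ir, f_vis, W_rfn)
    err = f_fused - target
    losses.append(float((err ** 2).sum()))
    base = 0.5 * (f_ir + f_vis)
    W_rfn -= lr * 2.0 * base.T @ err          # gradient step on the squared error

fused_image = decode(fuse(f_ir, f_vis, W_rfn))
print(fused_image.shape, losses[0], losses[-1])
```

The point of the sketch is the separation of concerns: reconstruction quality is fixed in stage 1, so stage 2 can optimize the fusion behavior alone without disturbing the encoder/decoder.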

