Article

Infrared and visible image fusion via parallel scene and texture learning

Journal

PATTERN RECOGNITION
Volume 132, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.108929

Keywords

Image fusion; Infrared; Scene and texture learning; Recurrent neural network

Funding

  1. National Natural Science Foundation of China [61773295]

Image fusion plays a crucial role in computer vision tasks, but existing methods fail to fully extract and integrate features from source images. This paper proposes a parallel scene and texture learning method for infrared and visible image fusion, using two branches of deep neural networks to extract different features and reconstruct the fused image. Experimental results demonstrate significant improvements in qualitative and quantitative evaluations, with superior fused results in object detection tasks.
Image fusion plays a pivotal role in numerous high-level computer vision tasks. Existing deep learning-based image fusion methods usually leverage an implicit manner to achieve feature extraction, which can cause some characteristics of source images, e.g., contrast and structural information, to not be fully extracted and integrated into the fused images. In this work, we propose an infrared and visible image fusion method via parallel scene and texture learning. Our key objective is to deploy two branches of deep neural networks, namely the content branch and the detail branch, to synchronously extract different characteristics from source images and then reconstruct the fused image. The content branch focuses primarily on coarse-grained information and is deployed to estimate the global content of source images. The detail branch primarily pays attention to fine-grained information, and we design an omnidirectional spatially variant recurrent neural network in this branch to model the internal structure of source images more accurately and extract texture-related features in an explicit manner. Extensive experiments show that our approach achieves significant improvements over the state of the art in qualitative and quantitative evaluations with comparatively less running time. Meanwhile, we also demonstrate the superiority of our fused results in the object detection task. Our code is available at: https://github.com/Melon-Xu/PSTLFusion. (c) 2022 Elsevier Ltd. All rights reserved.
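The two-branch idea in the abstract — a content branch estimating coarse global structure and a detail branch selecting fine texture — can be illustrated with a much simpler stand-in. The sketch below is not the paper's method (which uses learned networks and a spatially variant recurrent model); it replaces both branches with a hand-crafted box filter, fuses the coarse components by averaging and the fine components by per-pixel max-absolute selection, and is intended only to show the parallel decompose-fuse-reconstruct pattern. All function names here are illustrative.

```python
import numpy as np

def box_blur(img, k=7):
    """Coarse 'content' estimate via a box filter (a stand-in for a learned content branch)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse(ir, vis, k=7):
    """Parallel two-branch fusion sketch:
    - content branch: average the coarse estimates of both inputs;
    - detail branch: keep, per pixel, the stronger high-frequency residual.
    """
    ir, vis = ir.astype(float), vis.astype(float)
    ir_base, vis_base = box_blur(ir, k), box_blur(vis, k)
    content = 0.5 * (ir_base + vis_base)              # fused coarse content
    d_ir, d_vis = ir - ir_base, vis - vis_base        # fine 'texture' residuals
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return content + detail                           # reconstruct the fused image
```

Note that fusing an image with itself returns the image unchanged (the residual exactly cancels the blur), which is a quick sanity check that the decomposition loses nothing.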
