Article

Dual-Attention-Based Feature Aggregation Network for Infrared and Visible Image Fusion

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIM.2023.3259021

Keywords

Attention mechanisms; feature aggregation; image fusion

Abstract

Infrared and visible image fusion aims to produce fused images that retain the rich texture details and high pixel intensities of the source images. In this article, we propose a dual-attention-based feature aggregation network for infrared and visible image fusion. Specifically, we first design a multibranch channel-attention-based feature aggregation block (MBCA) that generates multiple branches to suppress useless features from different aspects. This block is also able to adaptively aggregate meaningful features by exploiting the interdependencies between channel features. To gather more meaningful features during the fusion process, we further design a global-local spatial-attention-based feature aggregation block (GLSA) for progressively integrating the features of the source images. After that, we introduce multiscale structural similarity (MS-SSIM) as a loss function to evaluate the structural differences between the fused image and the source images at multiple scales. In addition, the proposed network exhibits strong generalization ability, since our fusion model is trained on the RoadScene dataset and tested directly on the TNO and MSRS datasets. Extensive experiments on these datasets demonstrate the superiority of our network compared with current state-of-the-art methods. The source code will be released at https://github.com/tangjunyang/Dualattention.
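
The abstract only names the building blocks, so the following is a minimal PyTorch sketch of the general multibranch channel-attention idea (in the spirit of CBAM-style channel attention), not the paper's actual MBCA block; the class name, the choice of pooling branches, and the reduction ratio are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiBranchChannelAttention(nn.Module):
    """Illustrative multibranch channel attention (NOT the paper's MBCA).

    Two pooling branches summarize each channel differently (global
    average vs. global max); a shared bottleneck MLP models channel
    interdependencies, and the merged branch outputs form a per-channel
    gate that rescales the input features.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # branch 1: average statistics
        self.max_pool = nn.AdaptiveMaxPool2d(1)  # branch 2: max statistics
        self.mlp = nn.Sequential(                # shared across both branches
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the branch descriptors, squash to (0, 1), and reweight channels.
        weights = self.gate(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * weights


if __name__ == "__main__":
    feats = torch.randn(2, 64, 128, 128)                  # dummy feature map
    out = MultiBranchChannelAttention(channels=64)(feats)
    print(out.shape)                                      # torch.Size([2, 64, 128, 128])
```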
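
Likewise, the abstract states that MS-SSIM compares the fused image against both sources at multiple scales but does not give the exact form of the loss. A hedged sketch, assuming [0, 1]-normalized single-channel inputs, an equal weighting of the two source terms, and the third-party pytorch-msssim package (pip install pytorch-msssim), might look like:

```python
import torch
from pytorch_msssim import ms_ssim  # third-party MS-SSIM implementation


def fusion_ms_ssim_loss(fused: torch.Tensor,
                        ir: torch.Tensor,
                        vis: torch.Tensor,
                        w_ir: float = 0.5,
                        w_vis: float = 0.5) -> torch.Tensor:
    """Penalize multiscale structural dissimilarity between the fused
    image and each source image. The 0.5/0.5 weights are an assumption,
    not values taken from the paper."""
    loss_ir = 1.0 - ms_ssim(fused, ir, data_range=1.0)
    loss_vis = 1.0 - ms_ssim(fused, vis, data_range=1.0)
    return w_ir * loss_ir + w_vis * loss_vis


if __name__ == "__main__":
    # MS-SSIM with the default 5 scales needs inputs larger than 160 px.
    fused = torch.rand(1, 1, 256, 256)
    ir, vis = torch.rand_like(fused), torch.rand_like(fused)
    print(fusion_ms_ssim_loss(fused, ir, vis).item())
```

Minimizing this term pushes the fused image toward the structure of both sources; fusion losses often add an intensity or gradient term as well, but that is beyond what the abstract specifies.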

