Journal
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
Volume 71
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIM.2022.3191664
Keywords
Deep learning; feature normalization; image fusion; self-attention mechanism; Swin Transformer
Funding
- Fundamental Research Program of Shanxi Province [201901D111260]
- Open Foundation of Shanxi Key Laboratory of Signal Capturing and Processing [ISPT2020-4]
This study proposes a residual Swin Transformer fusion network, called SwinFuse, for infrared and visible image fusion. The network models long-range dependencies with a fully attentional feature-encoding backbone and applies a novel feature fusion strategy based on the L1-norm. Experimental results demonstrate that SwinFuse achieves impressive fusion performance, generalization ability, and computational efficiency.
Existing deep learning fusion methods concentrate mainly on convolutional neural networks (CNNs), and few attempts have been made with Transformers. Moreover, convolution is a content-independent interaction between the image and the convolution kernel, which may lose important context and thereby limit fusion performance. To this end, we present a simple and strong fusion baseline for infrared and visible images: a residual Swin Transformer fusion network, termed SwinFuse. SwinFuse comprises three parts: global feature extraction, a fusion layer, and feature reconstruction. In particular, we build a fully attentional feature-encoding backbone to model long-range dependencies; it is a pure Transformer network with stronger representation ability than CNNs. Furthermore, we design a novel feature fusion strategy based on the L1-norm of sequence matrices, measuring the corresponding activity levels along the row and column vector dimensions, which retains competitive infrared brightness and distinct visible details. Finally, we compare SwinFuse with nine state-of-the-art traditional and deep learning methods on three different datasets through subjective observation and objective metrics. The experimental results show that the proposed SwinFuse achieves strong fusion performance, strong generalization ability, and competitive computational efficiency. The code will be available at https://github.com/Zhishe-Wang/SwinFuse.
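To make the L1-norm fusion idea concrete, here is a minimal NumPy sketch of activity-level weighting for two token-sequence feature matrices (N tokens x C channels). This illustrates only the general principle of L1-norm-based activity measurement; the paper's exact row-and-column formulation and the `l1_activity_fusion` helper name are assumptions, not the authors' released implementation.

```python
import numpy as np

def l1_activity_fusion(phi_ir, phi_vi):
    """Fuse two sequence feature matrices by L1-norm activity weighting.

    phi_ir, phi_vi: arrays of shape (N, C) - token features from the
    infrared and visible branches. Returns the fused (N, C) matrix.
    """
    # Per-token activity level: L1 norm over the channel dimension.
    act_ir = np.abs(phi_ir).sum(axis=1, keepdims=True)
    act_vi = np.abs(phi_vi).sum(axis=1, keepdims=True)
    # Normalize activities into per-token fusion weights.
    w_ir = act_ir / (act_ir + act_vi + 1e-12)
    w_vi = 1.0 - w_ir
    # Tokens with higher L1 activity contribute more to the fusion.
    return w_ir * phi_ir + w_vi * phi_vi

# Toy example: token 0 is more active in phi_ir, token 1 in phi_vi.
phi_ir = np.array([[3.0, 0.0], [0.0, 1.0]])
phi_vi = np.array([[1.0, 0.0], [0.0, 3.0]])
fused = l1_activity_fusion(phi_ir, phi_vi)
```

In this toy case each token's fused value leans toward the branch with the larger L1 activity, which is how such a strategy preserves bright infrared targets while keeping detailed visible regions.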