Journal
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
Volume: 70, Issue: -, Pages: -
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIM.2021.3056645
Keywords
Deep learning; disentangled representation; image fusion; infrared; visible
Funding
- National Natural Science Foundation of China [61773295]
- Natural Science Foundation of Hubei Province [2019CFA037]
In this article, a novel decomposition method for visible and infrared image fusion (DRF) is proposed. It disentangles images into scene-related and sensor modality-related representations, applies a different fusion strategy to each, and achieves performance comparable to the state of the art in both visual effect and quantitative metrics.
In this article, we propose a novel decomposition method that applies disentangled representation to visible and infrared image fusion (DRF). Guided by the imaging principle, we decompose each image according to the source of its information. More concretely, we disentangle the images into scene-related and sensor modality (attribute)-related representations through the corresponding encoders. In this way, the unique information defined by the attribute-related representation is closer to the information captured by each type of sensor individually, which alleviates the problem of inappropriately extracted unique information. Different strategies are then applied to fuse these different types of representations. Finally, the fused representations are fed into the pretrained generator to produce the fusion result. Qualitative and quantitative experiments on the publicly available TNO and RoadScene datasets demonstrate that our DRF performs comparably to the state of the art in terms of both visual effect and quantitative metrics.
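The abstract describes a three-stage pipeline: two encoders disentangle each image into a scene representation and a modality (attribute) representation, the two representation types are fused with different strategies, and a pretrained generator reconstructs the fused image. The sketch below shows that data flow only; the real encoders, fusion rules, and generator are trained networks not specified here, so the functions (`scene_encoder`, `attr_encoder`, `generator`) and the averaging/weighted-sum fusion strategies are stand-in assumptions, not the paper's actual method.

```python
import numpy as np

def scene_encoder(img):
    # Stand-in for the learned scene encoder (assumption):
    # here, the zero-mean structure of the image.
    return img - img.mean()

def attr_encoder(img):
    # Stand-in for the learned modality/attribute encoder (assumption):
    # here, a constant map carrying the image's global intensity.
    return np.full_like(img, img.mean())

def generator(scene_rep, attr_rep):
    # Stand-in for the pretrained generator: recombines the fused
    # representations into an image.
    return scene_rep + attr_rep

def drf_fuse(vis, ir, w_attr=0.5):
    # Different fusion strategies per representation type (assumed):
    # scene representations are averaged, attribute representations
    # are combined with a modality weight w_attr.
    scene = 0.5 * (scene_encoder(vis) + scene_encoder(ir))
    attr = w_attr * attr_encoder(ir) + (1.0 - w_attr) * attr_encoder(vis)
    return generator(scene, attr)
```

With `vis = np.ones((4, 4))` and `ir = np.zeros((4, 4))`, both scene codes are zero, the attribute codes carry the mean intensities 1 and 0, and the fused result is a uniform 0.5 image, illustrating how the attribute branch controls the modality balance.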
Authors