Journal
INFORMATION FUSION
Volume 76, Pages 189-203
Publisher
ELSEVIER
DOI: 10.1016/j.inffus.2021.06.002
Keywords
Infrared and visible image fusion; Low-quality information enhancement; Self-supervision; Feature adaption
Funding
- National Natural Science Foundation of China [62001450, 61801077]
This paper proposes a novel self-supervised feature adaption framework for infrared and visible image fusion, which retains vital information by reconstructing the source images and improves the fusion method's robustness; experimental results demonstrate superior performance.
Benefitting from the strong feature extraction capability of deep learning, infrared and visible image fusion has made great progress. Since infrared and visible images are obtained by different sensors with different imaging mechanisms, a domain discrepancy exists between them, which becomes a stumbling block for effective fusion. In this paper, we propose a novel self-supervised feature adaption framework for infrared and visible image fusion. We implement a self-supervised strategy that encourages the backbone network to extract adapted features while retaining vital information by reconstructing the source images. Specifically, we first adopt an encoder network to extract adapted features. Then, two decoders with attention mechanism blocks are used to reconstruct the source images in a self-supervised way, forcing the adapted features to contain the vital information of the source images. Further, considering the case in which the source images contain low-quality information, we design a novel infrared and visible image fusion and enhancement model, improving the fusion method's robustness. Experiments are conducted to evaluate the proposed method qualitatively and quantitatively, showing that it achieves state-of-the-art performance compared with existing infrared and visible image fusion methods. Results are available at https://github.com/zhoafan/SFA-Fuse.
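The self-supervised reconstruction objective described in the abstract, where a shared encoder feeds two modality-specific decoders that must each recover their source image, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual networks: the linear encoder/decoders, patch dimensions, and plain MSE loss are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the framework's networks: a shared encoder and
# two modality-specific decoders, modelled here as fixed linear maps.
W_enc = rng.standard_normal((64, 16)) * 0.1      # encoder: 64-dim patch -> 16-dim feature
W_dec_ir = rng.standard_normal((16, 64)) * 0.1   # decoder reconstructing the infrared input
W_dec_vis = rng.standard_normal((16, 64)) * 0.1  # decoder reconstructing the visible input

def encode(x):
    """Extract adapted features (ReLU keeps the sketch nonlinear)."""
    return np.maximum(x @ W_enc, 0.0)

def reconstruction_loss(x, W_dec):
    """Decode the adapted features and score them against the source image."""
    x_hat = encode(x) @ W_dec
    return float(np.mean((x_hat - x) ** 2))  # pixel-wise MSE: the self-supervision signal

# Flattened 8x8 patches standing in for infrared / visible source images.
ir = rng.standard_normal((32, 64))
vis = rng.standard_normal((32, 64))

# Total self-supervised objective: both decoders must recover their own
# source image from the same adapted feature space, forcing the shared
# features to retain the vital information of both modalities.
loss = reconstruction_loss(ir, W_dec_ir) + reconstruction_loss(vis, W_dec_vis)
print(loss)
```

In the paper this loss would be minimized over the encoder and decoder parameters; here the weights are fixed purely to show how the two reconstruction terms share one encoder.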