Article

Deep Convolutional Neural Network for Multi-Modal Image Restoration and Fusion

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2020.2984244

Keywords

Image fusion; Task analysis; Image restoration; Convolutional codes; Image reconstruction; Convolutional neural networks; Image coding; Multi-modal image restoration; Multi-modal convolutional sparse coding

Funding

  1. CSA-Imperial Scholarship


In this paper, a novel deep convolutional neural network is proposed to address general multi-modal image restoration and fusion problems, drawing inspiration from a new multi-modal convolutional sparse coding model. The proposed CU-Net architecture automatically separates common from unique information and consists of three modules: unique feature extraction, common feature preservation, and image reconstruction. Extensive numerical results validate the effectiveness of the method on tasks such as RGB-guided depth image super-resolution and multi-focus image fusion.
In this paper, we propose a novel deep convolutional neural network to solve the general multi-modal image restoration (MIR) and multi-modal image fusion (MIF) problems. Unlike other deep-learning-based methods, our network architecture is designed by drawing inspiration from a newly proposed multi-modal convolutional sparse coding (MCSC) model. The key feature of the proposed network is that it can automatically separate the common information shared among different modalities from the information unique to each individual modality; it is therefore denoted CU-Net, i.e., the common and unique information splitting network. Specifically, the CU-Net is composed of three modules: the unique feature extraction module (UFEM), the common feature preservation module (CFPM), and the image reconstruction module (IRM). The architecture of each module is derived from the corresponding part of the MCSC model and consists of several learned convolutional sparse coding (LCSC) blocks. Extensive numerical results verify the effectiveness of our method on a variety of MIR and MIF tasks, including RGB-guided depth image super-resolution, flash-guided non-flash image denoising, and multi-focus and multi-exposure image fusion.
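To give a concrete picture of how the three modules described in the abstract fit together, the following Python (PyTorch-style) sketch lays out the UFEM/CFPM/IRM structure. It is an illustration only: the class names LCSCBlock and CUNetSketch, the soft-thresholded ISTA-style update, and all channel widths, kernel sizes, and iteration counts are assumptions made for this example, not details taken from the paper.

import torch
import torch.nn as nn


class LCSCBlock(nn.Module):
    # One learned convolutional sparse coding (LCSC) step, written here as an
    # unrolled ISTA-style update with a learnable soft threshold (assumed form).
    def __init__(self, channels=64):
        super().__init__()
        self.analysis = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.synthesis = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.threshold = nn.Parameter(torch.full((1, channels, 1, 1), 0.01))

    def forward(self, code, features):
        # Gradient-style correction followed by soft shrinkage (proximal step).
        residual = code - self.analysis(self.synthesis(code) - features)
        return torch.sign(residual) * torch.clamp(residual.abs() - self.threshold, min=0.0)


class CUNetSketch(nn.Module):
    # Three-module layout from the abstract: unique feature extraction (UFEM),
    # common feature preservation (CFPM), and image reconstruction (IRM).
    def __init__(self, channels=64, iterations=4):
        super().__init__()
        self.embed_a = nn.Conv2d(1, channels, kernel_size=3, padding=1)  # target modality (e.g., degraded depth)
        self.embed_b = nn.Conv2d(1, channels, kernel_size=3, padding=1)  # guidance modality (e.g., RGB luminance)
        self.ufem_a = nn.ModuleList(LCSCBlock(channels) for _ in range(iterations))
        self.ufem_b = nn.ModuleList(LCSCBlock(channels) for _ in range(iterations))
        self.cfpm = nn.ModuleList(LCSCBlock(2 * channels) for _ in range(iterations))
        self.irm = nn.Conv2d(4 * channels, 1, kernel_size=3, padding=1)  # recombine unique and common codes

    def forward(self, x_a, x_b):
        feat_a, feat_b = self.embed_a(x_a), self.embed_b(x_b)
        code_a, code_b = torch.zeros_like(feat_a), torch.zeros_like(feat_b)
        for block_a, block_b in zip(self.ufem_a, self.ufem_b):
            code_a = block_a(code_a, feat_a)  # unique code of modality A
            code_b = block_b(code_b, feat_b)  # unique code of modality B
        feat_common = torch.cat([feat_a, feat_b], dim=1)
        code_common = torch.zeros_like(feat_common)
        for block in self.cfpm:
            code_common = block(code_common, feat_common)  # shared (common) code
        return self.irm(torch.cat([code_a, code_b, code_common], dim=1))

In the paper itself, each module's architecture is derived from the corresponding part of the MCSC model; the sketch above only mirrors that organization at the module level, for instance in an RGB-guided depth super-resolution setting where both inputs have been brought to the same spatial resolution.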


Reviews

Primary Rating: 4.8 (not enough ratings)
Secondary Ratings (Novelty, Significance, Scientific rigor): not yet rated
