Article

Image fusion with morphological component analysis

Journal

INFORMATION FUSION
Volume 18, Pages 107-118

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2013.06.001

Keywords

Multiscale transform; Morphological component analysis; Sparse representation; Image fusion; Multi-component fusion

Funding

  1. National Natural Science Foundation of China (NSFC) [61071162]


Image fusion can produce a single image that describes the scene better than any individual source image. One of the keys to an image fusion algorithm is how to effectively and completely represent the source images. Morphological component analysis (MCA) assumes that an image contains structures with different spatial morphologies and can accordingly be modeled as a superposition of cartoon and texture components, and that sparse representations of these components can be obtained by specific decomposition algorithms that exploit a structured dictionary. Compared with traditional multi-scale decomposition, which has been successfully applied to pixel-level image fusion, MCA exploits the morphological diversity of an image and provides a more complete representation of it. Taking advantage of this property, we propose a multi-component fusion method for multi-source images. In our method, the source images are separated into cartoon and texture components, and the essential fusion takes place on the representation coefficients of these two components. Our fusion scheme is verified on three kinds of images and compared with six single-component fusion methods. According to the visual perception and objective evaluation of the fused results, our method produces better fused images in our experiments than the other single-component fusion methods. (C) 2013 Elsevier B.V. All rights reserved.
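
The sketch below illustrates the multi-component fusion flow described in the abstract: each source image is split into a cartoon and a texture component, the components are fused separately, and the fused components are recombined. It is a minimal assumption-laden illustration, not the paper's algorithm: the MCA decomposition (which relies on structured dictionaries) is replaced by a simple low-pass/residual split, and the fusion rules (averaging cartoons, max-absolute selection on textures) are placeholder choices.

```python
# Illustrative sketch only -- NOT the paper's method.
# The cartoon/texture split here is a low-pass/residual stand-in for MCA,
# and the fusion rules are assumed, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter


def split_cartoon_texture(img, sigma=3.0):
    """Placeholder decomposition: treat the low-pass part as the 'cartoon'
    component and the residual as the 'texture' component."""
    cartoon = gaussian_filter(img, sigma)
    texture = img - cartoon
    return cartoon, texture


def fuse_multicomponent(img_a, img_b):
    """Fuse two registered source images component-wise."""
    ca, ta = split_cartoon_texture(img_a)
    cb, tb = split_cartoon_texture(img_b)
    # Cartoon components: simple average (assumed rule).
    cartoon_fused = 0.5 * (ca + cb)
    # Texture components: keep the coefficient with larger magnitude (assumed rule).
    texture_fused = np.where(np.abs(ta) >= np.abs(tb), ta, tb)
    # Recombine the fused components into a single image.
    return cartoon_fused + texture_fused


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = rng.random((128, 128))
    fused = fuse_multicomponent(a, b)
    print(fused.shape)
```

The component-wise structure mirrors the abstract: fusion operates on the cartoon and texture representations separately rather than on the raw pixels of a single-component decomposition.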
