Article

Image fusion based on shift invariant shearlet transform and stacked sparse autoencoder

Journal

Publisher

SAGE PUBLICATIONS LTD
DOI: 10.1177/1748301817741001

Keywords

Image fusion; stacked sparse autoencoder; shift invariant shearlet transform; feature extraction

Funding

  1. National Natural Science Foundation of P. R. China [61772237, BK20151358, BK20151202]
  2. Ministry of Housing and Urban-rural Development of the People's Republic of China [2015-K8-035]
  3. Fundamental Research Funds for the Central Universities [JUSRP51618B]
  4. Ministry of Education [6141A02033312]


The stacked sparse autoencoder is an efficient unsupervised feature extraction method with an excellent ability to represent complex data. In addition, the shift invariant shearlet transform is a state-of-the-art multiscale decomposition tool that outperforms traditional tools in many respects. Motivated by these advantages, a novel image fusion method based on the stacked sparse autoencoder and the shift invariant shearlet transform is proposed. First, the source images are decomposed into low- and high-frequency subbands by the shift invariant shearlet transform; second, a two-layer stacked sparse autoencoder is adopted as a feature extraction method to obtain a deep, sparse representation of the high-frequency subbands; third, a stacked sparse autoencoder feature-based choose-max fusion rule is proposed to fuse the high-frequency subband coefficients; then, a weighted average fusion rule is adopted to merge the low-frequency subband coefficients; finally, the fused image is obtained by the inverse shift invariant shearlet transform. Experimental results show that the proposed method is superior to conventional methods in both subjective and objective evaluations.
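The two fusion rules named in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the shearlet decomposition is assumed to have already produced subband arrays, and a windowed sum of absolute coefficients stands in for the paper's SSAE-derived activity features in the choose-max rule. All function and parameter names here are illustrative.

```python
import numpy as np

def fuse_highpass_choose_max(hA, hB, win=3):
    """Choose-max rule for high-frequency subbands: at each location, keep
    the coefficient from the source whose local activity is larger.
    (Local activity here is a windowed sum of |coefficients|, a simplified
    stand-in for the SSAE features used in the paper.)"""
    pad = win // 2

    def activity(h):
        hp = np.pad(np.abs(h), pad, mode="reflect")
        out = np.zeros_like(h, dtype=float)
        # sliding-window sum of absolute coefficients
        for dy in range(win):
            for dx in range(win):
                out += hp[dy:dy + h.shape[0], dx:dx + h.shape[1]]
        return out

    mask = activity(hA) >= activity(hB)
    return np.where(mask, hA, hB)

def fuse_lowpass_weighted(lA, lB, wA=0.5):
    """Weighted-average rule for the low-frequency subbands."""
    return wA * lA + (1.0 - wA) * lB
```

In the full pipeline these rules would be applied per subband, after which the fused subbands are passed to the inverse shift invariant shearlet transform to reconstruct the fused image.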

Authors


Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommended

No Data Available