Article

Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion

Journal

MACHINE VISION AND APPLICATIONS
Volume 33, Issue 5

Publisher

SPRINGER
DOI: 10.1007/s00138-022-01322-w

Keywords

Adaptive learning dictionary; Sparse representation; Multi-modality; Image fusion; Fusion metric

Funding

  1. Scientific and Technological Project of Henan Province, China [202102310536]
  2. Open Project Program of the Third Affiliated Hospital of Xinxiang Medical University [KFKTYB202109]

Abstract

This study proposes an improved multi-modality image fusion method that combines a joint patch clustering-based adaptive dictionary with sparse representation to address the gray inconsistency caused by the maximum L1-norm fusion rule. Quantitative evaluation and comparative experiments show that the method outperforms competing approaches in fusion metrics, image quality, and edge preservation.
For image fusion methods based on sparse representation, the adaptive dictionary and the fusion rule strongly influence the multi-modality fusion result, and the maximum L1-norm fusion rule may cause gray inconsistency in the fused image. To solve this problem, we proposed an improved multi-modality image fusion method that combines a joint patch clustering-based adaptive dictionary with sparse representation. First, we used a Gaussian filter to separate the high- and low-frequency information. Second, we adopted a local energy-weighted strategy to fuse the low-frequency components. Third, we used the joint patch clustering algorithm to construct an over-complete adaptive learning dictionary, designed a hybrid fusion rule based on the multi-norm similarity of the sparse representation coefficients, and fused the high-frequency components. Last, we obtained the fusion result by transforming the frequency-domain representation back into the spatial domain. We evaluated the fusion results quantitatively with fusion metrics and demonstrated the superiority of the proposed method by comparing it with state-of-the-art image fusion methods. The results showed that the method achieves the highest scores in average gradient, general image quality, and edge preservation, and also gives the best subjective visual quality. An analysis of the influence of the parameters on the fusion result and of the running time showed that the method is robust. We also successfully extended the method to infrared-and-visible image fusion and multi-focus image fusion. In summary, the method offers good robustness and wide applicability.
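To make the pipeline described in the abstract concrete, the following Python sketch follows the same four steps: Gaussian low/high-frequency separation, local energy-weighted low-frequency fusion, patch-wise sparse coding of the high-frequency band with a similarity-driven hybrid rule, and recombination in the spatial domain. It is only a rough approximation of the paper's method: the joint patch clustering-based adaptive dictionary is replaced by scikit-learn's MiniBatchDictionaryLearning, the hybrid multi-norm rule is reduced to an L1-norm similarity test, and the patch size, Gaussian sigma, dictionary size, sparsity, window size, and similarity threshold are all assumed values not taken from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)
    from sklearn.linear_model import orthogonal_mp

    PATCH = 8         # assumed patch size
    SIGMA = 2.0       # assumed Gaussian sigma for the low/high split
    N_ATOMS = 128     # assumed dictionary size
    SPARSITY = 5      # assumed non-zero coefficients per patch
    SIM_THRESH = 0.7  # assumed similarity threshold of the hybrid rule

    def decompose(img):
        """Split an image into low- and high-frequency parts with a Gaussian filter."""
        img = img.astype(np.float64)
        low = gaussian_filter(img, SIGMA)
        return low, img - low

    def fuse_low(low_a, low_b, win=7):
        """Local energy-weighted fusion of the low-frequency components."""
        ea = uniform_filter(low_a ** 2, size=win)  # local energy in a win x win window
        eb = uniform_filter(low_b ** 2, size=win)
        wa = ea / (ea + eb + 1e-12)
        return wa * low_a + (1.0 - wa) * low_b

    def fuse_high(high_a, high_b):
        """Sparse-representation fusion of the high-frequency components.

        The dictionary is learned from the pooled patches of both inputs
        (a stand-in for the joint patch clustering-based adaptive dictionary),
        and the hybrid rule is approximated by an L1-norm similarity test
        between the two coefficient vectors of each patch pair.
        """
        pa = extract_patches_2d(high_a, (PATCH, PATCH)).reshape(-1, PATCH * PATCH)
        pb = extract_patches_2d(high_b, (PATCH, PATCH)).reshape(-1, PATCH * PATCH)

        dico = MiniBatchDictionaryLearning(n_components=N_ATOMS, alpha=1.0)
        D = dico.fit(np.vstack([pa, pb])).components_            # atoms as rows

        ca = orthogonal_mp(D.T, pa.T, n_nonzero_coefs=SPARSITY)  # one coefficient column per patch
        cb = orthogonal_mp(D.T, pb.T, n_nonzero_coefs=SPARSITY)

        l1a, l1b = np.abs(ca).sum(axis=0), np.abs(cb).sum(axis=0)
        sim = np.minimum(l1a, l1b) / (np.maximum(l1a, l1b) + 1e-12)

        # Hybrid rule (assumed form): L1-weighted average when the two coefficient
        # vectors have similar activity, max-L1 selection when they do not.
        cf = np.where(sim > SIM_THRESH,
                      (l1a * ca + l1b * cb) / (l1a + l1b + 1e-12),
                      np.where(l1a >= l1b, ca, cb))

        fused = (D.T @ cf).T.reshape(-1, PATCH, PATCH)
        return reconstruct_from_patches_2d(fused, high_a.shape)

    def fuse(img_a, img_b):
        """Full pipeline: decompose, fuse each band, recombine in the spatial domain."""
        low_a, high_a = decompose(img_a)
        low_b, high_b = decompose(img_b)
        return fuse_low(low_a, low_b) + fuse_high(high_a, high_b)

Calling fuse(img_a, img_b) on two registered single-channel arrays of the same size (for example, a CT and an MR slice) returns the fused image. Because every overlapping patch is extracted and sparse-coded, this sketch is practical only for small test images; it is meant to illustrate the structure of the method, not its performance.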

