Article

Improved Procedure for Multi-Focus Image Quality Enhancement Using Image Fusion with Rules of Texture Energy Measures in the Hybrid Wavelet Domain

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 4

Publisher

MDPI
DOI: 10.3390/app13042138

Keywords

multi-focus image fusion (MIF); SWT; DTCWT; TEM; quality evaluation metrics; image quality


Feature extraction is the process of collecting from a given source the detailed information needed for further analysis. The quality of a fused image depends on many parameters, particularly directional selectivity and shift invariance. Traditional wavelet-based transforms produce ringing distortions and artifacts owing to their poor directionality and lack of shift invariance. The Dual-Tree Complex Wavelet Transform (DTCWT) combined with the Stationary Wavelet Transform (SWT), as a hybrid wavelet fusion algorithm, overcomes the deficiencies of traditional wavelet-based fusion and preserves directional selectivity and shift invariance. SWT decomposes each source image into approximate and detailed sub-bands, and the approximate sub-bands are then decomposed further with the DTCWT. From this decomposition, the low-frequency components are fused using Texture Energy Measures (TEM), while the high-frequency components are fused with the absolute-maximum rule; the detailed sub-bands are likewise fused with the absolute-maximum rule. The texture energy rules classify the image content effectively and improve the accuracy of the fused output. Finally, the inverse SWT is applied to generate the fused image. Experimental results show that the proposed approach outperforms previously reported methods. This paper therefore proposes a fusion method based on SWT, DTCWT, and TEM that addresses the inherent defects of both the Parameter-Adaptive Dual-Channel Pulse-Coupled Neural Network (PA-DCPCNN) and Multiscale Transform-Convolutional Sparse Representation (MST-CSR) approaches.
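The two fusion rules named in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it assumes the standard Laws 1-D texture kernels (L5/E5) for the TEM rule, since the abstract does not specify which texture energy masks are used, and it operates on plain arrays rather than actual SWT/DTCWT coefficients.

```python
import numpy as np

# Standard Laws 1-D texture kernels (assumed; the paper's exact TEM variant
# is not given in the abstract)
L5 = np.array([1., 4., 6., 4., 1.])    # level (local average)
E5 = np.array([-1., -2., 0., 2., 1.])  # edge

def _filter2_same(img, mask):
    """Naive 'same'-size 2-D filtering (correlation) with reflect padding."""
    ph, pw = mask.shape[0] // 2, mask.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + mask.shape[0],
                                      j:j + mask.shape[1]] * mask)
    return out

def texture_energy(img, k1=E5, k2=L5, win=5):
    """Laws texture energy: filter with outer(k1, k2), then average the
    absolute response over a win x win neighbourhood."""
    resp = np.abs(_filter2_same(img, np.outer(k1, k2)))
    box = np.ones((win, win)) / (win * win)
    return _filter2_same(resp, box)

def fuse_low(a, b):
    """TEM rule: keep, per pixel, the coefficient from the source with the
    higher local texture energy."""
    return np.where(texture_energy(a) >= texture_energy(b), a, b)

def fuse_high(a, b):
    """Absolute-maximum rule for detail (high-frequency) sub-bands."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

In the full pipeline these rules would be applied to the SWT and DTCWT coefficients of the two source images (e.g. sub-bands from `pywt.swt2` plus a DTCWT package), followed by the corresponding inverse transforms to produce the fused image.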


