Article

Improved Procedure for Multi-Focus Image Quality Enhancement Using Image Fusion with Rules of Texture Energy Measures in the Hybrid Wavelet Domain

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 4, Pages -

Publisher

MDPI
DOI: 10.3390/app13042138

Keywords

multi-focus image fusion (MIF); SWT; DTCWT; TEM; quality evaluation metrics; image quality


Abstract

Feature extraction is the process of gathering the detailed information from a source image that is needed for further analysis. The quality of a fused image depends on several properties of the transform, particularly its directional selectivity and shift invariance; traditional wavelet-based transforms produce ringing distortions and artifacts because they offer poor directionality and are not shift invariant. A hybrid wavelet fusion algorithm that combines the Dual-Tree Complex Wavelet Transform (DTCWT) with the Stationary Wavelet Transform (SWT) overcomes these deficiencies and preserves both directional selectivity and shift invariance. The SWT decomposes each source image into approximate and detailed sub-bands, and the approximate sub-bands are then decomposed further with the DTCWT. Within this decomposition, the low-frequency components are fused with a rule based on Texture Energy Measures (TEM), while the high-frequency components are fused with the absolute-maximum rule; the SWT detail sub-bands are likewise fused with the absolute-maximum rule. The texture energy rules classify the image content effectively and improve the accuracy of the fused output. Finally, the inverse SWT is applied to generate the fused image. Experimental results show that the proposed approach outperforms approaches reported earlier. The proposed fusion method, based on SWT, DTCWT, and TEM, addresses the inherent defects of both the Parameter-Adaptive Dual-Channel Pulse-Coupled Neural Network (PA-DCPCNN) and the Multiscale Transform-Convolutional Sparse Representation (MST-CSR) approaches.
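The abstract describes a multi-stage pipeline: an SWT split into approximate and detail sub-bands, a DTCWT applied to the approximate sub-bands, a TEM rule for the low-frequency components, the absolute-maximum rule elsewhere, and inverse transforms to reconstruct. The sketch below shows one way such a pipeline could be wired together in Python; it is not the authors' implementation. The library choices (PyWavelets for the SWT, the `dtcwt` package for the dual-tree transform, SciPy for filtering), the single L5E5 Laws mask used for the texture energy measure, and the parameters (`db2` wavelet, one SWT level, three DTCWT levels, a 7x7 energy window) are assumptions made for illustration; the paper's exact masks, rules, and settings may differ.

```python
# Minimal sketch of a hybrid SWT + DTCWT multi-focus fusion pipeline, assuming
# PyWavelets, the `dtcwt` package, and SciPy. Parameters and the Laws mask are
# illustrative, not the paper's reference configuration.
import numpy as np
import pywt
import dtcwt
from scipy.ndimage import convolve, uniform_filter


def texture_energy(band, window=7):
    """Laws-style texture energy: filter with an L5E5 mask, then average |response|."""
    L5 = np.array([1, 4, 6, 4, 1], dtype=float)
    E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
    mask = np.outer(L5, E5)                       # one representative 5x5 Laws mask
    response = convolve(band, mask, mode="reflect")
    return uniform_filter(np.abs(response), size=window)


def fuse_abs_max(a, b):
    """Absolute-maximum rule: keep the coefficient with the larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)


def fuse_texture_energy(a, b, window=7):
    """TEM rule: pick, pixel-wise, the band whose local texture energy is larger."""
    return np.where(texture_energy(a, window) >= texture_energy(b, window), a, b)


def hybrid_fuse(img_a, img_b, wavelet="db2", swt_level=1, dtcwt_levels=3):
    # 1) SWT splits each source into approximate (cA) and detail (cH, cV, cD) sub-bands.
    #    pywt.swt2 requires image dimensions divisible by 2**swt_level.
    coeffs_a = pywt.swt2(img_a, wavelet, level=swt_level)
    coeffs_b = pywt.swt2(img_b, wavelet, level=swt_level)

    fused = []
    t2d = dtcwt.Transform2d()
    for (cA_a, det_a), (cA_b, det_b) in zip(coeffs_a, coeffs_b):
        # 2) Approximate sub-bands are decomposed again with the DTCWT.
        pa = t2d.forward(cA_a, nlevels=dtcwt_levels)
        pb = t2d.forward(cA_b, nlevels=dtcwt_levels)

        # 3) Low-frequency DTCWT component: texture-energy (TEM) rule.
        low = fuse_texture_energy(pa.lowpass, pb.lowpass)
        # 4) High-frequency (complex) DTCWT components: absolute-maximum rule.
        high = tuple(fuse_abs_max(ha, hb)
                     for ha, hb in zip(pa.highpasses, pb.highpasses))
        cA_f = t2d.inverse(dtcwt.Pyramid(low, high))

        # 5) SWT detail sub-bands: absolute-maximum rule.
        det_f = tuple(fuse_abs_max(da, db) for da, db in zip(det_a, det_b))
        fused.append((cA_f[:cA_a.shape[0], :cA_a.shape[1]], det_f))

    # 6) Inverse SWT reconstructs the fused image.
    return pywt.iswt2(fused, wavelet)
```

Calling `hybrid_fuse(near_focus, far_focus)` on two co-registered grayscale arrays of equal, even dimensions returns the fused image under these assumptions; the exact fusion rules and parameters should be taken from the paper itself.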
