Journal
JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING
Volume 12, Issue 6, Pages 6001-6018
Publisher
SPRINGER HEIDELBERG
DOI: 10.1007/s12652-020-02154-0
Keywords
Medical image fusion; Wavelet transform; Curvelet transform; PCA; Pixel-level fusion; Feature-level fusion; Medical imaging modalities
This paper presents a hybrid algorithm for multimodal medical image fusion, incorporating both pixel and feature-level fusion methods. Experimental results demonstrate that the proposed method improves the quality of the final fused image in various aspects, such as Mutual Information, Correlation Coefficient, entropy, Structural Similarity Index, Peak Signal-to-Noise Ratio, and edge-based similarity measure.
Multimodal medical image fusion aims to reduce insignificant information and improve clinical diagnostic accuracy. The purpose of image fusion is to retain the salient features and detail information of multiple source images so as to yield a more informative fused image. This paper presents a hybrid algorithm that operates at both the pixel and feature levels of multimodal medical image fusion. For pixel-level fusion, the source images are decomposed into low- and high-frequency components using the Discrete Wavelet Transform (DWT), and the low-frequency coefficients are fused using the maximum fusion rule. The curvelet transform is then applied to the high-frequency coefficients, and the resulting fine-scale subbands are fused using a Principal Component Analysis (PCA) fusion rule. Feature-level fusion, in turn, is accomplished by extracting various features from the coarse and detail subbands and using them to drive the fusion process; these features include the mean, variance, entropy, visibility, and standard deviation. Thereafter, the inverse curvelet transform is applied to the fused high-frequency coefficients, and the final fused image is obtained by applying the inverse DWT to the fused low- and high-frequency components. The proposed method is implemented and evaluated on different pairs of medical imaging modalities. The results demonstrate that it improves the quality of the final fused image in terms of Mutual Information (MI), Correlation Coefficient (CC), entropy, Structural Similarity Index (SSIM), Edge Strength Similarity for Image quality (ESSIM), Peak Signal-to-Noise Ratio (PSNR), and the edge-based similarity measure (Q(AB/F)).
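The pixel-level branch described in the abstract (DWT decomposition, maximum rule on the low-frequency band, PCA-weighted fusion of the high-frequency bands, inverse transform) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a hand-rolled one-level Haar wavelet in place of the paper's unspecified wavelet basis, and it omits both the curvelet stage and the feature-level branch. The function names `fuse_pixel_level`, `pca_fuse`, `haar_dwt2`, and `haar_idwt2` are hypothetical.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH)).
    Stand-in for the paper's DWT; assumes even image dimensions."""
    s = np.sqrt(2.0)
    # Filter along rows ...
    lo = (x[:, 0::2] + x[:, 1::2]) / s
    hi = (x[:, 0::2] - x[:, 1::2]) / s
    # ... then along columns, giving one coarse and three detail subbands.
    LL = (lo[0::2] + lo[1::2]) / s
    LH = (lo[0::2] - lo[1::2]) / s
    HL = (hi[0::2] + hi[1::2]) / s
    HH = (hi[0::2] - hi[1::2]) / s
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Inverse of haar_dwt2 (exact, since Haar is orthogonal)."""
    LH, HL, HH = bands
    s = np.sqrt(2.0)
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    lo[0::2], lo[1::2] = (LL + LH) / s, (LL - LH) / s
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = (HL + HH) / s, (HL - HH) / s
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / s, (lo - hi) / s
    return x

def pca_fuse(a, b):
    """PCA fusion rule: weight each subband by the dominant
    eigenvector of the 2x2 covariance of the two coefficient sets."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    w = v / v.sum()
    return w[0] * a + w[1] * b

def fuse_pixel_level(img1, img2):
    """Pixel-level fusion: max rule on the low band,
    PCA rule on the detail bands, then inverse DWT."""
    LL1, high1 = haar_dwt2(img1)
    LL2, high2 = haar_dwt2(img2)
    LL = np.maximum(LL1, LL2)                      # maximum fusion rule
    high = tuple(pca_fuse(c1, c2)                  # PCA fusion rule
                 for c1, c2 in zip(high1, high2))
    return haar_idwt2(LL, high)
```

Because the Haar transform is orthogonal, fusing an image with itself reconstructs it exactly, which is a convenient sanity check for any implementation of this pipeline.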