Article

An Improved Hybrid Network With a Transformer Module for Medical Image Fusion

Journal

IEEE Journal of Biomedical and Health Informatics

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/JBHI.2023.3264819

Keywords

Image fusion; transformer; self-adaptive weight fusion; self-reconstruction


Abstract
Medical image fusion technology is an essential component of computer-aided diagnosis, which aims to extract useful cross-modality cues from raw signals to generate high-quality fused images. Many advanced methods focus on designing fusion rules, but there is still room for improvement in cross-modal information extraction. To this end, we propose a novel encoder-decoder architecture with three technical novelties. First, we divide the medical images into two attributes, namely pixel intensity distribution attributes and texture attributes, and thus design two self-reconstruction tasks to mine as many specific features as possible. Second, we propose a hybrid network combining a CNN and a transformer module to model both long-range and short-range dependencies. Moreover, we construct a self-adaptive weight fusion rule that automatically measures salient features. Extensive experiments on a public medical image dataset and other multimodal datasets show that the proposed method achieves satisfactory performance.
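To illustrate the general idea behind a self-adaptive weight fusion rule (this is a hedged sketch, not the paper's actual formulation), one common approach measures per-pixel feature saliency with a channel-wise L1 norm and derives adaptive weights via a softmax over the two activity maps:

```python
import numpy as np

def self_adaptive_fusion(feat_a, feat_b):
    """Fuse two feature maps of shape (channels, H, W) with adaptive weights.

    Illustrative only: activity level is approximated by a channel-wise
    L1 norm, a common saliency proxy in fusion literature; the weights
    are a per-pixel softmax over the two activity maps.
    """
    # Activity (saliency) map: sum of absolute responses across channels.
    act_a = np.abs(feat_a).sum(axis=0)
    act_b = np.abs(feat_b).sum(axis=0)
    # Per-pixel softmax turns activity levels into fusion weights in (0, 1).
    e_a, e_b = np.exp(act_a), np.exp(act_b)
    w_a = e_a / (e_a + e_b)
    w_b = 1.0 - w_a
    # Broadcast the (H, W) weight maps over all channels.
    return w_a * feat_a + w_b * feat_b
```

With this rule, regions where one modality responds more strongly dominate the fused output automatically, with no hand-tuned threshold; fusing two identical feature maps simply returns the input unchanged.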

