Article

MATR: Multimodal Medical Image Fusion via Multiscale Adaptive Transformer

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 31, Pages 5134-5149

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TIP.2022.3193288

Keywords

Transformers; Image fusion; Single photon emission computed tomography; Magnetic resonance imaging; Transforms; Medical diagnostic imaging; Task analysis; biomedical image; adaptive convolution; deep learning

Funding

  1. National Natural Science Foundation of China [62072348, 62176081]
  2. Science and Technology Major Project of Hubei Province (Next-Generation Artificial Intelligence (AI) Technologies) [2019AEA170]
  3. National Key Research and Development Program of China [2019YFC1509604]

Abstract

Multimodal medical image fusion, the merging of complementary information from different imaging modalities, is crucial for comprehensive diagnosis and surgical navigation. Existing deep learning-based methods have improved fusion results but have yet to reach satisfactory performance. This study proposes MATR, an unsupervised fusion method built on a multiscale adaptive Transformer. MATR achieves accurate fusion by modulating convolutional kernels according to the global context and by strengthening global semantic extraction, with a network architecture designed to capture useful multimodal information at different scales. The method outperforms representative alternatives in both visual quality and quantitative evaluation, and shows good generalization capability.
Owing to the limitations of imaging sensors, it is challenging to obtain a medical image that simultaneously contains functional metabolic information and structural tissue details. Multimodal medical image fusion, an effective way to merge the complementary information in different modalities, has become a significant technique to facilitate clinical diagnosis and surgical navigation. With powerful feature representation ability, deep learning (DL)-based methods have improved such fusion results but still have not achieved satisfactory performance. Specifically, existing DL-based methods generally depend on convolutional operations, which can extract local patterns well but have limited capability in preserving global context information. To compensate for this defect and achieve accurate fusion, we propose a novel unsupervised method to fuse multimodal medical images via a multiscale adaptive Transformer, termed MATR. In the proposed method, instead of directly employing vanilla convolution, we introduce an adaptive convolution for adaptively modulating the convolutional kernel based on the global complementary context. To further model long-range dependencies, an adaptive Transformer is employed to enhance the global semantic extraction capability. Our network architecture is designed in a multiscale fashion so that useful multimodal information can be adequately acquired from the perspective of different scales. Moreover, an objective function composed of a structural loss and a region mutual information loss is devised to impose constraints for information preservation at both the structural level and the feature level. Extensive experiments on a mainstream database demonstrate that the proposed method outperforms other representative and state-of-the-art methods in terms of both visual quality and quantitative evaluation. We also extend the proposed method to address other biomedical image fusion issues, and the pleasing fusion results illustrate that MATR has good generalization capability. The code of the proposed method is available at https://github.com/tthinking/MATR.
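The abstract's two key technical ingredients, an adaptive convolution driven by global context and a composite structural/region-level loss, can be made concrete with short sketches. Below is a minimal, hypothetical PyTorch rendering of an adaptive convolution: a global-average-pooled context descriptor is mapped by a small MLP to per-output-channel scales that modulate a shared kernel before the convolution is applied. The class name `AdaptiveConv2d`, the squeeze-and-MLP design, and all sizes are assumptions made for illustration, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    """Hypothetical convolution whose kernel is modulated by global context.

    A squeeze (global average pooling) yields a context descriptor; a small
    MLP maps it to per-output-channel scales; the shared base kernel is
    rescaled per sample before the convolution is applied.
    """

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.ctx_mlp = nn.Sequential(
            nn.Linear(in_ch, max(in_ch // 2, 1)),
            nn.ReLU(inplace=True),
            nn.Linear(max(in_ch // 2, 1), out_ch),
            nn.Sigmoid(),  # per-channel modulation factors in (0, 1)
        )
        self.pad = k // 2

    def forward(self, x):
        b, c, h, w = x.shape
        ctx = x.mean(dim=(2, 3))                        # (B, C_in) descriptor
        scale = self.ctx_mlp(ctx)                       # (B, C_out) scales
        # Modulate the shared kernel per sample: (B, C_out, C_in, k, k).
        w_mod = self.weight.unsqueeze(0) * scale.view(b, -1, 1, 1, 1)
        # Fold the batch into channels and use a grouped convolution so
        # each sample is convolved with its own modulated kernel.
        out = F.conv2d(
            x.reshape(1, b * c, h, w),
            w_mod.reshape(-1, c, *self.weight.shape[2:]),
            padding=self.pad,
            groups=b,
        )
        return out.view(b, -1, h, w) + self.bias.view(1, -1, 1, 1)
```

Because each sample gets its own modulated kernel, the batch is folded into the channel dimension and processed with a grouped convolution, a standard trick for per-sample dynamic kernels.

The objective function combines a structural loss with a region mutual information loss. The sketch below pairs a windowed SSIM term (a common choice for structural losses, assumed here) with a patch-wise Pearson correlation standing in for the region-level statistic; the paper's actual region mutual information term is more involved, so this is an illustrative proxy only, and the weights `alpha` and `beta` are placeholders.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, win=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM for single-channel images scaled to [0, 1]."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, win, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def region_corr(x, y, patch=16):
    """Per-patch Pearson correlation: a crude region-level proxy for MI."""
    px = F.unfold(x, patch, stride=patch)   # (B, patch*patch, n_regions)
    py = F.unfold(y, patch, stride=patch)
    px = px - px.mean(dim=1, keepdim=True)
    py = py - py.mean(dim=1, keepdim=True)
    r = (px * py).sum(1) / (px.norm(dim=1) * py.norm(dim=1) + 1e-8)
    return r.mean()

def fusion_loss(fused, a, b, alpha=1.0, beta=0.5):
    """Pull the fused image toward both sources at structural and
    regional levels, in the spirit of the paper's composite objective."""
    l_struct = (1 - ssim(fused, a)) + (1 - ssim(fused, b))
    l_region = (1 - region_corr(fused, a)) + (1 - region_corr(fused, b))
    return alpha * l_struct + beta * l_region
```

In training, something like `fusion_loss(fused, mri, spect)` would be minimized end to end, so the fused output stays consistent with both source modalities rather than with a (nonexistent) ground-truth fused image, which is what makes the method unsupervised.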
