Article

Self-Supervised Fusion for Multi-Modal Medical Images via Contrastive Auto-Encoding and Convolutional Information Exchange

Journal

IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE
Volume 18, Issue 1, Pages 68-80

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MCI.2022.3223487

Keywords

Knowledge engineering; Convolution; Redundancy; Neural networks; Estimation; Feature extraction; Transformers


Abstract

This paper proposes a self-supervised framework based on contrastive auto-encoding and convolutional information exchange for multi-modal medical image fusion. Multi-modal medical images share common features while also carrying modality-unique ones, so information redundancy easily arises when source-image features are extracted in pairs. Inspired by contrastive learning, this article constructs positive and negative sample pairs and proposes a novel contrastive loss in an auto-encoder: the paired source images serve as positive and negative samples for reconstructing each source image, which avoids the information redundancy problem. To preserve both global and local features based on prior knowledge, the auto-encoder combines a transformer and a convolutional neural network in parallel. A contribution estimation model is then adopted to fuse the multi-modal medical images. In the contribution estimation stage, an information exchange block exchanges the feature maps of the source images across multi-kernel convolutions, and the resulting multi-convolutional features are used to estimate the best fusion contribution of the paired source images. Experiments demonstrate that the proposed method outperforms other state-of-the-art fusion approaches.
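The two components named in the abstract can be made concrete with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' implementation: a margin-based contrastive reconstruction loss stands in for the paper's (unspecified) contrastive loss, and the kernel sizes, cross-feeding rule, and all module and function names are hypothetical.

```python
# Hypothetical sketch of (1) a contrastive reconstruction loss over
# positive/negative source-image pairs and (2) an information exchange
# block that swaps feature maps between modalities across multi-kernel
# convolutions before estimating per-pixel fusion contributions.
# All names and hyper-parameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def contrastive_recon_loss(recon, positive, negative, margin=1.0):
    """Pull the reconstruction toward its own source image (positive)
    and push it away from the paired other-modality image (negative)."""
    d_pos = F.mse_loss(recon, positive)
    d_neg = F.mse_loss(recon, negative)
    return F.relu(d_pos - d_neg + margin)


class InformationExchangeBlock(nn.Module):
    """Exchange feature maps of the two source images inside
    multi-kernel convolutions (kernel sizes are assumed)."""

    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches_a = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.branches_b = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, feat_a, feat_b):
        outs_a, outs_b = [], []
        for conv_a, conv_b in zip(self.branches_a, self.branches_b):
            # Cross-feed: each branch sees both modalities' features,
            # so information is exchanged at every kernel scale.
            outs_a.append(conv_a(feat_a) + conv_a(feat_b))
            outs_b.append(conv_b(feat_b) + conv_b(feat_a))
        return sum(outs_a), sum(outs_b)


class ContributionEstimator(nn.Module):
    """Estimate a per-pixel contribution map w in [0, 1] so the fused
    feature is w * feat_a + (1 - w) * feat_b."""

    def __init__(self, channels):
        super().__init__()
        self.exchange = InformationExchangeBlock(channels)
        self.head = nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1)

    def forward(self, feat_a, feat_b):
        ex_a, ex_b = self.exchange(feat_a, feat_b)
        w = torch.sigmoid(self.head(torch.cat([ex_a, ex_b], dim=1)))
        return w * feat_a + (1 - w) * feat_b
```

On this reading, the auto-encoder would minimize the contrastive reconstruction loss per modality during self-supervised training, and at fusion time the estimator would blend the encoded features before decoding; again, this is only one interpretation of the abstract.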
