Article

TransMed: Transformers Advance Multi-Modal Medical Image Classification

Journal

DIAGNOSTICS
Volume 11, Issue 8, Article 1384

Publisher

MDPI
DOI: 10.3390/diagnostics11081384

Keywords

transformer; medical image classification; deep learning; multiparametric MRI; multi-modal

Funding

  1. National Natural Science Foundation of China [61902058, 61872075]
  2. Fundamental Research Funds for the Central Universities [N2019002, JC2019025]
  3. Natural Science Foundation of Liaoning Province [2019-ZD-0751]
  4. Medical Imaging Intelligence Research [N2124006-3]


The article discusses the advantages and limitations of convolutional neural networks (CNNs) and transformers in medical image analysis, and proposes TransMed, a method that combines a CNN with a transformer for multi-modal medical image classification. The approach achieved significant performance improvements on two datasets and outperformed existing CNN-based models.
Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks, such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot model long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance. However, medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
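The abstract describes the core design pattern: a CNN backbone extracts low-level features from each modality or slice, and a transformer encoder then models long-range dependencies across the resulting tokens before classification. The sketch below is a minimal illustration of such a CNN + transformer hybrid, not the authors' released implementation; the class name, the ResNet-18 backbone, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch of a CNN + transformer hybrid for multi-modal image
# classification, in the spirit of the description above. Backbone choice,
# module names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class HybridCnnTransformerClassifier(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int = 512,
                 depth: int = 4, num_heads: int = 8):
        super().__init__()
        # CNN backbone extracts low-level features from each modality/slice.
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop fc layer
        self.proj = nn.Linear(512, embed_dim)

        # Transformer encoder establishes long-range dependencies between
        # the tokens produced from the different modalities/slices.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=4 * embed_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)

        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, modalities, 3, H, W) - each modality/slice as an image
        b, m, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * m, c, h, w)).flatten(1)  # (B*M, 512)
        tokens = self.proj(feats).reshape(b, m, -1)             # (B, M, D)
        cls = self.cls_token.expand(b, -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))     # (B, M+1, D)
        return self.head(out[:, 0])                             # classify from CLS token


if __name__ == "__main__":
    model = HybridCnnTransformerClassifier(num_classes=3)
    dummy = torch.randn(2, 4, 3, 224, 224)  # 2 samples, 4 modalities/slices
    print(model(dummy).shape)               # torch.Size([2, 3])
```

In this kind of hybrid, the convolutional stage keeps the parameter count and data requirements manageable on small medical datasets, while the transformer stage fuses information across modalities.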

Authors

Yin Dai, Yifan Gao, Fayu Liu
