Proceedings Paper

Medical Transformer: Gated Axial-Attention for Medical Image Segmentation

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-030-87193-2_4

Keywords

Transformers; Medical image segmentation; Self-attention

Funding

  1. NSF [1910141]
  2. Division of Information & Intelligent Systems
  3. Directorate for Computer & Information Science & Engineering [1910141] (Funding Source: National Science Foundation)

Abstract

Over the past decade, deep convolutional neural networks have been widely adopted for medical image segmentation and have been shown to achieve adequate performance. However, due to inherent inductive biases present in convolutional architectures, they lack an understanding of long-range dependencies in the image. Recently proposed transformer-based architectures that leverage the self-attention mechanism encode long-range dependencies and learn highly expressive representations. This motivates us to explore transformer-based solutions and study the feasibility of using transformer-based network architectures for medical image segmentation tasks. The majority of existing transformer-based network architectures proposed for vision applications require large-scale datasets to train properly. However, compared to datasets for vision applications, the number of data samples in medical imaging is relatively low, making it difficult to train transformers efficiently for medical imaging applications. To this end, we propose a gated axial-attention model that extends existing architectures by introducing an additional control mechanism in the self-attention module. Furthermore, to train the model effectively on medical images, we propose a Local-Global training strategy (LoGo), which further improves performance: we operate on the whole image and on patches to learn global and local features, respectively. The proposed Medical Transformer (MedT) is evaluated on three different medical image segmentation datasets and is shown to achieve better performance than convolutional and other related transformer-based architectures. Code: https://github.com/jeya-maria-jose/Medical-Transformer
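The gating idea in the abstract can be sketched as follows. This is a minimal, single-axis, single-head NumPy illustration, not the authors' implementation: MedT's axial attention uses relative positional encodings, multiple heads, and separate height- and width-axis layers, and the names `g_q`, `g_k`, `g_v1`, `g_v2` here are illustrative learnable scalar gates that scale each positional term.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_axial_attention_1d(x, Wq, Wk, Wv, r_q, r_k, r_v, gates):
    """Self-attention along one axis of a feature map, with learnable
    gates scaling the positional terms so the model can down-weight
    them when a small dataset does not support learning accurate
    positional biases.

    x: (L, D) features along the attended axis.
    Wq, Wk, Wv: (D, D) query/key/value projection matrices.
    r_q, r_k, r_v: (L, D) positional encodings (simplified to absolute
        positions here; the paper uses relative encodings).
    gates: scalars (g_q, g_k, g_v1, g_v2) controlling each term.
    """
    g_q, g_k, g_v1, g_v2 = gates
    q, k, v = x @ Wq, x @ Wk, x @ Wv                   # (L, D) each
    # Attention logits: content term plus gated positional terms.
    logits = q @ k.T + g_q * (q @ r_q.T) + g_k * (k @ r_k.T)
    attn = softmax(logits, axis=-1)                    # (L, L)
    # Output: gated mix of aggregated values and positional values.
    return g_v1 * (attn @ v) + g_v2 * (attn @ r_v)     # (L, D)
```

With `gates = (0, 0, 1, 0)` this reduces to plain dot-product self-attention; during training, the gates let the network learn how much positional information to trust, which is the control mechanism the abstract refers to.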
