3.8 Proceedings Paper

A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation

Publisher

SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-16443-9_16

Keywords

Pure volumetric transformer; Tumor segmentation

Abstract

We propose a Transformer architecture for volumetric segmentation, a challenging task that requires a careful balance between encoding local and global spatial cues and preserving information along all axes of the volume. The encoder of the proposed design uses a self-attention mechanism to simultaneously encode local and global cues, while the decoder employs a parallel self- and cross-attention formulation to capture fine details for boundary refinement. Empirically, we show that the proposed design choices result in a computationally efficient model, with competitive and promising results on the Medical Segmentation Decathlon (MSD) brain tumor segmentation (BraTS) task. We further show that the representations learned by our model are robust against data corruptions. Our code implementation is publicly available.
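
To illustrate the parallel self- and cross-attention formulation mentioned in the abstract, below is a minimal, hypothetical PyTorch sketch of one decoder block. The class name, token dimensions, and the fusion of the two attention branches by residual summation are assumptions for illustration only, not the authors' released implementation.

```python
# Minimal sketch (assumptions, not the authors' code): a decoder block that
# applies self-attention over decoder tokens and cross-attention to encoder
# tokens in parallel, then fuses the two streams by residual summation.
import torch
import torch.nn as nn


class ParallelSelfCrossAttentionBlock(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        # Two attention modules over the same normalized decoder tokens:
        # one attends within the decoder sequence, the other to encoder tokens.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mlp = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, dec_tokens: torch.Tensor, enc_tokens: torch.Tensor) -> torch.Tensor:
        # dec_tokens: (B, N_dec, dim) flattened 3D decoder features
        # enc_tokens: (B, N_enc, dim) flattened 3D encoder (skip) features
        q = self.norm_q(dec_tokens)
        kv = self.norm_kv(enc_tokens)
        sa, _ = self.self_attn(q, q, q)        # self-attention branch
        ca, _ = self.cross_attn(q, kv, kv)     # cross-attention branch (decoder queries)
        x = dec_tokens + sa + ca               # parallel fusion via residual summation (assumption)
        x = x + self.mlp(self.norm_mlp(x))
        return x


if __name__ == "__main__":
    block = ParallelSelfCrossAttentionBlock(dim=256, num_heads=8)
    dec = torch.randn(1, 8 * 8 * 8, 256)  # e.g. an 8x8x8 decoder volume flattened to tokens
    enc = torch.randn(1, 8 * 8 * 8, 256)
    out = block(dec, enc)
    print(out.shape)  # torch.Size([1, 512, 256])
```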
