Journal
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT V
Volume 13435, Pages 162-172
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-031-16443-9_16
Keywords
Pure volumetric transformer; Tumor segmentation
Abstract
We propose a Transformer architecture for volumetric segmentation, a challenging task that requires keeping a complex balance between encoding local and global spatial cues and preserving information along all axes of the volume. The encoder of the proposed design benefits from a self-attention mechanism to simultaneously encode local and global cues, while the decoder employs a parallel self- and cross-attention formulation to capture fine details for boundary refinement. Empirically, we show that the proposed design choices result in a computationally efficient model, with competitive and promising results on the Medical Segmentation Decathlon (MSD) brain tumor segmentation (BraTS) task. We further show that the representations learned by our model are robust against data corruptions. Our code implementation is publicly available.
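The abstract describes a decoder that applies self-attention over the decoder tokens and cross-attention to the encoder features in parallel, then uses the result for boundary refinement. The sketch below shows one plausible form of such a block in PyTorch; the class name ParallelSelfCrossAttentionBlock, the fusion by summation, and all dimensions are assumptions made for illustration and do not reproduce the authors' released implementation.

import torch
import torch.nn as nn

class ParallelSelfCrossAttentionBlock(nn.Module):
    """Illustrative decoder block: self-attention over decoder tokens and
    cross-attention to encoder tokens are computed in parallel and fused.
    Hypothetical layout, not the paper's reference code."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, dec_tokens: torch.Tensor, enc_tokens: torch.Tensor) -> torch.Tensor:
        # dec_tokens: (B, N_dec, C) decoder-side tokens (e.g. flattened voxel patches)
        # enc_tokens: (B, N_enc, C) encoder features used as keys/values
        q = self.norm_q(dec_tokens)
        kv = self.norm_kv(enc_tokens)
        sa, _ = self.self_attn(q, q, q)        # context within the decoder tokens
        ca, _ = self.cross_attn(q, kv, kv)     # fine detail drawn from the encoder
        x = dec_tokens + sa + ca               # parallel branches fused by summation (one possible choice)
        return x + self.mlp(x)

# Usage sketch: batch of 2, 512 decoder tokens, 512 encoder tokens, 96 channels
if __name__ == "__main__":
    block = ParallelSelfCrossAttentionBlock(dim=96)
    dec = torch.randn(2, 512, 96)
    enc = torch.randn(2, 512, 96)
    print(block(dec, enc).shape)  # torch.Size([2, 512, 96])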
Authors