Article

D-UNet: A Dimension-Fusion U Shape Network for Chronic Stroke Lesion Segmentation

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TCBB.2019.2939522

Keywords

MRI; stroke segmentation; deep learning; dimensional fusion

Funding

  1. National Natural Science Foundation of China [61601450, 61871371, 81830056]
  2. Science and Technology Planning Project of Guangdong Province [2017B020227012, 2018B010109009]
  3. Basic Research Program of Shenzhen [JCYJ20180507182400762]
  4. Youth Innovation Promotion Association Program of Chinese Academy of Sciences [2019351]

Abstract

The paper proposes a new medical image segmentation method called D-UNet, which combines 2D and 3D convolutions to achieve better segmentation performance than 2D networks while requiring shorter computation time than 3D networks. Additionally, an Enhance Mixing Loss function is introduced to address the imbalance between positive and negative samples during network training.
Assessing the location and extent of lesions caused by chronic stroke is critical for medical diagnosis, surgical planning, and prognosis. In recent years, with the rapid development of 2D and 3D convolutional neural networks (CNNs), the encoder-decoder structure has shown great potential in the field of medical image segmentation. However, 2D CNNs ignore the 3D information of medical images, while 3D CNNs suffer from high computational resource demands. This paper proposes a new architecture called dimension-fusion-UNet (D-UNet), which innovatively combines 2D and 3D convolutions in the encoding stage. The proposed architecture achieves better segmentation performance than 2D networks while requiring significantly less computation time than 3D networks. Furthermore, to alleviate the data imbalance between positive and negative samples during network training, we propose a new loss function called Enhance Mixing Loss (EML), which adds a weighted focal coefficient and combines two traditional loss functions. The proposed method has been tested on the ATLAS dataset and compared to three state-of-the-art methods. The results demonstrate that the proposed method achieves the best segmentation quality, with DSC = 0.5349 ± 0.2763 and precision = 0.6331 ± 0.295.
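The central architectural idea described above is an encoder that runs 2D and 3D convolution branches in parallel and fuses the volumetric features back into the 2D stream. The following is a minimal PyTorch sketch of one plausible fusion block, assuming adjacent slices are treated as channels in the 2D branch and as a depth axis in the 3D branch; the module and parameter names (`DimensionFusionBlock`, `reduce_depth`, the slice count) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DimensionFusionBlock(nn.Module):
    """Illustrative sketch: fuse a 3D feature branch into a 2D encoder stream.

    Assumes the input is a thin stack of adjacent slices, treated either as
    channels (2D branch) or as an explicit depth axis (3D branch). Shapes and
    names are hypothetical, not taken from the paper's code.
    """

    def __init__(self, in_slices: int, channels: int):
        super().__init__()
        # 2D branch: slices are folded into the channel dimension.
        self.conv2d = nn.Sequential(
            nn.Conv2d(in_slices, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # 3D branch: slices are kept as a depth axis.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # Collapse the depth axis so 3D features can be added to the 2D map.
        self.reduce_depth = nn.Conv3d(channels, channels,
                                      kernel_size=(in_slices, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, slices, H, W)
        f2d = self.conv2d(x)                     # (B, C, H, W)
        f3d = self.conv3d(x.unsqueeze(1))        # (B, C, slices, H, W)
        f3d = self.reduce_depth(f3d).squeeze(2)  # (B, C, H, W)
        return f2d + f3d                         # fused 2D feature map


if __name__ == "__main__":
    block = DimensionFusionBlock(in_slices=4, channels=32)
    out = block(torch.randn(2, 4, 192, 192))
    print(out.shape)  # torch.Size([2, 32, 192, 192])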
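The abstract describes the Enhance Mixing Loss only as adding a weighted focal coefficient and combining two traditional loss functions. A common reading of that description is a mixture of a focal (weighted cross-entropy) term and a Dice term; the sketch below follows that assumption, and the weights `gamma` and `alpha` are illustrative defaults rather than the paper's values.

```python
import torch

def enhance_mixing_loss(pred: torch.Tensor,
                        target: torch.Tensor,
                        gamma: float = 2.0,
                        alpha: float = 0.5,
                        smooth: float = 1e-6) -> torch.Tensor:
    """Hedged sketch of an EML-style loss: focal term mixed with a Dice term.

    `pred` holds sigmoid probabilities, `target` holds binary lesion masks.
    The exact formulation and coefficients in the paper may differ.
    """
    pred = pred.clamp(smooth, 1.0 - smooth)

    # Focal term: down-weights easy voxels via the (1 - p_t)^gamma coefficient.
    p_t = torch.where(target > 0.5, pred, 1.0 - pred)
    focal = -((1.0 - p_t) ** gamma * torch.log(p_t)).mean()

    # Dice term: 1 - DSC, which targets foreground/background imbalance directly.
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
    dice_loss = 1.0 - dice

    # Mix the two terms; alpha balances their contributions.
    return alpha * focal + (1.0 - alpha) * dice_loss
```

The Dice term addresses the global foreground/background imbalance typical of small stroke lesions, while the focal coefficient further emphasizes hard-to-classify voxels; the paper's exact mixing strategy should be taken from the original text.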
