Article

Deep-Learning Segmentation of Epicardial Adipose Tissue Using Four-Chamber Cardiac Magnetic Resonance Imaging

Journal

DIAGNOSTICS
Volume 12, Issue 1, Pages: -

Publisher

MDPI
DOI: 10.3390/diagnostics12010126

Keywords

epicardial adipose tissue quantification; automatic segmentation; cine four-chamber; fully convolutional networks; machine learning

Abstract

This study proposed an automated method for quantifying the area of epicardial adipose tissue (EAT) using deep-learning segmentation with multi-frame fully convolutional networks (FCN) in cardiac magnetic resonance imaging (MRI). The method demonstrated performance comparable to inter-observer bias and provided precise quantification of EAT area, enabling assessment of patients' risk of EAT overload.
In magnetic resonance imaging (MRI), epicardial adipose tissue (EAT) overload often remains overlooked because manual contouring in images is tedious. Automated four-chamber EAT area quantification was proposed, leveraging deep-learning segmentation with multi-frame fully convolutional networks (FCN). The investigation involved 100 subjects (healthy, obese, and diabetic patients) who underwent 3T cardiac cine MRI. An optimized U-Net and an FCN (denoted FCNB) were trained on three consecutive cine frames to segment the central frame using a Dice loss. Networks were trained with 4-fold cross-validation (n = 80) and evaluated on an independent dataset (n = 20). Segmentation performance was compared to inter- and intra-observer bias using the Dice similarity coefficient (DSC) and the relative surface error (RSE). Both systolic and diastolic four-chamber areas were correlated with total EAT volume (r = 0.77 and 0.74, respectively). The networks' performance was equivalent to inter-observer bias (EAT: DSC_Inter = 0.76, DSC_U-Net = 0.77, DSC_FCNB = 0.76). U-Net outperformed FCNB on all metrics (p < 0.0001). Ultimately, the proposed multi-frame U-Net provided automated EAT area quantification with 14.2% precision for the clinically relevant upper three quarters of the EAT area range, scaling patients' risk of EAT overload with 70% accuracy. Exploiting the multi-frame U-Net in standard cine MRI provided automated EAT quantification over a wide range of EAT quantities. The method is made available to the community through an FSLeyes plugin.
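The training setup described in the abstract (three consecutive cine frames as input, a predicted mask for the central frame, a Dice loss for training, and DSC/RSE for evaluation) can be sketched as below. This is a minimal illustrative sketch in PyTorch, not the authors' released implementation (which is distributed as an FSLeyes plugin); the function and variable names (dice_loss, dice_coefficient, relative_surface_error, the stand-in network) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss; pred holds probabilities in [0, 1], target is a binary mask."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def dice_coefficient(pred_mask: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """DSC between a binarized prediction and the reference mask."""
    inter = (pred_mask * target).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + target.sum() + eps)

def relative_surface_error(pred_mask: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """RSE: relative difference between predicted and reference areas (pixel counts)."""
    return (pred_mask.sum() - target.sum()).abs() / (target.sum() + eps)

# Stand-in for the optimized U-Net: any encoder-decoder taking 3 input channels
# (frames t-1, t, t+1) and emitting a 1-channel probability map for frame t.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

# Three consecutive cine frames stacked as channels; mask of the central frame.
frames = torch.rand(8, 3, 256, 256)                       # batch of 8 frame triplets
central_mask = (torch.rand(8, 1, 256, 256) > 0.5).float()  # dummy EAT masks

loss = dice_loss(net(frames), central_mask)
loss.backward()

# Evaluation: binarize the probability map, then compute DSC and RSE.
with torch.no_grad():
    pred_bin = (net(frames) > 0.5).float()
    dsc = dice_coefficient(pred_bin, central_mask)
    rse = relative_surface_error(pred_bin, central_mask)
```

Binarizing the probability map before computing DSC and RSE mirrors the area-based evaluation the abstract describes; the 0.5 threshold and the toy network above are placeholder choices, not values reported in the paper.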
