Article

Deep-Learning Segmentation of Epicardial Adipose Tissue Using Four-Chamber Cardiac Magnetic Resonance Imaging

Journal

Diagnostics
Volume 12, Issue 1, Article 126

Publisher

MDPI
DOI: 10.3390/diagnostics12010126

Keywords

epicardial adipose tissue quantification; automatic segmentation; cine four-chamber; fully convolutional networks; machine learning

Abstract

This study proposed an automated method for quantifying the area of epicardial adipose tissue (EAT) in cardiac magnetic resonance imaging (MRI), using deep-learning segmentation with multi-frame fully convolutional networks (FCN). The method performed on par with the inter-observer bias and provided precise quantification of the EAT area, which can be used to assess a patient's risk of EAT overload.
In magnetic resonance imaging (MRI), epicardial adipose tissue (EAT) overload often goes unassessed because manual contouring of the images is tedious. Automated four-chamber EAT area quantification was proposed, leveraging deep-learning segmentation with multi-frame fully convolutional networks (FCN). The investigation involved 100 subjects, comprising healthy, obese, and diabetic patients, who underwent 3T cardiac cine MRI. An optimized U-Net and an FCN (denoted FCNB) were trained on three consecutive cine frames to segment the central frame using a dice loss. Networks were trained with 4-fold cross-validation (n = 80) and evaluated on an independent dataset (n = 20). Segmentation performance was compared to inter- and intra-observer bias using the dice similarity coefficient (DSC) and the relative surface error (RSE). Both systolic and diastolic four-chamber areas correlated with total EAT volume (r = 0.77 and 0.74, respectively). The networks performed on par with the inter-observer bias (EAT: inter-observer DSC = 0.76, U-Net DSC = 0.77, FCNB DSC = 0.76). U-Net outperformed FCNB on all metrics (p < 0.0001). Ultimately, the proposed multi-frame U-Net provided automated EAT area quantification with 14.2% precision over the clinically relevant upper three quarters of the EAT area range, grading patients' risk of EAT overload with 70% accuracy. Exploiting the multi-frame U-Net on standard cine images thus provided automated EAT quantification across a wide range of EAT quantities. The method is made available to the community as an FSLeyes plugin.
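The abstract names the multi-frame input scheme (three consecutive cine frames segmenting the central one), a dice training loss, and two evaluation metrics (DSC and RSE), but gives no formulas. The sketch below illustrates these pieces under stated assumptions: channel-wise frame stacking with edge clamping, the usual soft dice loss, and an RSE defined as the absolute area difference normalized by the reference area are plausible readings, not details confirmed by the paper.

```python
import numpy as np

def make_multiframe_input(cine, t):
    """Stack three consecutive cine frames as channels to segment the
    central frame t (per the abstract). Channel-wise stacking and the
    edge-frame clamping are assumptions, not confirmed by the paper."""
    t_prev, t_next = max(t - 1, 0), min(t + 1, cine.shape[0] - 1)
    return np.stack([cine[t_prev], cine[t], cine[t_next]], axis=0)  # (3, H, W)

def soft_dice_loss(probs, gt, eps=1e-6):
    """Differentiable soft dice loss on predicted foreground
    probabilities, the standard form of the 'dice loss' named above."""
    inter = (probs * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + gt.sum() + eps)

def dice_coefficient(pred, gt):
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def relative_surface_error(pred, gt, pixel_area_mm2=1.0):
    """Relative surface error, assumed here to be |A_pred - A_ref| / A_ref,
    with areas in mm^2 given the scan's pixel spacing (nonempty reference
    mask assumed)."""
    area_pred = pred.astype(bool).sum() * pixel_area_mm2
    area_ref = gt.astype(bool).sum() * pixel_area_mm2
    return abs(area_pred - area_ref) / area_ref
```

In use, each (3, H, W) input would be paired with the central frame's EAT mask during training, and the DSC/RSE helpers applied to the binarized network output at evaluation time.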
