Article

K-Net: Integrate Left Ventricle Segmentation and Direct Quantification of Paired Echo Sequence

Journal

IEEE TRANSACTIONS ON MEDICAL IMAGING
Volume 39, Issue 5, Pages 1690-1702

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TMI.2019.2955436

Keywords

Image segmentation; Task analysis; Two dimensional displays; Valves; Biomedical imaging; Three-dimensional displays; Germanium; Left ventricle; echocardiography sequence; paired apical views; segmentation; direct quantification

Funding

  1. Postgraduate Research and Practice Innovation Program of Jiangsu Province [KYCX17_0104]
  2. China Scholarship Council [201706090248]
  3. State's Key Project of Research and Development Plan [2017YFA0104302, 2017YFC0109202, 2018YFA0704102]
  4. National Natural Science Foundation [61871117, 61828101, 31571001]
  5. Science and Technology Program of Guangdong [2018B030333001]

Abstract

The integration of left ventricle (LV) segmentation and direct quantification on the paired apical views (i.e., apical 4-chamber and 2-chamber views together) of an echo sequence clinically achieves a comprehensive cardiac assessment: multiview segmentation for anatomical morphology, and multidimensional quantification for contractile function. Direct quantification of the LV, i.e., automatically regressing multiple LV indices directly from the image via task-aware feature representation, avoids the accumulative error introduced by intermediate targets. This integration further provides a stereoscopic reflection of cardiac activity from the paired orthogonal cross-view sequences, overcoming the limited observation of a single imaging plane. We propose the K-shaped Unified Network (K-Net), the first end-to-end framework that simultaneously segments the LV in the apical 4-chamber and 2-chamber views and directly quantifies the LV in terms of major- and minor-axis dimensions (1D), area (2D), and volume (3D) across the sequence. K-Net works via four components:

  1. The K-Net architecture with the Attention Junction enables learning of the heterogeneous tasks of segmentation (pixel-wise classification) and direct quantification (image-wise regression): information from segmentation promotes a spatial attention map that guides quantification to focus on the LV-related region, while quantification feedback imposes a global constraint on segmentation.
  2. The Bi-ResLSTMs distributed layer-by-layer in K-Net hierarchically extract spatial-temporal information from the echo sequence, with bidirectional recurrence and short-cut connections to model spatial-temporal information among all frames.
  3. The Information Valve tailing the Bi-ResLSTMs selectively exchanges information among the views, stimulating complementary information and suppressing redundant information for an efficient cross-flow between views.
  4. The Evolution Loss comprehensively guides sequential learning, with a static constraint on frame values and a dynamic constraint on inter-frame value changes.

Experiments show that our K-Net achieves high performance, with a Dice coefficient of up to 91.44% and a mean absolute error for the major-axis dimension down to 2.74 mm, revealing its clinical potential.
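To make the task coupling and the sequence-level loss described above concrete, the following is a minimal PyTorch-style sketch of two of the ideas: a segmentation-driven attention map guiding image-wise regression (in the spirit of the Attention Junction), and a loss with a static term on frame values plus a dynamic term on inter-frame changes (in the spirit of the Evolution Loss). All module and function names, tensor shapes, and the choice of L1 loss are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; shapes and layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionJunctionSketch(nn.Module):
    """Couples segmentation features with an image-wise regression branch."""

    def __init__(self, channels: int, n_indices: int):
        super().__init__()
        self.seg_head = nn.Conv2d(channels, 1, kernel_size=1)   # pixel-wise mask logits
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)       # spatial attention from seg features
        self.reg_head = nn.Linear(channels, n_indices)          # image-wise LV indices

    def forward(self, seg_feat: torch.Tensor, quant_feat: torch.Tensor):
        # seg_feat, quant_feat: (B, C, H, W) feature maps for one frame (assumed shapes)
        mask_logits = self.seg_head(seg_feat)                    # (B, 1, H, W)
        attn = torch.sigmoid(self.attn(seg_feat))                # attention map in [0, 1]
        focused = quant_feat * attn                              # focus regression on LV-related region
        pooled = F.adaptive_avg_pool2d(focused, 1).flatten(1)    # (B, C)
        indices = self.reg_head(pooled)                          # (B, n_indices)
        return mask_logits, indices


def evolution_style_loss(pred_seq: torch.Tensor, target_seq: torch.Tensor) -> torch.Tensor:
    """Static term on per-frame values plus dynamic term on inter-frame changes.

    pred_seq / target_seq: (B, T, n_indices) sequences of LV indices.
    """
    static = F.l1_loss(pred_seq, target_seq)
    dynamic = F.l1_loss(pred_seq[:, 1:] - pred_seq[:, :-1],
                        target_seq[:, 1:] - target_seq[:, :-1])
    return static + dynamic
```

The sketch omits the K-shaped dual-view encoder-decoders, the Bi-ResLSTMs, and the Information Valve; it only shows how a segmentation-derived attention map can gate the quantification features and how a temporal-difference term can constrain inter-frame dynamics.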

