Article

Fetal cardiac cycle detection in multi-resource echocardiograms using hybrid classification framework

Publisher

ELSEVIER
DOI: 10.1016/j.future.2020.09.014

Keywords

Deep convolution neural networks; Hybrid classification framework; Knowledge transfer; Temporal dependence fusion; Echocardiography; Cardiac cycle phase detection

Funding

  1. National Natural Science Foundation of China [61572177]
  2. National Outstanding Youth Science Program of National Natural Science Foundation of China [61625202]
  3. Key Program of National Natural Science Foundation of China [61432005]
  4. International (Regional) Cooperation and Exchange Program of National Natural Science Foundation of China [61661146006]
  5. Postgraduate Scientific Research Innovation Project of Hunan, China [CX20190309]

Abstract

The article introduces a deep-learning hybrid framework for automatic detection of cardiac cycle phases in fetal echocardiograms, using class scores to localize end-systolic and end-diastolic frames. The framework integrates a target-detection-based ROI module, a temporal dependency module, and a classification module built on a domain-transferred deep convolutional neural network. Various CNN architectures and channel fusion strategies are explored, yielding high classification accuracy and minimal detection errors.
Accurate acquisition of end-systolic (ES) and end-diastolic (ED) frames from ultrasound videos of fetal echocardiograms is a key procedure in automated biometric measurement and diagnosis during obstetric examination. Compared with adults, the fetal detection task poses additional challenges due to the variation of cardiac anatomy with fetal position and sound-beam angle, variations in cardiac views across gestational weeks, and faster heart rates. These challenges give rise to multi-resource fetal echocardiogram data, which means that adult detection methods may not be applicable. We formulate this problem as a classification problem and present a deep-learning hybrid framework that uses class scores to localize the ES and ED frames. To the best of our knowledge, this is the first framework to apply a hybrid classification framework to this detection task. The proposed architecture integrates a region-of-interest (ROI) extraction component based on target detection, a module that retains temporal dependency, and a classification module based on a domain-transferred deep convolutional neural network (CNN). We employ YOLOv3 as the ROI detection module (RD) to extract attention regions, improving classification performance and determining the four-chamber view. Meanwhile, temporal dependence is preserved by merging neighbor-frame differences into the image channels. Different CNN architectures are explored herein, i.e., Xception, ResNet, InceptionV3, MobileNet, and NASNetMobile, along with different channel fusion strategies, i.e., SF, DF, and MDF. The optimal deep-learning model consists of MobileNet, MDF, and RD, trained with an added transition-class strategy. On average, a classification accuracy of 94.84% was achieved, and the average detection errors for the ES and ED frames are 1.25 and 0.80 frames, respectively. (C) 2020 Published by Elsevier B.V.
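The channel-fusion idea the abstract describes (merging neighbor-frame differences into the image channels so a 2-D CNN sees temporal context without a recurrent module) can be sketched as follows. The paper does not give the exact formulation of its SF/DF/MDF strategies here, so the `fuse_frame_differences` function below is a hypothetical minimal illustration of the general technique, not the authors' implementation:

```python
import numpy as np

def fuse_frame_differences(frames: np.ndarray) -> np.ndarray:
    """Stack each grayscale frame with its backward and forward
    neighbor-frame differences as three image channels.

    frames: array of shape (T, H, W), pixel values in [0, 1].
    Returns an array of shape (T, H, W, 3) usable as CNN input.
    """
    T = frames.shape[0]
    fused = np.zeros(frames.shape + (3,), dtype=frames.dtype)
    for t in range(T):
        prev_t = max(t - 1, 0)      # clamp at the first frame
        next_t = min(t + 1, T - 1)  # clamp at the last frame
        fused[t, ..., 0] = frames[t]                   # current frame
        fused[t, ..., 1] = frames[t] - frames[prev_t]  # backward difference
        fused[t, ..., 2] = frames[next_t] - frames[t]  # forward difference
    return fused

# Example: 10 synthetic 64x64 grayscale frames
video = np.random.rand(10, 64, 64).astype(np.float32)
fused = fuse_frame_differences(video)
print(fused.shape)  # (10, 64, 64, 3)
```

Because the fused tensor keeps the standard three-channel image layout, it can be fed directly to ImageNet-pretrained backbones such as MobileNet or Xception, which is consistent with the knowledge-transfer setup described in the abstract.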

