Article

Classification of Phonocardiogram Based on Multi-View Deep Network

Journal

NEURAL PROCESSING LETTERS
Volume 55, Issue 4, Pages 3655-3670

Publisher

SPRINGER
DOI: 10.1007/s11063-022-10771-3

Keywords

Phonocardiogram; Multi-view deep network; MobileNet-LSTM; Gramian Angular Fields; Res2Net

Summary

In this paper, a multi-view deep network for the classification of PCG signals is proposed. It can extract rich multi-view features from different modalities of PCG for accurate classification of cardiovascular diseases.

Abstract

A phonocardiogram (PCG) is a high-fidelity recording of the sounds of the heart obtained with an electronic stethoscope, and it is highly valuable in clinical medicine because it can help cardiologists diagnose cardiovascular diseases quickly and accurately. In this paper, we propose a multi-view deep network for the classification of PCG signals that extracts rich multi-view features from different modalities of the PCG. The model consists of two branches. In the first branch, we divide each PCG signal into three equal-length sub-signals, encode them from the audio modality into a two-dimensional image modality using Gramian Angular Fields, and then apply Res2Net to extract image-view features. In the second branch, we propose MobileNet-LSTM to extract the features of another view from the preprocessed PCG signals. Finally, the features from the two views are fused and fed into a classifier. Experiments show that the proposed method achieves 97.99% accuracy on the 2016 PhysioNet/CinC Challenge dataset, which is highly competitive with existing baseline models. In addition, an ablation experiment demonstrates the necessity and effectiveness of the proposed components.
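
As an illustration of the image-view encoding step, the sketch below shows how a single PCG sub-signal could be turned into a Gramian Angular Field image. This is a minimal sketch under our own assumptions, not the authors' code: the fixed 128×128 image size, the min-max rescaling, the choice of the summation variant (GASF), and the random placeholder waveform are illustrative details that the abstract does not specify.

```python
import numpy as np

def gramian_angular_field(signal, size=128):
    """Encode a 1-D signal as a Gramian Angular Summation Field image.

    The series is resampled to `size` points, rescaled to [-1, 1],
    mapped to polar angles phi = arccos(x), and the matrix
    cos(phi_i + phi_j) is returned as a (size, size) image.
    """
    # Resample to a fixed length so every sub-signal yields the same image size.
    x = np.interp(np.linspace(0, len(signal) - 1, size),
                  np.arange(len(signal)), signal)
    # Min-max rescale to [-1, 1] (required for arccos); guard against a flat signal.
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    x = np.clip(x, -1.0, 1.0)
    phi = np.arccos(x)
    # GASF: pairwise cosine of summed angles.
    return np.cos(phi[:, None] + phi[None, :])

# Split one recording into three equal-length sub-signals and encode each.
pcg = np.random.randn(6000)        # placeholder waveform; a real input would be a preprocessed PCG
subs = np.array_split(pcg, 3)
images = [gramian_angular_field(s) for s in subs]   # three (128, 128) image-view inputs
```

In the paper's pipeline, images of this kind feed the image branch (Res2Net), while the preprocessed waveform itself feeds the second branch (MobileNet-LSTM) before the two feature views are fused for classification.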
