
TB-MFCC multifuse feature for emergency vehicle sound classification using multistacked CNN - Attention BiLSTM

Publisher: ELSEVIER SCI LTD
DOI: 10.1016/j.bspc.2023.105688

Keywords

Augmentation; CNN; Feature extraction; FPR; MFCC; RMS; ZCR


This paper develops a model and algorithms for data augmentation, feature extraction, and classification to accurately identify and classify emergency vehicles by sound. By combining signal augmentation and a new feature extraction method with convolutional neural network and long short-term memory models, the accuracy of vehicle sound identification and classification is improved.
Emergency vehicles such as ambulances, fire engines, and police cruisers play a vital role in society by responding quickly to emergencies, helping to prevent loss of life and maintain order. Identifying and classifying vehicle sounds is therefore important in cities, both to recognize emergency vehicles easily and to clear traffic effectively, and Convolutional Neural Networks play an important role in accurately detecting vehicles during an emergency. The main aim of this paper is to develop a suitable model and algorithms for data augmentation, feature extraction, and classification. The proposed TB-MFCC multifuse feature comprises data augmentation and feature extraction. First, in the proposed signal augmentation, each audio signal is separately subjected to noise injection, stretching, shifting, and pitch shifting, which increases the number of instances in the dataset; this augmentation reduces overfitting in the network. Second, Triangular Bluestein Mel Frequency Cepstral Coefficients (TB-MFCC) are proposed and fused with Zero Crossing Rate (ZCR), Mel-frequency cepstral coefficients (MFCC), Root Mean Square (RMS), Chroma, and Tempogram features to extract discriminative features, which increases accuracy and reduces the model's Mean Squared Error (MSE) during classification. Finally, the proposed Multi-stacked Convolutional Neural Network (MCNN) with Attention-based Bidirectional Long Short-Term Memory (A-BiLSTM) captures the nonlinear relationships among the features. The proposed Pooled Multifuse Feature Augmentation (PMFA) with MCNN and A-BiLSTM increases accuracy (98.66%), reduces the False Positive Rate (FPR) by 1.01%, and achieves a loss of 0%. Thus the model predicts the sound without overfitting, underfitting, or vanishing gradient problems.
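To make the fused hand-crafted features concrete, the following is a minimal NumPy sketch of two of the features the abstract names (ZCR and RMS) and two of the waveform augmentations (noise injection and shifting). The function names, frame sizes, and noise factor are illustrative assumptions, not the authors' implementation; MFCC, Chroma, and Tempogram extraction (typically computed with an audio library such as librosa) is omitted here.

```python
import numpy as np

def add_noise(signal, noise_factor=0.005):
    # Noise injection: add Gaussian noise scaled by noise_factor
    # (noise_factor is an illustrative default, not from the paper).
    return signal + noise_factor * np.random.randn(len(signal))

def time_shift(signal, shift):
    # Shifting: roll the waveform by `shift` samples.
    return np.roll(signal, shift)

def zero_crossing_rate(frame):
    # ZCR: fraction of adjacent sample pairs whose sign differs.
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

def rms(frame):
    # RMS: root-mean-square energy of a frame.
    return np.sqrt(np.mean(frame ** 2))

def frame_features(signal, frame_len=2048, hop=512):
    # Slide a window over the waveform and stack per-frame [ZCR, RMS];
    # a full pipeline would concatenate MFCC/Chroma/Tempogram columns too.
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        feats.append([zero_crossing_rate(frame), rms(frame)])
    return np.array(feats)
```

For example, a 440 Hz sine wave at a 22,050 Hz sample rate yields a per-frame ZCR of about 2 x 440 / 22050 ≈ 0.04 and an RMS of 1/√2 ≈ 0.707, which is a quick sanity check for the two columns.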

