Proceedings Paper

Convolutional Gated Recurrent Neural Network Incorporating Spatial Features for Audio Tagging

Journal

2017 International Joint Conference on Neural Networks (IJCNN)
Publisher

IEEE
DOI: 10.1109/IJCNN.2017.7966291

Keywords

-

Funding

  1. Engineering and Physical Sciences Research Council (EPSRC) of the UK [EP/N014111/1] (Funding Source: UKRI)
  2. China Scholarship Council (CSC)


Environmental audio tagging is a recently proposed task that predicts the presence or absence of specific audio events within an audio chunk. Deep neural network (DNN) based methods have been successfully adopted for predicting audio tags in domestic audio scenes. In this paper, we propose to use a convolutional neural network (CNN) to extract robust features from mel-filter banks (MFBs), spectrograms, or even raw waveforms for audio tagging. Gated recurrent unit (GRU) based recurrent neural networks (RNNs) are then cascaded to model the long-term temporal structure of the audio signal. To complement the input information, an auxiliary CNN is designed to learn spatial features from stereo recordings. We evaluate the proposed methods on Task 4 (audio tagging) of the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. Compared with our recent DNN-based method, the proposed structure reduces the equal error rate (EER) from 0.13 to 0.11 on the development set, and the spatial features further reduce the EER to 0.10. The performance of end-to-end learning on raw waveforms is also comparable. Finally, on the evaluation set, we achieve state-of-the-art performance with an EER of 0.12, compared with 0.15 for the best existing system.
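For illustration, the following is a minimal PyTorch sketch of a chunk-level CNN-GRU (CRNN) tagger of the kind the abstract describes: a CNN front end over mel-filter bank features, a bidirectional GRU for long-term temporal structure, and per-tag sigmoid outputs. The layer sizes, pooling scheme, 40 mel bands, and 7-tag output (the CHiME-Home label set used in DCASE 2016 Task 4) are assumptions for this sketch, not the exact configuration reported in the paper; the auxiliary spatial-feature CNN and the raw-waveform front end are omitted.

    # Illustrative CNN-GRU (CRNN) audio tagger; hyperparameters are assumptions,
    # not the configuration reported in the paper.
    import torch
    import torch.nn as nn

    class CRNNTagger(nn.Module):
        def __init__(self, n_mels=40, n_tags=7, hidden=128):
            super().__init__()
            # CNN front end: extracts robust local features from mel-filter bank input
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=3, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU(),
                nn.MaxPool2d((1, 2)),            # pool along frequency only, keep time resolution
                nn.Conv2d(64, 64, kernel_size=3, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU(),
                nn.MaxPool2d((1, 2)),
            )
            # Bidirectional GRU models long-term temporal structure over the CNN features
            self.gru = nn.GRU(64 * (n_mels // 4), hidden,
                              batch_first=True, bidirectional=True)
            # Chunk-level multi-label prediction: one sigmoid output per tag
            self.fc = nn.Linear(2 * hidden, n_tags)

        def forward(self, x):
            # x: (batch, time, n_mels) log mel-filter bank features
            x = x.unsqueeze(1)                   # (batch, 1, time, n_mels)
            x = self.cnn(x)                      # (batch, 64, time, n_mels // 4)
            b, c, t, f = x.shape
            x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
            x, _ = self.gru(x)                   # (batch, time, 2 * hidden)
            x = x.mean(dim=1)                    # average over time for chunk-level tags
            return torch.sigmoid(self.fc(x))     # multi-label tag probabilities

    model = CRNNTagger()
    probs = model(torch.randn(2, 240, 40))       # two chunks of 240 frames x 40 mel bands

Training such a model against binary tag labels would typically use a binary cross-entropy loss (e.g. nn.BCELoss), consistent with the multi-label nature of the audio tagging task.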

Authors

Yong Xu, Qiuqiang Kong, Qiang Huang, Wenwu Wang, Mark D. Plumbley
