Article

End-to-end music emotion variation detection using iteratively reconstructed deep features

Journal

MULTIMEDIA TOOLS AND APPLICATIONS
Volume 81, Issue 4, Pages 5017-5031

Publisher

SPRINGER
DOI: 10.1007/s11042-021-11584-7

Keywords

Music emotion recognition; Arousal; Valence; End-to-end deep learning; Bi-directional gated recurrent unit; Iterative reconstruction


This study proposes a deep neural network-based solution for automatic music emotion recognition that extracts emotion features directly from the raw audio waveform and achieves high regression accuracy on the DEAM dataset.
Automatic music emotion recognition (MER) has received increasing attention in music information retrieval and user interface development. Music emotion variation detection (or dynamic MER) also captures temporal changes of emotion, expressing the emotional content of music as a series of valence-arousal predictions. One of the central issues in MER is the extraction of emotional characteristics from the audio signal. We propose a deep neural network-based solution for mining music emotion-related salient features directly from the raw audio waveform. The proposed architecture stacks a one-dimensional convolutional layer, an autoencoder-based layer with iterative reconstruction, and a bidirectional gated recurrent unit. Tests on the DEAM dataset show that, in comparison with other state-of-the-art systems, the proposed solution brings a significant improvement in regression accuracy, notably for the valence dimension. The proposed iterative reconstruction layer is shown to enhance the discriminative properties of the features and further increase regression accuracy.
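
Below is a minimal PyTorch sketch of the pipeline the abstract describes: a one-dimensional convolutional front end on raw audio, an autoencoder-style layer that repeatedly re-encodes its own reconstruction, and a bidirectional GRU producing per-frame valence-arousal predictions. All layer sizes, the kernel and stride values, and the number of reconstruction iterations are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn

class IterativeReconstruction(nn.Module):
    """Autoencoder-style layer that re-encodes its own reconstruction
    a fixed number of times (n_iter is an assumed hyperparameter)."""
    def __init__(self, dim, hidden, n_iter=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)
        self.n_iter = n_iter

    def forward(self, x):
        # x: (batch, frames, dim)
        for _ in range(self.n_iter):
            z = self.encoder(x)
            x = self.decoder(z)   # reconstruction becomes the next input
        return self.encoder(x)    # final latent features

class DynamicMER(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        # 1-D convolution learns frame-level features from the raw waveform;
        # kernel/stride values are illustrative.
        self.frontend = nn.Sequential(
            nn.Conv1d(1, feat_dim, kernel_size=512, stride=256), nn.ReLU(),
        )
        self.recon = IterativeReconstruction(feat_dim, hidden)
        self.bigru = nn.GRU(hidden, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # per-frame (valence, arousal)

    def forward(self, wav):
        # wav: (batch, samples) raw audio
        feats = self.frontend(wav.unsqueeze(1))  # (batch, feat_dim, frames)
        feats = feats.transpose(1, 2)            # (batch, frames, feat_dim)
        latent = self.recon(feats)               # (batch, frames, hidden)
        seq, _ = self.bigru(latent)              # (batch, frames, 2*hidden)
        return self.head(seq)                    # (batch, frames, 2)

model = DynamicMER()
pred = model(torch.randn(4, 22050))              # 1 s of 22.05 kHz audio
print(pred.shape)                                # torch.Size([4, 85, 2])

Training such a model for dynamic MER would typically minimize a regression loss (e.g., MSE) between the per-frame outputs and the continuous valence-arousal annotations provided by datasets such as DEAM.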

