Article

Emotionally-Relevant Features for Classification and Regression of Music Lyrics

Journal

IEEE Transactions on Affective Computing
Volume 9, Issue 2, Pages 240-254

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TAFFC.2016.2598569

Keywords

Affective computing; affective computing applications; music retrieval and generation; natural language processing; recognition of group emotion

Funding

  1. MOODetector project - Fundação para a Ciência e a Tecnologia (FCT) [PTDC/EIA-EIA/102185/2008]
  2. Programa Operacional Temático Factores de Competitividade (COMPETE), Portugal
  3. CISUC (Center for Informatics and Systems of the University of Coimbra)

Abstract

This research addresses the role of lyrics in music emotion recognition. Our approach combines several state-of-the-art features with novel stylistic, structural and semantic features. To evaluate it, we created a ground-truth dataset of 180 song lyrics annotated according to Russell's emotion model. We conducted four types of experiments: regression, and classification by quadrant, by arousal and by valence categories. Compared to the state-of-the-art features alone (n-grams, the baseline), adding the other features, including the novel ones, improved the F-measure from 69.9, 82.7 and 85.6 percent to 80.1, 88.3 and 90.0 percent, respectively, for the three classification experiments. To study the relation between features and emotions (quadrants), we performed experiments to identify the features that best describe and discriminate each quadrant. To further validate these experiments, we built a validation set of 771 lyrics extracted from the AllMusic platform, achieving a 73.6 percent F-measure in classification by quadrants. We also conducted experiments to identify interpretable rules that expose the relations between features and emotions, and among the features themselves. Regarding regression, the results show that, compared to similar studies for audio, we achieve similar performance for arousal and much better performance for valence.
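The quadrant-based classification described above rests on Russell's circumplex model, in which each annotation is a point in a valence-arousal plane and the four quadrants group emotions such as happy, angry, sad and calm. A minimal sketch of that mapping follows; the sign convention (scores centred on zero) and the quadrant labels are assumptions for illustration, not details taken from the paper:

```python
def russell_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) annotation to a Russell-model quadrant.

    Assumed convention (not the paper's exact scale):
      Q1: positive valence, high arousal (e.g. happy, excited)
      Q2: negative valence, high arousal (e.g. angry, anxious)
      Q3: negative valence, low arousal  (e.g. sad, depressed)
      Q4: positive valence, low arousal  (e.g. calm, relaxed)
    """
    if arousal >= 0:
        return "Q1" if valence >= 0 else "Q2"
    return "Q4" if valence >= 0 else "Q3"


# Example: a lyric annotated with positive valence but low arousal
# falls into the calm/relaxed quadrant.
print(russell_quadrant(0.6, -0.4))  # Q4
```

With such a mapping, the separate valence and arousal classification experiments reported in the abstract can be seen as the two binary decisions whose combination yields the four-class quadrant task.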
