3.8 Proceedings Paper

Deep neural networks for emotion recognition combining audio and transcripts

Publisher

ISCA-INT SPEECH COMMUNICATION ASSOC
DOI: 10.21437/Interspeech.2018-2466

Keywords

emotion recognition; deep neural networks; automatic speech recognition

Abstract

In this paper, we propose to improve emotion recognition by combining acoustic information with conversation transcripts. On the one hand, an LSTM network was used to detect emotion from acoustic features such as F0, shimmer, jitter, and MFCCs. On the other hand, a multi-resolution CNN was used to detect emotion from word sequences. This CNN consists of several parallel convolutions with different kernel sizes that exploit contextual information at different levels. A temporal pooling layer aggregates the hidden representations of the individual words into a single sequence-level embedding, from which we computed the emotion posteriors. We optimized a weighted sum of a classification loss and a verification loss; the verification loss brings embeddings of the same emotion closer together while separating embeddings of different emotions. We also compared our CNN with state-of-the-art text-based hand-crafted features (e-vector). We evaluated our approach on the USC-IEMOCAP dataset as well as on a dataset of US English telephone speech, using human-annotated transcripts for the former and ASR transcripts for the latter. The results showed that fusing audio and transcript information improved unweighted accuracy by a relative 24% on IEMOCAP and a relative 3.4% on the telephone data compared to a single acoustic system.
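The text branch described in the abstract can be summarized with a short sketch. The following is a minimal illustration, assuming a PyTorch implementation; the kernel sizes, embedding dimension, number of filters, loss weight, and the contrastive form of the verification loss are illustrative assumptions rather than the authors' exact configuration. It shows the parallel convolutions over word embeddings, the temporal pooling into a single sequence-level embedding, and the weighted sum of classification and verification losses.

```python
# Minimal sketch of the multi-resolution text CNN (assumed PyTorch;
# all hyperparameters below are illustrative, not the paper's values).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionTextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_filters=128,
                 kernel_sizes=(2, 3, 4, 5), num_emotions=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Parallel convolutions with different kernel sizes capture
        # context at different resolutions (word pairs, trigrams, ...).
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
            for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_emotions)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) integer word indices
        x = self.embedding(word_ids).transpose(1, 2)        # (batch, embed, seq)
        hidden = [F.relu(conv(x)) for conv in self.convs]   # per-resolution features
        # Temporal pooling aggregates per-word activations into one
        # sequence-level embedding per utterance.
        pooled = [h.mean(dim=2) for h in hidden]
        embedding = torch.cat(pooled, dim=1)                 # (batch, filters * n_kernels)
        logits = self.classifier(embedding)                  # emotion posteriors (pre-softmax)
        return embedding, logits

def combined_loss(embedding, logits, labels, margin=1.0, alpha=0.5):
    """Weighted sum of a classification loss and a contrastive-style
    verification loss that pulls same-emotion embeddings together and
    pushes different-emotion embeddings at least `margin` apart."""
    cls_loss = F.cross_entropy(logits, labels)
    dists = torch.cdist(embedding, embedding)                # pairwise distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    ver_loss = (same * dists.pow(2)
                + (1 - same) * F.relu(margin - dists).pow(2)).mean()
    return cls_loss + alpha * ver_loss
```

In the full system described in the abstract, an LSTM over the acoustic features (F0, shimmer, jitter, MFCCs) would produce a second set of emotion posteriors, and the two branches would be fused, for example at the score level; the abstract does not specify the exact fusion scheme, so that detail is left open here.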

