Article

Jointly Aligning and Predicting Continuous Emotion Annotations

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 12, Issue 4, Pages 1069-1083

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAFFC.2019.2917047

Keywords

Delays; Emotion recognition; Convolution; Acoustics; Feature extraction; Predictive models; Acoustic measurements; Continuous emotion recognition; convolutional neural networks; delayed sinc layer; multi-delay sinc network

Funding

  1. National Science Foundation [NSF CAREER 1651740]
  2. National Institutes of Health [R34MH100404, R21MH114835, UL1TR002240]
  3. HC Prechter Bipolar Program
  4. Richard Tam Foundation

Abstract

In this study, a new convolutional neural network called multi-delay sinc network is introduced, which can align and predict emotion labels simultaneously. By utilizing delayed sinc layers, the network is able to learn time-varying delays and achieve state-of-the-art speech results when predicting dimensional descriptors of emotions.
Time-continuous dimensional descriptions of emotions (e.g., arousal, valence) allow researchers to characterize short-time changes and to capture long-term trends in emotion expression. However, continuous emotion labels are generally not synchronized with the input speech signal due to delays caused by reaction-time, which is inherent in human evaluations. To deal with this challenge, we introduce a new convolutional neural network (multi-delay sinc network) that is able to simultaneously align and predict labels in an end-to-end manner. The proposed network is a stack of convolutional layers followed by an aligner network that aligns the speech signal and emotion labels. This network is implemented using a new convolutional layer that we introduce, the delayed sinc layer. It is a time-shifted low-pass (sinc) filter that uses a gradient-based algorithm to learn a single delay. Multiple delayed sinc layers can be used to compensate for a non-stationary delay that is a function of the acoustic space. We test the efficacy of this system on two common emotion datasets, RECOLA and SEWA, and show that this approach obtains state-of-the-art speech-only results by learning time-varying delays while predicting dimensional descriptors of emotions.
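The delayed sinc layer described in the abstract is a time-shifted low-pass (sinc) filter whose delay is a continuous, learnable parameter. The following NumPy sketch illustrates the core idea only; the function names, the Hamming window, and the specific cutoff are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def delayed_sinc_kernel(tau, cutoff=0.25, half_width=32):
    """Windowed low-pass (sinc) kernel shifted by `tau` samples.

    `tau` may be fractional, and the kernel is a differentiable
    function of `tau`, so a gradient-based optimizer could learn the
    delay, as the paper describes. Window and cutoff are assumptions.
    """
    n = np.arange(-half_width, half_width + 1, dtype=float)
    # Ideal low-pass filter, delayed by tau samples.
    h = 2.0 * cutoff * np.sinc(2.0 * cutoff * (n - tau))
    # Hamming window truncates the ideal (infinite) filter gracefully.
    h *= np.hamming(len(n))
    return h / h.sum()  # normalize DC gain to 1

def apply_delay(signal, tau, cutoff=0.25):
    """Delay `signal` by `tau` samples via the shifted sinc kernel."""
    h = delayed_sinc_kernel(tau, cutoff)
    return np.convolve(signal, h, mode="same")

# A low-frequency sine passed through the layer emerges shifted by
# roughly tau samples: y[n] approximates x[n - 5].
x = np.sin(2 * np.pi * 0.01 * np.arange(200))
y = apply_delay(x, tau=5.0)
```

In the paper, multiple such layers (the "multi-delay sinc network") compensate for a non-stationary, reaction-time-induced delay between speech and its continuous emotion labels.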
