Journal
NEURAL INFORMATION PROCESSING, ICONIP 2016, PT II
Volume 9948, Pages 521-529
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
DOI: 10.1007/978-3-319-46672-9_58
Keywords
EEG; Emotion recognition; Multimodal deep learning; Auto-encoder
To enhance the performance of affective models and reduce the cost of acquiring physiological signals in real-world applications, we adopt a multimodal deep learning approach to construct affective models on the SEED and DEAP datasets and recognize different kinds of emotions. We demonstrate that the high-level representation features extracted by the Bimodal Deep AutoEncoder (BDAE) are effective for emotion recognition. With the BDAE network, we achieve mean accuracies of 91.01% and 83.25% on the SEED and DEAP datasets, respectively, which are superior to those of the state-of-the-art approaches. By analysing the confusion matrices, we find that EEG and eye features contain complementary information and that the BDAE network can fully exploit this complementarity to enhance emotion recognition.
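As a rough illustration of the fusion idea described in the abstract, the sketch below wires two modality-specific encoders (EEG and eye features) into one shared layer whose activations serve as the fused representation, then mirrors the path back to reconstruct both inputs. This is a minimal, untrained forward pass only; the layer sizes, the single shared layer, and the `tanh` activations are illustrative assumptions, not the exact BDAE configuration reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights and zero biases for an untrained dense layer.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

def dense(x, w, b):
    return np.tanh(x @ w + b)

# Feature dimensions are placeholders, not the datasets' actual sizes.
EEG_DIM, EYE_DIM, HID, SHARED = 128, 32, 64, 50

# Encoders: one hidden layer per modality, then a shared fusion layer.
w_eeg, b_eeg = layer(EEG_DIM, HID)
w_eye, b_eye = layer(EYE_DIM, HID)
w_sh,  b_sh  = layer(2 * HID, SHARED)

# Decoders: mirror the shared layer back to each modality.
w_dsh,  b_dsh  = layer(SHARED, 2 * HID)
w_deeg, b_deeg = layer(HID, EEG_DIM)
w_deye, b_deye = layer(HID, EYE_DIM)

def bdae_forward(x_eeg, x_eye):
    # Encode each modality separately, then fuse in the shared layer.
    h = np.concatenate([dense(x_eeg, w_eeg, b_eeg),
                        dense(x_eye, w_eye, b_eye)], axis=1)
    z = dense(h, w_sh, b_sh)  # fused high-level representation
    # Decode back to both modalities (reconstruction targets in training).
    h_dec = dense(z, w_dsh, b_dsh)
    x_eeg_rec = dense(h_dec[:, :HID], w_deeg, b_deeg)
    x_eye_rec = dense(h_dec[:, HID:], w_deye, b_deye)
    return z, x_eeg_rec, x_eye_rec

z, eeg_rec, eye_rec = bdae_forward(rng.normal(size=(4, EEG_DIM)),
                                   rng.normal(size=(4, EYE_DIM)))
print(z.shape, eeg_rec.shape, eye_rec.shape)  # (4, 50) (4, 128) (4, 32)
```

In a full pipeline the network would be trained to minimise the reconstruction error of both modalities, and the shared activations `z` would then be fed to a downstream classifier (the paper uses them as features for emotion recognition).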