Journal
JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING
Volume 8, Issue 6, Pages 913-924
Publisher
SPRINGER HEIDELBERG
DOI: 10.1007/s12652-016-0406-z
Keywords
Audio-visual database; Natural emotion; Corpus annotation; LSTM; Multimodal emotion recognition
Funding
- National High-Tech Research and Development Program of China (863 Program) [2015AA016305]
- National Natural Science Foundation of China (NSFC) [61305003, 61425017]
- Strategic Priority Research Program of the CAS [XDB02080006]
- Major Program for the National Social Science Fund of China [13ZD189]
This paper presents a recently collected natural, multimodal, richly annotated emotion database, the CASIA Chinese Natural Emotional Audio-Visual Database (CHEAVD), which aims to provide a basic resource for research on multimodal multimedia interaction. The corpus contains 140 min of emotional segments extracted from films, TV plays and talk shows. Its 238 speakers, ranging in age from children to the elderly, give the database broad speaker diversity, making it a valuable addition to existing emotional databases. In total, 26 non-prototypical emotional states, including the six basic ones, are labeled by four native speakers. In contrast to other existing emotional databases, we provide multi-emotion labels and fake/suppressed-emotion labels. To the best of our knowledge, this database is the first large-scale Chinese natural emotion corpus dealing with multimodal and natural emotion, and it is free for research use. Automatic emotion recognition with Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) is performed on this corpus. Experiments show that an average accuracy of 56% can be achieved on the six major emotion states.
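The abstract's recognition experiments rely on LSTM-RNNs, which model sequences of acoustic/visual features through gated cell updates. As a minimal illustrative sketch (not the authors' implementation; all weights, dimensions, and the scalar feature sequence below are hypothetical), one forward step of an LSTM cell can be written in pure Python:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One scalar LSTM cell step (illustrative only).

    W maps each gate name to an (input-weight, hidden-weight, bias)
    triple: input gate i, forget gate f, output gate o, candidate g.
    """
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])
    c = f * c_prev + i * g      # new cell state: forget old, admit new
    h = o * math.tanh(c)        # new hidden state, gated by the output gate
    return h, c

# Hypothetical toy weights and a short feature sequence.
weights = {k: (0.5, 0.5, 0.0) for k in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for x in [0.1, 0.4, -0.2]:
    h, c = lstm_step(x, h, c, weights)
```

In a full recognizer, vectors replace the scalars, the final hidden state feeds a softmax over the emotion classes, and the gates let the network retain emotional cues across long utterances, which is the property that motivates LSTM over plain RNNs here.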