Proceedings Paper

MEC 2016: The Multimodal Emotion Recognition Challenge of CCPR 2016

Journal

PATTERN RECOGNITION (CCPR 2016), PT II
Volume 663, Pages 667-678

Publisher

SPRINGER-VERLAG SINGAPORE PTE LTD
DOI: 10.1007/978-981-10-3005-5_55

Keywords

Audio-visual corpus; Features; Multimodal fusion; Challenge; Emotion; Affective computing

Funding

  1. National High-Tech Research and Development Program of China (863 Program) [2015AA016305]
  2. National Natural Science Foundation of China (NSFC) [61305003, 61425017]
  3. Strategic Priority Research Program of the CAS [XDB02080006]
  4. Major Program for the National Social Science Fund of China [13&ZD189]

Abstract

Emotion recognition is a significant research field in pattern recognition and artificial intelligence. The Multimodal Emotion Recognition Challenge (MEC) is part of the 2016 Chinese Conference on Pattern Recognition (CCPR). The goal of this competition is to compare multimedia processing and machine learning methods for multimodal emotion recognition. The challenge also aims to provide a common benchmark data set, to bring together the audio and video emotion recognition communities, and to promote research in multimodal emotion recognition. The data used in this challenge come from the Chinese Natural Audio-Visual Emotion Database (CHEAVD), which is selected from Chinese movies and TV programs. The discrete emotion labels were annotated by four experienced assistants. Three sub-challenges are defined: audio, video, and multimodal emotion recognition. This paper introduces the baseline audio and visual features, together with the recognition results obtained with Random Forests.
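The abstract states only that the baseline results were obtained with Random Forests trained on precomputed audio and visual features; the paper's actual feature extraction pipeline and toolkit are not specified here. The following is a minimal sketch of such a baseline for a single sub-challenge, assuming fixed-length feature vectors per utterance and using scikit-learn's RandomForestClassifier; the arrays, sizes, and emotion labels below are placeholders, not the CHEAVD data or the organizers' setup.

```python
# Hypothetical Random Forest baseline for one sub-challenge (e.g. audio).
# Feature matrices and labels are synthetic placeholders for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Placeholder feature matrices: one fixed-length feature vector per utterance.
n_train, n_test, n_features = 200, 50, 128   # hypothetical sizes
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))

# Discrete emotion labels (the challenge uses categorical annotations).
emotions = ["angry", "happy", "sad", "neutral"]   # illustrative subset
y_train = rng.choice(emotions, size=n_train)
y_test = rng.choice(emotions, size=n_test)

# Train the Random Forest classifier and report accuracy and macro F1.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print("macro F1:", f1_score(y_test, y_pred, average="macro"))
```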
