Article

Fusion of facial expressions and EEG for implicit affective tagging

Journal

IMAGE AND VISION COMPUTING
Volume 31, Issue 2, Pages 164-174

Publisher

ELSEVIER
DOI: 10.1016/j.imavis.2012.10.002

Keywords

Emotion classification; EEG; Facial expressions; Signal processing; Pattern classification; Affective computing

Funding

  1. Engineering and Physical Sciences Research Council [EP/G033935/1] Funding Source: researchfish
  2. EPSRC [EP/G033935/1] Funding Source: UKRI

Abstract

The explosion of user-generated, untagged multimedia data in recent years has created a strong need for efficient search and retrieval of this data. The predominant method for content-based tagging is slow, labor-intensive manual annotation. Consequently, automatic tagging is currently a subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to their interaction with the multimedia content are analyzed in order to generate descriptive tags. Here, we present a multi-modal approach that analyzes both facial expressions and electroencephalography (EEG) signals for the generation of affective tags. We perform classification and regression in the valence-arousal space and present results for both feature-level and decision-level fusion. We demonstrate improved results when both modalities are used, suggesting that they contain complementary information. (C) 2012 Elsevier B.V. All rights reserved.
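
As a rough illustration of the two fusion strategies named in the abstract, the Python sketch below contrasts feature-level fusion (concatenating EEG and facial-expression features before a single classifier) with decision-level fusion (training one classifier per modality and combining their outputs). The synthetic data, feature dimensions, SVM classifier, and probability-averaging rule are illustrative assumptions for this sketch, not the setup reported in the paper.

    # Minimal sketch of feature-level vs. decision-level fusion for binary valence tagging,
    # using synthetic stand-ins for EEG and facial-expression features.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_samples = 200
    eeg_features = rng.normal(size=(n_samples, 32))      # e.g. per-channel EEG band-power features (assumed)
    face_features = rng.normal(size=(n_samples, 20))      # e.g. facial action-unit activations (assumed)
    valence_labels = rng.integers(0, 2, size=n_samples)   # binary low/high valence tags

    # Feature-level fusion: concatenate both modalities and train a single classifier.
    fused = np.hstack([eeg_features, face_features])
    clf_feature_level = SVC(probability=True).fit(fused, valence_labels)
    feature_level_pred = clf_feature_level.predict(fused)

    # Decision-level fusion: train one classifier per modality,
    # then combine their posterior probabilities (here: simple averaging).
    clf_eeg = SVC(probability=True).fit(eeg_features, valence_labels)
    clf_face = SVC(probability=True).fit(face_features, valence_labels)
    p_eeg = clf_eeg.predict_proba(eeg_features)[:, 1]
    p_face = clf_face.predict_proba(face_features)[:, 1]
    decision_level_pred = ((p_eeg + p_face) / 2 > 0.5).astype(int)

In practice the combination rule for decision-level fusion (averaging, weighting, or a meta-classifier) and the choice of regressor or classifier for the valence-arousal space are design decisions; the paper evaluates both fusion levels rather than prescribing one.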
