Journal
IMAGE AND VISION COMPUTING
Volume 31, Issue 2, Pages 164-174
Publisher
ELSEVIER
DOI: 10.1016/j.imavis.2012.10.002
Keywords
Emotion classification; EEG; Facial expressions; Signal processing; Pattern classification; Affective computing
Funding
- Engineering and Physical Sciences Research Council [EP/G033935/1] Funding Source: researchfish
- EPSRC [EP/G033935/1] Funding Source: UKRI
Abstract
The explosion of user-generated, untagged multimedia data in recent years generates a strong need for efficient search and retrieval of this data. The predominant method for content-based tagging is slow, labor-intensive manual annotation. Consequently, automatic tagging is currently a subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to the interaction with the multimedia content are analyzed in order to generate descriptive tags. Here, we present a multi-modal approach that analyzes both facial expressions and electroencephalography (EEG) signals for the generation of affective tags. We perform classification and regression in the valence-arousal space and present results for both feature-level and decision-level fusion. We demonstrate improvement in the results when using both modalities, suggesting the modalities contain complementary information. (C) 2012 Elsevier B.V. All rights reserved.
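The abstract contrasts feature-level and decision-level fusion of the two modalities. A minimal sketch of the two strategies follows; all feature values, posteriors, and the equal-weight averaging rule are illustrative assumptions, not the paper's actual classifiers or data:

```python
import numpy as np

# Hypothetical per-modality posteriors over two valence classes
# (low, high) from a facial-expression classifier and an EEG
# classifier. The numbers are illustrative only.
p_face = np.array([0.30, 0.70])  # facial-expression classifier
p_eeg = np.array([0.45, 0.55])   # EEG classifier

# Decision-level fusion: combine the classifiers' outputs,
# here via a simple equal-weight average of the posteriors.
p_fused = (p_face + p_eeg) / 2.0
decision_label = int(np.argmax(p_fused))  # 1 -> "high valence"

# Feature-level fusion: concatenate the modalities' feature
# vectors and train a single classifier on the joint vector.
f_face = np.array([0.1, 0.8, 0.3])  # illustrative facial features
f_eeg = np.array([2.4, 1.1])        # illustrative EEG band powers
joint_features = np.concatenate([f_face, f_eeg])  # length 5

print(decision_label)        # 1
print(joint_features.shape)  # (5,)
```

In practice the fusion weights and the downstream classifier would be learned; this sketch only shows where the two fusion points sit in the pipeline.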