3.8 Proceedings Paper

An Enhanced Adversarial Network with Combined Latent Features for Spatio-temporal Facial Affect Estimation in the Wild

Publisher

SCITEPRESS
DOI: 10.5220/0010332001720181

Keywords

Affective Computing; Temporal Modelling; Adversarial Learning

Funding

  1. Spanish Ministry of Economy and Competitiveness [TIN2017-90124-P]
  2. Maria de Maeztu Units of Excellence Programme [MDM-2015-0502]
  3. European Union [826506]

Abstract

Affective Computing has recently attracted the attention of the research community, due to its numerous applications in diverse areas. In this context, the emergence of video-based data makes it possible to enrich the widely used spatial features with temporal information. However, such spatio-temporal modelling often results in very high-dimensional feature spaces and large volumes of data, making training difficult and time-consuming. This paper addresses these shortcomings by proposing a novel model that efficiently extracts both spatial and temporal features of the data by means of its enhanced temporal modelling based on latent features. Our proposed model consists of three major networks, coined Generator, Discriminator, and Combiner, which are trained in an adversarial setting combined with curriculum learning to enable our adaptive attention modules. In our experiments, we show the effectiveness of our approach by reporting competitive results on both the AFEW-VA and SEWA datasets, suggesting that temporal modelling improves the affect estimates in both qualitative and quantitative terms. Furthermore, we find that the inclusion of attention mechanisms leads to the largest accuracy improvements, as their weights seem to correlate well with the appearance of facial movements, both in terms of temporal localisation and intensity. Finally, we observe a sequence length of around 160 ms to be optimal for temporal modelling, which is consistent with other relevant findings utilising similar lengths.
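
The abstract names a Generator, a Discriminator, and a Combiner trained adversarially, with attention weighting the temporal latents, but gives no implementation detail. The PyTorch sketch below shows one plausible wiring of such a setup; every module name, layer size, and the two-dimensional valence/arousal output are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch only: a three-network arrangement (Generator, Combiner,
# Discriminator) with soft temporal attention over per-frame latent features.
# Layer sizes, losses, and output dimensions are assumptions, not the paper's.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps per-frame spatial features to latent vectors."""
    def __init__(self, in_dim=512, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, frames):            # frames: (batch, time, in_dim)
        return self.encoder(frames)       # latents: (batch, time, latent_dim)

class Combiner(nn.Module):
    """Pools latents over time with learned attention and regresses affect."""
    def __init__(self, latent_dim=128, out_dim=2):  # e.g. valence and arousal
        super().__init__()
        self.attention = nn.Linear(latent_dim, 1)
        self.regressor = nn.Linear(latent_dim, out_dim)

    def forward(self, latents):
        weights = torch.softmax(self.attention(latents), dim=1)  # over time
        pooled = (weights * latents).sum(dim=1)
        return self.regressor(pooled), weights

class Discriminator(nn.Module):
    """Scores a pooled latent as coming from a real vs. generated sequence."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, pooled):
        return self.score(pooled)

# Tiny forward pass: 4 clips of 4 frames each (4 frames ~ 160 ms if the video
# runs at 25 fps; the frame rate is an assumption).
frames = torch.randn(4, 4, 512)
latents = Generator()(frames)
affect, attn = Combiner()(latents)        # affect: (4, 2), attn: (4, 4, 1)
realism = Discriminator()(latents.mean(dim=1))
```

In the adversarial setting described in the abstract, the Discriminator's score would push the Generator's latents toward the distribution of real sequences, while the Combiner's attention weights provide the temporal localisation the authors analyse; the curriculum-learning schedule is not reproduced in this sketch.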

