Proceedings Paper

A Multimodal-Sensor-Enabled Room for Unobtrusive Group Meeting Analysis

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3242969.3243022

Keywords

Multimodal sensing; smart rooms; time-of-flight sensing; head pose estimation; natural language processing; meeting summarization; group meeting analysis

Funding

  1. National Science Foundation [HP-1631674]
  2. Northeastern University Tier 1 Seed Grant
  3. NSF [IIS-1829325]

Abstract

Group meetings can suffer from serious problems that undermine performance, including bias, groupthink, fear of speaking, and unfocused discussion. To better understand these issues, propose interventions, and thus improve team performance, we need to study human dynamics in group meetings. However, this analysis currently depends heavily on manual coding and video cameras. Manual coding is tedious, inaccurate, and subjective, while active video cameras can affect the natural behavior of meeting participants. Here, we present a smart meeting room that combines microphones and unobtrusive ceiling-mounted Time-of-Flight (ToF) sensors to understand group dynamics in team meetings. We automatically process the multimodal sensor outputs with signal, image, and natural language processing algorithms to estimate participant head pose, visual focus of attention (VFOA), non-verbal speech patterns, and discussion content. We derive metrics from these automatic estimates and correlate them with user-reported rankings of emergent group leaders and major contributors to produce accurate predictors. We validate our algorithms and report results on a new dataset of lunar survival tasks performed by 36 individuals across 10 groups, collected in the multimodal-sensor-enabled smart room.
