Proceedings Paper

Multi-Modal Interaction for Space Telescience of Fluid Experiments

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3293663.3293672

Keywords

Space telescience; Space fluid experiment; Single-channel speech separation; Gesture recognition

Funding

  1. Key Laboratory of Space Utilization, Chinese Academy of Sciences [Y7031661SY]
  2. National Natural Science Foundation of China [61502463]
  3. Research Fund of the Manned Space Engineering [18051030301]

Abstract

In this paper, a novel multi-modal interaction strategy for sequential multi-step operation processes in space telescience experiments is proposed to provide a realistic 'virtual presence' and a natural human-computer interface at the telescience ground facility. Because fluid behaves differently in space than on the ground, the fluid in space is first modeled with data-driven, physics-based dynamic particles and rendered as a 3D stereoscopic scene in a CAVE and on the Oculus Rift. A single-channel speech separation method based on Deep Clustering with local optimization is then proposed to recover two or more individual speech signals from a mixed speech environment. Speech recognition and speech synthesis are also implemented so that telecommands can be issued by voice. Next, a hierarchical task-command interaction scheme and a recognition algorithm for a set of intuitive hand gestures, captured with Leap Motion for somatosensory control, are proposed to reduce mental workload. Finally, the above interaction interfaces are integrated into the telescience experiment system. The results show that the proposed multi-modal interaction method provides a more efficient, natural, and intuitive user experience than traditional interaction.
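The abstract does not give the details of the Deep Clustering pipeline, but the general deep-clustering approach to single-channel separation can be sketched as follows: a network assigns each time-frequency bin of the mixture spectrogram an embedding vector, the embeddings are clustered (typically with k-means), and each cluster's bins become a binary mask for one source. The sketch below is illustrative only: random embeddings stand in for a trained network's output, and the function names (`kmeans`, `separate`) are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on row vectors X of shape (n, d); returns labels (n,)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers from current assignments.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

def separate(mix_spec, embeddings, n_sources=2):
    """Deep-clustering-style separation: cluster per-TF-bin embeddings,
    turn cluster labels into binary masks, and mask the mixture."""
    T, F = mix_spec.shape
    labels = kmeans(embeddings.reshape(T * F, -1), n_sources).reshape(T, F)
    return [mix_spec * (labels == j) for j in range(n_sources)]

# Toy demo: random "embeddings" stand in for a trained network's output.
rng = np.random.default_rng(1)
T, F, D = 20, 64, 8
mix = np.abs(rng.standard_normal((T, F)))   # magnitude spectrogram stand-in
emb = rng.standard_normal((T, F, D))
sources = separate(mix, emb, n_sources=2)
# The binary masks partition the mixture, so the sources sum back to it.
assert np.allclose(sources[0] + sources[1], mix)
```

In the actual method, the embeddings would come from a network trained so that bins dominated by the same speaker have similar embeddings, which is what makes the clustering step recover the individual speech signals.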
