Proceedings Paper

Inference of user-intention in remote robot wheelchair assistance using multimodal interfaces

Publisher

IEEE
DOI: 10.1109/iros40897.2019.8968203

Keywords

-

Funding

  1. Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) [001]
  2. CEFET-MG
  3. Royal Academy of Engineering (RAEng) Chair in Emerging Technologies

Abstract

Shared control methodologies have the potential to enable wheelchair-bound users with limited motor abilities to perform tasks that would otherwise be beyond their capabilities. Deriving such methodologies in advance is challenging, since they often depend heavily on the unique characteristics of individual users. Learning Assistance by Demonstration paradigms allow customized policies to be derived by recording how remote human assistants assist particular users. However, to accurately determine the optimal policy for each user and context, the remote assistant must infer the driver's intention, which is frequently obscured by noisy control signals arising from the user's motor impairment. In this paper, we propose a multimodal teleoperation interface incorporating map information, haptic feedback, and user eye-gaze data, and examine which of these factors contribute most to accurate inference of user intention in a simulated-tremor experiment. Our study indicates that, for expert assistants, the presence of additional haptic and gaze information increases their ability to accurately infer the user's intention, providing supporting evidence for the utility of multimodal interfaces in remote assistance scenarios for Learning Assistance by Demonstration. Our study also reveals strong individual preferences among the modalities, with performance varying widely depending on whether supplemental eye-gaze or haptic information was provided.
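As a rough illustration of the kind of intention inference the abstract describes, the sketch below simulates tremor-corrupted joystick headings and fuses them with (less noisy) eye-gaze directions to maintain a Bayesian belief over candidate goals. The paper does not publish its inference model; the sinusoidal tremor model, the von Mises-shaped likelihood, and the function names (simulate_tremor, update_goal_belief) are assumptions introduced here for illustration only.

```python
import numpy as np

# Hypothetical sketch (not the paper's method): infer which of several
# candidate goals the driver intends, fusing a tremor-corrupted joystick
# heading with an eye-gaze direction via a simple Bayesian update.

def simulate_tremor(heading, t, amp=0.4, freq=5.0):
    """Corrupt the true joystick heading (radians) with a sinusoidal
    jitter term, a common simplification of pathological tremor."""
    return heading + amp * np.sin(2 * np.pi * freq * t)

def update_goal_belief(belief, observed, goal_headings, kappa=2.0):
    """One Bayesian update: P(goal | obs) ∝ P(obs | goal) P(goal).
    The likelihood is von Mises-shaped in the angular error between the
    observed direction and the bearing of each candidate goal."""
    likelihood = np.exp(kappa * np.cos(observed - goal_headings))
    belief = belief * likelihood
    return belief / belief.sum()

np.random.seed(0)
goal_headings = np.array([-1.0, 0.0, 1.2])   # bearings of three goals (rad)
belief = np.ones(3) / 3                      # uniform prior over goals

true_heading = 1.2                           # driver actually intends goal 2
for step in range(50):                       # 50 Hz control loop, 1 second
    t = step * 0.02
    joystick = simulate_tremor(true_heading, t)
    gaze = true_heading + np.random.normal(0, 0.15)  # gaze assumed less noisy
    belief = update_goal_belief(belief, joystick, goal_headings)
    belief = update_goal_belief(belief, gaze, goal_headings, kappa=4.0)

print("posterior over goals:", np.round(belief, 3))
```

Under these assumptions, the gaze updates concentrate the posterior on the intended goal far faster than the tremor-corrupted joystick input alone would, which is one plausible mechanism behind the paper's finding that supplemental modalities can aid accurate intention inference.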


Reviews

Overall rating

3.8 (insufficient ratings)

Secondary ratings

Novelty: -
Significance: -
Scientific rigor: -