Article

Object affordance based multimodal fusion for natural Human-Robot interaction

Journal

COGNITIVE SYSTEMS RESEARCH
Volume 54, Pages 128-137

Publisher

ELSEVIER
DOI: 10.1016/j.cogsys.2018.12.010

Keywords

Object affordance recognition; Multimodal fusion; Natural Human-Robot interaction

Funding

  1. German Research Foundation (DFG)
  2. National Natural Science Foundation of China (NSFC) in project Crossmodal Learning [Sonderforschungsbereich Transregio 169]
  3. DAAD German Academic Exchange Service under CASY project
  4. Horizon 2020 RISE project STEP2DYNA [691154]


Spoken-language-based natural Human-Robot Interaction (HRI) requires robots to understand spoken language and to extract intention-related information from the working scenario. Object affordance recognition is a feasible way to ground the intention-related object in the working environment. To this end, we propose a dataset and a deep CNN based architecture for learning human-centered object affordances. Furthermore, we present an affordance-based multimodal fusion framework that grasps the intended object according to the spoken instructions of human users. The proposed framework contains an intention semantics extraction module that extracts the intention from spoken language, a deep Convolutional Neural Network (CNN) based object affordance recognition module that recognizes human-centered object affordances, and a multimodal fusion module that bridges the extracted intentions and the recognized object affordances. We also conduct multiple intended-object grasping experiments on a PR2 platform to validate the feasibility and practicability of the presented HRI framework. (C) 2018 Elsevier B.V. All rights reserved.
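
The abstract describes a three-stage pipeline: intention semantics extraction from spoken language, CNN-based human-centered affordance recognition, and multimodal fusion that matches the extracted intention to a recognized affordance before grasping. The minimal Python sketch below only illustrates how such stages could be wired together; all class names, the keyword table, and the stubbed detections are hypothetical placeholders and do not reproduce the paper's dataset, CNN architecture, or PR2 integration.

    # Hypothetical sketch of the three-module pipeline; not the paper's implementation.
    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class AffordanceDetection:
        label: str      # recognized human-centered affordance, e.g. "drink"
        bbox: tuple     # (x, y, w, h) of the detected object in the image
        score: float    # recognition confidence


    def extract_intention(utterance: str) -> str:
        """Intention semantics extraction (placeholder keyword matching).

        The paper uses a spoken-language understanding module; this stub only
        illustrates the interface: utterance in, intended affordance out."""
        keywords = {"thirsty": "drink", "drink": "drink", "cut": "cut", "write": "write"}
        for word, affordance in keywords.items():
            if word in utterance.lower():
                return affordance
        return "unknown"


    def recognize_affordances(image) -> List[AffordanceDetection]:
        """Object affordance recognition (stub standing in for the deep CNN module)."""
        # A real system would run a trained CNN over the camera image here.
        return [
            AffordanceDetection("drink", (120, 80, 60, 90), 0.93),
            AffordanceDetection("cut", (300, 110, 40, 120), 0.88),
        ]


    def fuse_and_select(intention: str,
                        detections: List[AffordanceDetection]) -> Optional[AffordanceDetection]:
        """Multimodal fusion: match the spoken intention to a recognized affordance."""
        candidates = [d for d in detections if d.label == intention]
        return max(candidates, key=lambda d: d.score) if candidates else None


    if __name__ == "__main__":
        target = fuse_and_select(extract_intention("I am thirsty"),
                                 recognize_affordances(None))
        print(target)  # -> the detection whose affordance matches the intention

In this toy example the utterance "I am thirsty" is mapped to the "drink" affordance, and the fusion step selects the highest-scoring detection carrying that affordance as the grasp target.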
