4.7 Article

A Transfer Learning Approach to Cross-Modal Object Recognition: From Visual Observation to Robotic Haptic Exploration

Journal

IEEE TRANSACTIONS ON ROBOTICS
Volume 35, Issue 4, Pages 987-998

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TRO.2019.2914772

Keywords

Cross-modal object recognition; robotic manipulation; tactile perception; visual perception

Category

Funding

  1. EU [659265]
  2. Marie Curie Actions (MSCA) [659265]

Abstract

In this paper, we introduce the problem of cross-modal visuo-tactile object recognition with robotic active exploration. By this we mean that the robot first observes a set of objects with visual perception and is later able to recognize those objects through tactile exploration alone, without having touched any of them before. In machine learning terms, our application has a visual training set and a tactile test set, or vice versa. To tackle this problem, we propose an approach consisting of four steps: finding a visuo-tactile common representation, defining a suitable set of features, transferring the features across the domains, and classifying the objects. We show the results of our approach on a set of 15 objects, collecting 40 visual examples and five tactile examples per object. The proposed approach achieves an accuracy of 94.7%, which is comparable to the accuracy of the monomodal case, i.e., when visual data are used as both training set and test set. Moreover, it compares well with human performance, which we roughly estimated in an experiment with ten participants.
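The four-step pipeline described in the abstract (common representation, features, cross-domain transfer, classification) can be pictured with a short Python sketch on synthetic data. Everything below is an illustrative assumption rather than the paper's actual method: the feature vectors are random stand-ins for a shared visuo-tactile representation, the transfer step uses a generic CORAL-style covariance alignment, and the classifier is an off-the-shelf SVM.

# Illustrative sketch only: synthetic features, CORAL-style alignment, and an
# SVM stand in for the paper's representation, transfer method, and classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_objects, n_visual, n_tactile, n_features = 15, 40, 5, 64

# Shared class structure plus a simulated modality gap between vision and touch.
class_centres = rng.normal(size=(n_objects, n_features)) * 3.0
tactile_shift = rng.normal(size=n_features)

def synthetic_features(n_per_class, modality_shift):
    # Steps 1-2 (assumed done): each observation is already mapped to a common
    # representation and summarised by a fixed-length feature vector.
    X, y = [], []
    for obj in range(n_objects):
        samples = class_centres[obj] + modality_shift \
            + rng.normal(size=(n_per_class, n_features))
        X.append(samples)
        y.extend([obj] * n_per_class)
    return np.vstack(X), np.array(y)

X_vis, y_vis = synthetic_features(n_visual, np.zeros(n_features))
X_tac, y_tac = synthetic_features(n_tactile, tactile_shift)

def coral_align(source, target):
    # Step 3 (stand-in): CORAL-style alignment -- whiten the source features
    # with their own covariance, then re-colour them with the target covariance.
    cs = np.cov(source, rowvar=False) + np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + np.eye(target.shape[1])
    whiten = np.linalg.inv(np.linalg.cholesky(cs))
    colour = np.linalg.cholesky(ct)
    return (source - source.mean(axis=0)) @ whiten.T @ colour.T + target.mean(axis=0)

# Align the tactile test features to the visual training domain.
X_tac_aligned = coral_align(X_tac, X_vis)

# Step 4: train on the visual set, test on the aligned tactile set.
clf = SVC(kernel="rbf", gamma="scale").fit(X_vis, y_vis)
print("cross-modal accuracy:", clf.score(X_tac_aligned, y_tac))

Aligning the tactile test features to the visual training domain, rather than retraining, mirrors the cross-modal setting in the abstract, where no tactile examples of the objects are available at training time; the alignment here is purely unsupervised.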
