Proceedings Paper

3D Semantic Label Transfer in Human-Robot Collaboration

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ICCVW54120.2021.00294

Keywords

-

Funding

  1. Hungarian Ministry of Innovation and Technology NRDI Office

Abstract

We tackle two practical problems in robotic scene understanding. First, the computational requirements of current semantic segmentation algorithms are prohibitive for typical robots. Second, the viewpoints of ground robots are quite different from the typical human viewpoints of training datasets, which may lead to misclassified objects when seen from the robots' viewpoints. We present a system for sharing and reusing 3D semantic information between multiple agents with different viewpoints. We first co-localize all agents in the same coordinate system. Next, we create a dense 3D semantic model of the space from human viewpoints in close to real time. Finally, by re-rendering the model's semantic labels (and/or depth maps) from the ground robots' own estimated viewpoints and sharing them over the network, we can give 3D semantic understanding to simpler agents. We evaluate the reconstruction quality and show how tiny robots can reuse knowledge about the space collected by more capable peers.
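To make the re-rendering step concrete, the sketch below projects a semantically labeled 3D point cloud, held in the shared world frame, into a ground robot's camera using its estimated pose and pinhole intrinsics, producing a label image and a depth map that could be shared over the network. This is a minimal illustration, not the authors' implementation; the function name, the point-cloud representation of the model, and all parameters are assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's implementation): render semantic
# labels and depth of a shared, labeled 3D point cloud from a ground robot's
# estimated viewpoint via pinhole projection with a simple z-buffer.
import numpy as np

def render_semantic_labels(points_w, labels, T_cam_from_world, K, height, width):
    """Project labeled 3D points (world frame) into the robot's camera.

    points_w         : (N, 3) points of the shared semantic model, world frame
    labels           : (N,)   integer semantic class id per point
    T_cam_from_world : (4, 4) rigid transform from world to robot camera frame
    K                : (3, 3) robot camera intrinsics
    Returns a (height, width) label image (-1 = no label) and a depth map.
    """
    # Transform model points into the robot camera frame.
    pts_h = np.hstack([points_w, np.ones((points_w.shape[0], 1))])
    pts_c = (T_cam_from_world @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = pts_c[:, 2] > 0.05
    pts_c, lbl = pts_c[in_front], labels[in_front]

    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_c.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_c[:, 2]

    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, lbl = u[inside], v[inside], z[inside], lbl[inside]

    # Z-buffer by drawing far points first so nearer points overwrite them.
    label_img = np.full((height, width), -1, dtype=np.int32)
    depth_img = np.full((height, width), np.inf, dtype=np.float32)
    order = np.argsort(-z)
    label_img[v[order], u[order]] = lbl[order]
    depth_img[v[order], u[order]] = z[order]
    return label_img, depth_img
```

Under this reading, the simple agent only receives the rendered label/depth images for its own pose, so it needs neither a semantic segmentation network nor the full 3D model on board, which matches the motivation stated in the abstract.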
