Article

Learning Affordance Segmentation for Real-World Robotic Manipulation via Synthetic Images

Journal

IEEE Robotics and Automation Letters
Volume 4, Issue 2, Pages 1140-1147

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2019.2894439

Keywords

Perception for Grasping and Manipulation; Deep Learning in Robotics and Automation; RGB-D Perception

Funding

  1. National Science Foundation, Directorate for Engineering, Division of Chemical, Bioengineering, Environmental, and Transport Systems (Award 1605228)

Abstract

This letter presents a deep learning framework that predicts the affordances of object parts for robotic manipulation. The framework segments affordance maps by jointly detecting and localizing candidate regions within an image. Rather than requiring annotated real-world images, the framework learns from synthetic data and adapts to real-world data without supervision. The method learns domain-invariant region proposal networks and task-level domain adaptation components with regularization on the predicted domains. A synthetic version of the UMD dataset is collected for auto-generating annotated synthetic input data. Experimental results show that the proposed method outperforms an unsupervised baseline and achieves performance close to state-of-the-art supervised approaches. An ablation study establishes the performance gap between the proposed method and its supervised equivalent (30%). Real-world manipulation experiments demonstrate use of the affordance segmentations for task execution, achieving the same performance as supervised approaches.
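
The adaptation strategy the abstract describes, learning features that a domain discriminator cannot separate so that a model trained on synthetic images transfers to real ones, is commonly implemented with a gradient-reversal layer. The sketch below is illustrative only, not the authors' implementation: the PyTorch module names (GradientReversal, DomainClassifier), the layer sizes, and the single-logit discriminator are all assumptions.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the feature extractor; no grad for lam.
        return -ctx.lam * grad_output, None

class DomainClassifier(nn.Module):
    """Guesses whether features come from the synthetic or the real domain.

    Training this head through gradient reversal pushes the shared feature
    extractor toward domain-invariant representations (sizes are hypothetical).
    """
    def __init__(self, in_channels: int = 512, lam: float = 1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, 1),  # one logit: synthetic (0) vs. real (1)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        reversed_features = GradientReversal.apply(features, self.lam)
        return self.head(reversed_features)

if __name__ == "__main__":
    # Mixed batch: two synthetic and two real feature maps (shapes assumed).
    features = torch.randn(4, 512, 38, 50, requires_grad=True)
    domain_labels = torch.tensor([[0.0], [0.0], [1.0], [1.0]])
    clf = DomainClassifier()
    domain_loss = nn.functional.binary_cross_entropy_with_logits(
        clf(features), domain_labels
    )
    domain_loss.backward()  # upstream gradients arrive sign-flipped
```

In a full pipeline this domain loss would be added to the affordance segmentation loss computed on the synthetic (labeled) images, while real images contribute only the domain term; the abstract's "regularization on the predicted domains" suggests an additional penalty on the discriminator outputs, which is omitted from this sketch.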

Authors

Fu-Jen Chu, Ruinian Xu, and Patricio A. Vela
