Article

Where Should I Walk? Predicting Terrain Properties From Images Via Self-Supervised Learning

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 4, Issue 2, Pages 1509-1516

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2019.2895390

Keywords

Semantic scene understanding; visual-based navigation; visual learning

Funding

  1. Intel Network on Intelligent Systems

Abstract

Legged robots have the potential to traverse diverse and rugged terrain. To find a safe and efficient navigation path and to carefully select individual footholds, it is useful to be able to predict properties of the terrain ahead of the robot. In this letter, we propose a method to collect data from robot-terrain interaction and associate it with images. Using sparse data acquired in teleoperation experiments with a quadrupedal robot, we train a neural network to generate a dense prediction of the terrain properties in front of the robot. To generate training data, we project the foothold positions from the robot trajectory into on-board camera images. We then attach labels to these footholds by identifying the dominant features of the force-torque signal measured with sensorized feet. We show that data collected in this fashion can be used to train a convolutional network for terrain property prediction as well as weakly supervised semantic segmentation. Finally, we show that the predicted terrain properties can be used for autonomous navigation of the ANYmal quadruped robot.
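The foothold-to-image association described in the abstract amounts to a standard pinhole projection of each 3-D foothold into the camera frame. The following is a minimal sketch of that step, not the authors' implementation; the intrinsic matrix and the world-to-camera transform used here are illustrative assumptions.

```python
import numpy as np

def project_footholds(footholds_world, T_cam_world, K):
    """Project 3-D foothold positions (world frame) into pixel coordinates.

    footholds_world: (N, 3) array of foothold positions.
    T_cam_world: (4, 4) homogeneous transform from world to camera frame.
    K: (3, 3) pinhole intrinsic matrix.
    Returns an (N, 2) array of pixel coordinates; footholds behind the
    camera are marked NaN.
    """
    n = footholds_world.shape[0]
    homogeneous = np.hstack([footholds_world, np.ones((n, 1))])  # (N, 4)
    p_cam = (T_cam_world @ homogeneous.T).T[:, :3]               # camera frame
    uv = np.full((n, 2), np.nan)
    in_front = p_cam[:, 2] > 0          # keep only points in front of the lens
    proj = (K @ p_cam[in_front].T).T
    uv[in_front] = proj[:, :2] / proj[:, 2:3]  # perspective divide
    return uv

# Illustrative intrinsics; the transform pushes points 1 m along the optical axis.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)
T[2, 3] = 1.0
print(project_footholds(np.array([[0.0, 0.0, 1.0]]), T, K))  # → [[320. 240.]]
```

In the paper's pipeline, each projected pixel would then be labeled with the terrain class derived from the force-torque signal of the corresponding footstep, yielding the sparse supervision used to train the dense prediction network.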

