Article

How to Train Your HERON

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 3, Pages 5247-5252

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2021.3065278

Keywords

Task analysis; Robot sensing systems; Adaptation models; Training; Meters; Vehicle dynamics; Lakes; Field robots; reinforcement learning; sensor-based control

Summary

This letter demonstrates that Deep RL combined with Domain Randomization can solve a navigation task in a natural environment, with the model adapting and performing well despite never being trained in the real world. The RL agent is also shown to be more robust, faster, and more accurate than competing methods.

Abstract

In this letter we apply Deep Reinforcement Learning (Deep RL) and Domain Randomization to solve a navigation task in a natural environment relying solely on a 2D laser scanner. We train a model-based RL agent in simulation to follow lake and river shores and apply it on a real Unmanned Surface Vehicle in a zero-shot setup. We demonstrate that even though the agent has not been trained in the real world, it can fulfill its task successfully and adapt to changes in the robot's environment and dynamics. Finally, we show that the RL agent is more robust, faster, and more accurate than a state-aware Model-Predictive-Controller. Code, simulation environments, pre-trained models, and datasets are available at https://github.com/AntoineRichard/Heron-RL-ICRA.git.
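
As a rough illustration of the Domain Randomization technique named in the abstract, the sketch below resamples simulated vehicle dynamics and 2D-lidar noise at the start of each training episode, so a learned policy cannot overfit a single set of dynamics. It is a minimal Python sketch, not the authors' implementation: the parameter names (mass, linear_drag, thrust_gain, lidar_noise_std) and all ranges are assumptions made for this example; the actual simulation environments and randomization ranges are in the linked repository.

    import random
    from dataclasses import dataclass

    @dataclass
    class BoatDynamics:
        """Hypothetical per-episode simulation parameters for a small USV."""
        mass: float             # kg
        linear_drag: float      # N*s/m
        thrust_gain: float      # dimensionless actuator scaling
        lidar_noise_std: float  # m, Gaussian noise added to 2D scan ranges

    def randomize_dynamics(rng: random.Random) -> BoatDynamics:
        """Sample a fresh set of dynamics at the start of a training episode.

        All ranges below are illustrative assumptions, not values from the paper.
        """
        return BoatDynamics(
            mass=rng.uniform(20.0, 40.0),
            linear_drag=rng.uniform(2.0, 8.0),
            thrust_gain=rng.uniform(0.8, 1.2),
            lidar_noise_std=rng.uniform(0.0, 0.05),
        )

    if __name__ == "__main__":
        rng = random.Random(0)
        for episode in range(3):
            print(f"episode {episode}: {randomize_dynamics(rng)}")

Resampling at every episode boundary is the standard way such randomization is applied: the policy only ever observes the lidar scan, so it must learn behavior that works across the whole sampled family of dynamics, which is what enables the zero-shot transfer claimed above.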
