Article

Goal-Driven Autonomous Exploration Through Deep Reinforcement Learning

Journal

IEEE Robotics and Automation Letters
Volume 7, Issue 2, Pages 730-737

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2021.3133591

Keywords

AI-enabled robotics; reinforcement learning; sensor-based control

Funding

  1. Ministry of Trade, Industry and Energy (MOTIE) / Korea Evaluation Institute of Industrial Technology (KEIT) [10080638]
  2. National Research Foundation of Korea, Ministry of Science and ICT [2020M3H8A1114945]

Abstract

This letter presents an autonomous navigation system for exploring unknown environments using deep reinforcement learning (DRL). The system selects optimal waypoints and learns a motion policy for local navigation to guide the robot towards a global goal. Experimental results show that the proposed method outperforms similar exploration methods in complex static and dynamic environments.
In this letter, we present an autonomous navigation system for goal-driven exploration of unknown environments through deep reinforcement learning (DRL). Points of interest (POI) for possible navigation directions are obtained from the environment, and an optimal waypoint is selected based on the available data. By following the waypoints, the robot is guided towards the global goal and the local-optimum problem of reactive navigation is mitigated. A motion policy for local navigation is then learned through a DRL framework in simulation. We develop a navigation system in which this learned policy is integrated into a motion planning stack as the local navigation layer, moving the robot between waypoints towards the global goal. Fully autonomous navigation is performed without any prior knowledge of the environment, while a map is recorded as the robot moves. Experiments show that the proposed method has an advantage over similar exploration methods in complex static as well as dynamic environments, without relying on a map or prior information.
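The abstract describes a two-level architecture: candidate points of interest are extracted, one is chosen as the next waypoint, and a DRL-trained motion policy drives the robot between waypoints towards the global goal while a map is recorded. The Python sketch below illustrates such a loop under stated assumptions: the distance-based waypoint scoring heuristic, the `env` interface, and the `policy` callable are hypothetical placeholders for illustration only, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of the waypoint-based exploration
# loop described in the abstract. Names and the scoring rule are assumptions.
import numpy as np

def select_waypoint(pois, robot_pos, global_goal, k_robot=1.0, k_goal=1.0):
    """Pick the point of interest (POI) to use as the next waypoint.

    Assumed heuristic: trade off the distance from the robot to the POI
    against the remaining distance from the POI to the global goal, so the
    robot keeps exploring while being steered towards the goal.
    """
    pois = np.asarray(pois, dtype=float)
    d_robot = np.linalg.norm(pois - robot_pos, axis=1)   # robot -> POI
    d_goal = np.linalg.norm(pois - global_goal, axis=1)  # POI -> global goal
    scores = k_robot * d_robot + k_goal * d_goal
    return pois[np.argmin(scores)]

def navigate(env, policy, global_goal, waypoint_tol=0.5, goal_tol=0.5):
    """Waypoint-following loop; the learned DRL policy handles local motion.

    `env` is a placeholder robot/simulation interface and `policy` a trained
    policy mapping observations to (linear, angular) velocity commands.
    """
    pos = env.robot_position()
    while np.linalg.norm(global_goal - pos) > goal_tol:
        waypoint = select_waypoint(env.points_of_interest(), pos, global_goal)
        while np.linalg.norm(waypoint - pos) > waypoint_tol:
            obs = env.observation(waypoint)      # e.g. laser scan + relative waypoint
            linear_v, angular_v = policy(obs)
            env.step(linear_v, angular_v)        # map is updated as the robot moves
            pos = env.robot_position()

# Example of the selection step on dummy data:
if __name__ == "__main__":
    pois = [(2.0, 1.0), (0.5, -1.0), (3.0, 4.0)]
    print(select_waypoint(pois, robot_pos=np.array([0.0, 0.0]),
                          global_goal=np.array([5.0, 5.0])))
```

The weighting between the two distance terms controls how strongly waypoint selection is biased towards the global goal; the actual criterion used in the paper may incorporate additional information and is not specified in this abstract.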

