Article

Goal-Driven Autonomous Exploration Through Deep Reinforcement Learning

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 7, Issue 2, Pages 730-737

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LRA.2021.3133591

Keywords

AI-enabled robotics; reinforcement learning; sensor-based control

Funding

  1. Ministry of Trade, Industry and Energy (MOTIE) [10080638]
  2. National Research Foundation, Ministry of Science and ICT, South Korea [2020M3H8A1114945]
  3. Korea Evaluation Institute of Industrial Technology (KEIT) [10080638] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)
  4. National Research Foundation of Korea [2020M3H8A1114945] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

In this letter, we present an autonomous navigation system for goal-driven exploration of unknown environments through deep reinforcement learning (DRL). Points of interest (POI) for possible navigation directions are obtained from the environment, and an optimal waypoint is selected based on the available data. Following the waypoints, the robot is guided towards the global goal, and the local-optimum problem of reactive navigation is mitigated. Then, a motion policy for local navigation is learned through a DRL framework in simulation. We develop a navigation system where this learned policy is integrated into a motion planning stack as the local navigation layer to move the robot between waypoints towards a global goal. Fully autonomous navigation is performed without any prior knowledge, while a map is recorded as the robot moves through the environment. Experiments show that the proposed method has an advantage over similar exploration methods in complex static as well as dynamic environments, without reliance on a map or prior information.
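The waypoint-selection step the abstract describes can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual scoring function: the names (`select_waypoint`, the weights `w_dist` and `w_goal`) and the scoring rule (travel cost to the POI plus remaining distance to the global goal) are hypothetical stand-ins for whatever criterion the authors use.

```python
import math

def select_waypoint(pois, robot_pos, global_goal, w_dist=1.0, w_goal=1.0):
    """Pick the candidate POI with the lowest combined cost.

    Hypothetical cost: weighted travel distance to the POI plus the
    remaining distance from that POI to the global goal. The paper's
    actual selection criterion may add terms (e.g. information gain).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_score = None, float("inf")
    for poi in pois:
        score = w_dist * dist(robot_pos, poi) + w_goal * dist(poi, global_goal)
        if score < best_score:
            best, best_score = poi, score
    return best

# Example: three frontier POIs; the one roughly on the way to the
# global goal at (5, 5) scores lowest and becomes the next waypoint.
pois = [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
waypoint = select_waypoint(pois, robot_pos=(0.0, 0.0), global_goal=(5.0, 5.0))
print(waypoint)  # → (2.0, 2.0)
```

In the full system, a local DRL policy would then drive the robot toward the returned waypoint, and the selection repeats as new POIs are observed.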

