4.2 Article

Reinforcement learning-based dynamic obstacle avoidance and integration of path planning

Journal

INTELLIGENT SERVICE ROBOTICS
Volume 14, Issue 5, Pages 663-677

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s11370-021-00387-2

Keywords

Mobile robot; Navigation; Collision avoidance; Reinforcement learning; Deep learning

Funding

  1. Korea Institute for Advancement of Technology (KIAT) - Korea Government (MOTIE) [P0008473]

Abstract

Deep reinforcement learning has the advantage of being able to encode fairly complex behaviors by collecting and learning from empirical information. In the current study, we propose a framework for reinforcement learning-based decentralized collision avoidance in which each agent makes its decisions independently, without communicating with the others. In environments containing various kinds of dynamic obstacles with irregular movements, the mobile robot agents learn how to avoid obstacles and reach a target point efficiently. Moreover, a path planner is integrated with the reinforcement learning-based obstacle avoidance to resolve situations in which no path can be found, thereby improving path efficiency. The robots were trained on the obstacle-avoidance policy with the soft actor-critic algorithm in environments that account for dynamic characteristics. The trained policy was implemented in the Robot Operating System (ROS) and tested in virtual and real environments on a differential-drive wheeled robot to demonstrate the effectiveness of the proposed method. Videos are available at .
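
As a rough illustration of how a policy trained this way might be deployed, the sketch below shows a minimal ROS node that feeds laser-scan and subgoal observations to a trained soft actor-critic actor network and publishes velocity commands for a differential-drive robot, with the subgoal taken as an intermediate waypoint from the global path planner. This is not the authors' implementation: the policy file, topic names, observation layout, control rate, and velocity limits are assumptions made only for illustration.

# Minimal sketch (not the paper's code): running a trained SAC obstacle-avoidance
# policy as a ROS node on a differential-drive robot. The policy file, topics,
# and observation layout below are assumptions.
import numpy as np
import rospy
import torch
from geometry_msgs.msg import PoseStamped, Twist
from sensor_msgs.msg import LaserScan


class RLAvoidanceNode:
    def __init__(self, policy_path="sac_actor.pt"):
        # Trained SAC actor exported as TorchScript (assumed export format).
        self.actor = torch.jit.load(policy_path)
        self.actor.eval()
        self.scan = None      # downsampled laser ranges (local obstacle information)
        self.subgoal = None   # next waypoint from the global path planner, robot frame
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, self.on_scan, queue_size=1)
        rospy.Subscriber("/subgoal", PoseStamped, self.on_subgoal, queue_size=1)

    def on_scan(self, msg):
        # Replace inf readings and downsample to a fixed-length vector of 36 ranges.
        ranges = np.nan_to_num(np.asarray(msg.ranges, dtype=np.float32),
                               posinf=msg.range_max)
        step = max(1, len(ranges) // 36)
        self.scan = ranges[::step][:36]

    def on_subgoal(self, msg):
        # Relative subgoal (x, y), assumed to be expressed in the robot frame.
        self.subgoal = np.array([msg.pose.position.x, msg.pose.position.y],
                                dtype=np.float32)

    def step(self):
        if self.scan is None or self.subgoal is None:
            return
        obs = np.concatenate([self.scan, self.subgoal])
        with torch.no_grad():
            # The actor maps the observation to a continuous action:
            # (linear velocity, angular velocity).
            action = self.actor(torch.from_numpy(obs).unsqueeze(0)).squeeze(0).numpy()
        cmd = Twist()
        cmd.linear.x = float(np.clip(action[0], 0.0, 0.5))    # assumed velocity limits
        cmd.angular.z = float(np.clip(action[1], -1.0, 1.0))
        self.cmd_pub.publish(cmd)


if __name__ == "__main__":
    rospy.init_node("rl_obstacle_avoidance")
    node = RLAvoidanceNode()
    rate = rospy.Rate(10)  # 10 Hz control loop (assumed)
    while not rospy.is_shutdown():
        node.step()
        rate.sleep()

In this arrangement the global planner supplies intermediate waypoints toward the target, while the learned policy handles local, reactive avoidance of the irregularly moving obstacles, mirroring the integration described in the abstract.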
