Article

Neural networks based reinforcement learning for mobile robots obstacle avoidance

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 62, Pages 104-115

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2016.06.021

Keywords

Obstacle avoidance; Neural networks; Q-learning; Virtual reality

Funding

  1. Ministry of Labor, Family and Social Protection, Romania - European Social Fund - Investing in People, within the Sectoral Operational Programme Human Resources Development [POS-DRU/159/1.5/S/137070]


This study proposes a new approach to the autonomous movement of robots in environments that contain both static and dynamic obstacles. The purpose of this research is to provide mobile robots with a collision-free trajectory within an uncertain workspace that contains both stationary and moving entities. The developed solution uses Q-learning and a neural network planner to solve path planning problems. The presented algorithm proves effective in navigation scenarios where global information is available. The speed of the robot can be set prior to the computation of the trajectory, which is a great advantage in time-constrained applications. The solution is deployed both in Virtual Reality (VR), for easier visualization and safer testing, and on a real mobile robot for experimental validation. The algorithm is compared with Powerbot's proprietary ARNL navigation algorithm. Results show that the proposed solution has a good convergence rate and computes trajectories at a satisfying speed. (C) 2016 Elsevier Ltd. All rights reserved.
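The abstract describes combining Q-learning with a neural network planner, but gives no implementation details. The sketch below illustrates the general technique only: Q-learning with a small feed-forward Q-value approximator and an epsilon-greedy policy, written in Python with NumPy. The state encoding, network size, action set, reward values and hyperparameters are illustrative assumptions, not the authors' design.

```python
# Minimal sketch of Q-learning with a neural-network Q-function approximator,
# in the spirit of the approach described in the abstract. All sizes, rewards
# and hyperparameters below are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

N_INPUTS = 8      # assumed: discretized range-sensor readings + goal bearing
N_HIDDEN = 16
N_ACTIONS = 3     # assumed action set: turn left, go straight, turn right

# One hidden layer with tanh activation; linear output gives Q(s, a) per action.
W1 = rng.normal(0, 0.1, (N_HIDDEN, N_INPUTS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, N_HIDDEN))
b2 = np.zeros(N_ACTIONS)

GAMMA = 0.9       # discount factor
ALPHA = 0.01      # learning rate
EPSILON = 0.1     # exploration rate

def q_values(state):
    """Forward pass: state vector -> vector of Q-values, one per action."""
    h = np.tanh(W1 @ state + b1)
    return W2 @ h + b2, h

def choose_action(state):
    """Epsilon-greedy action selection."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    q, _ = q_values(state)
    return int(np.argmax(q))

def update(state, action, reward, next_state, done):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    global W1, b1, W2, b2
    q, h = q_values(state)
    q_next, _ = q_values(next_state)
    target = reward if done else reward + GAMMA * np.max(q_next)
    td_error = target - q[action]

    # Backpropagate the TD error through the two-layer network.
    grad_out = np.zeros(N_ACTIONS)
    grad_out[action] = td_error
    grad_h = (W2.T @ grad_out) * (1.0 - h ** 2)   # derivative of tanh

    W2 += ALPHA * np.outer(grad_out, h)
    b2 += ALPHA * grad_out
    W1 += ALPHA * np.outer(grad_h, state)
    b1 += ALPHA * grad_h

# Illustrative reward shaping: penalize collisions, reward reaching the goal,
# and apply a small step penalty to favor short, collision-free trajectories.
def reward_fn(collided, reached_goal):
    if collided:
        return -1.0
    if reached_goal:
        return 1.0
    return -0.01
```

The paper evaluates the learned behavior both in a VR simulation and on the Powerbot platform; the sketch above omits the environment, sensor and robot models entirely and only shows the learning update itself.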
