Article

RL and ANN Based Modular Path Planning Controller for Resource-Constrained Robots in the Indoor Complex Dynamic Environment

Journal

IEEE ACCESS
Volume 6, Pages 74557-74568

Publisher

IEEE (Institute of Electrical and Electronics Engineers Inc.)
DOI: 10.1109/ACCESS.2018.2882875

Keywords

RL; ANN; complex dynamic indoor environment; modular path planning; resource-constrained robots

Funding

  1. CAS-TWAS President's Ph.D. Fellowship Programme, University of Chinese Academy of Sciences
  2. Innovation Project of Institute of Computing Technology, Chinese Academy of Sciences


Traditional Reinforcement Learning (RL) approaches are designed to work well in static environments. In many real-world scenarios, however, the environment is complex and dynamic, and the performance of traditional RL approaches may degrade drastically. One factor contributing to the dynamicity and complexity of an environment is a change in the position and number of obstacles. This paper presents a path planning approach for autonomous mobile robots in a complex dynamic indoor environment, in which the dynamic pattern of obstacles does not drastically affect the performance of the RL models. Two independent modules, collision avoidance without considering the goal position and goal-seeking without considering obstacle avoidance, are trained independently using artificial neural networks and RL to obtain their best control policies. A switching function then combines the two trained modules to realize obstacle avoidance and global path planning in a complex dynamic indoor environment. Furthermore, the control system is designed with a special focus on the computational and memory requirements of resource-constrained robots. The design was tested in a real-world environment on a mini-robot with constrained resources. Along with avoiding both static and dynamic obstacles, the system is able to reach both static and dynamic targets. The control system can also be used to train a robot in the real world using RL when the robot cannot afford to collide. Robot behavior on real ground shows a very strong correlation with the simulation results.
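The modular architecture described in the abstract — two independently trained policies combined by a switching function — can be illustrated with a minimal sketch. Note that the policy functions, the distance-based switching threshold (`safe_dist`), and all names below are hypothetical stand-ins for the paper's trained ANN modules, not the authors' actual implementation:

```python
# Illustrative sketch of a modular path planning controller:
# two separate policies plus a switching function that selects
# between them based on obstacle proximity. The geometric
# "policies" here are placeholders for the trained ANN/RL modules.

def goal_seeking_policy(robot_pos, goal_pos):
    """Steer toward the goal, ignoring obstacles (stand-in for module 1)."""
    dx = goal_pos[0] - robot_pos[0]
    dy = goal_pos[1] - robot_pos[1]
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    return (dx / norm, dy / norm)  # unit heading toward the goal

def collision_avoidance_policy(robot_pos, nearest_obstacle):
    """Steer away from the nearest obstacle, ignoring the goal (stand-in for module 2)."""
    dx = robot_pos[0] - nearest_obstacle[0]
    dy = robot_pos[1] - nearest_obstacle[1]
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    return (dx / norm, dy / norm)  # unit heading away from the obstacle

def switched_action(robot_pos, goal_pos, obstacles, safe_dist=0.5):
    """Switching function: hand control to the collision-avoidance module
    when an obstacle enters the safety radius, otherwise to goal-seeking."""
    if obstacles:
        nearest = min(
            obstacles,
            key=lambda o: (o[0] - robot_pos[0]) ** 2 + (o[1] - robot_pos[1]) ** 2,
        )
        dist = ((nearest[0] - robot_pos[0]) ** 2
                + (nearest[1] - robot_pos[1]) ** 2) ** 0.5
        if dist < safe_dist:
            return collision_avoidance_policy(robot_pos, nearest)
    return goal_seeking_policy(robot_pos, goal_pos)
```

Because each module is trained in isolation, the goal-seeking policy never needs retraining when the obstacle layout changes, which is what makes the scheme robust to dynamic environments and cheap enough for resource-constrained robots.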
