Article

Mobile Robot Navigation Using Deep Reinforcement Learning

Journal

PROCESSES
Volume 10, Issue 12, Article 2748

Publisher

MDPI
DOI: 10.3390/pr10122748

Keywords

autonomous navigation; collision avoidance; reinforcement learning; mobile robots

Funding

  1. Ministry of Science and Technology (MOST) in Taiwan [108-2221-E-011-142]
  2. Center for Cyber-physical System Innovation from the Featured Areas Research Center Program

Abstract

Learning to navigate autonomously in an unknown indoor environment without colliding with static and dynamic obstacles is important for mobile robots. Conventional mobile robot navigation systems lack the ability to learn autonomously. Unlike conventional approaches, this paper proposes an end-to-end approach that uses deep reinforcement learning for autonomous mobile robot navigation in an unknown environment. Two types of deep Q-learning agents, namely deep Q-network (DQN) and double deep Q-network (DDQN) agents, are proposed to enable the mobile robot to learn collision avoidance and navigation capabilities autonomously in an unknown environment. The target object is first detected using a deep neural network model, and the robot then navigates to it using the DQN or DDQN algorithm. Simulation results show that the mobile robot can autonomously navigate, recognize, and reach the target object location in an unknown environment without colliding with static or dynamic obstacles. Similar results are obtained in real-world experiments, but only with static obstacles. In the test simulation, the DDQN agent outperforms the DQN agent in reaching the target object location by 5.06%.
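
The practical difference between the two agents is how each computes the bootstrapped temporal-difference target. Below is a minimal PyTorch sketch of that distinction, not the paper's actual implementation: the state dimension, action count, network architecture, and discount factor are all illustrative assumptions. Plain DQN lets the target network both select and evaluate the next action, while DDQN selects the action with the online network and evaluates it with the target network, which is known to reduce Q-value overestimation.

import torch
import torch.nn as nn

# Illustration of the DQN vs. DDQN target computation (assumed values;
# the paper's actual architecture and hyperparameters may differ).
STATE_DIM = 24   # e.g., range-sensor readings plus target bearing (assumed)
N_ACTIONS = 5    # discrete motion commands (assumed)
GAMMA = 0.99     # discount factor (assumed)

def make_qnet():
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

online_net, target_net = make_qnet(), make_qnet()
target_net.load_state_dict(online_net.state_dict())

def td_target(reward, next_state, done, double=True):
    """Bootstrapped target for a batch of transitions."""
    with torch.no_grad():
        if double:
            # DDQN: the online network selects the next action and the
            # target network evaluates it, reducing overestimation bias.
            best = online_net(next_state).argmax(dim=1, keepdim=True)
            next_q = target_net(next_state).gather(1, best).squeeze(1)
        else:
            # DQN: the target network both selects and evaluates.
            next_q = target_net(next_state).max(dim=1).values
    return reward + GAMMA * next_q * (1.0 - done)

# Dummy batch of 32 transitions to show the call shape.
next_s = torch.randn(32, STATE_DIM)
r, d = torch.randn(32), torch.zeros(32)
y = td_target(r, next_s, d, double=True)  # tensor of shape (32,)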
