Article

Dual-Arm Robot Trajectory Planning Based on Deep Reinforcement Learning under Complex Environment

Journal

MICROMACHINES
Volume 13, Issue 4

Publisher

MDPI
DOI: 10.3390/mi13040564

Keywords

dual-arm robot; deep reinforcement learning; trajectory planning; complex environment; reward

Funding

  1. National Natural Science Foundation of China [51741502, 11372073]
  2. Science and Technology Project of the Education Department of Jiangxi Province [GJJ200864]
  3. Jiangxi University of Science and Technology PhD Research Initiation Fund [205200100514]


This article studies the trajectory planning of a dual-arm robot that must approach a patient in a complex environment using deep reinforcement learning. A neural network is trained with a proximal policy optimization algorithm and a continuous reward function. The research uses a 3D simulation environment and a new reward and punishment function inspired by the artificial potential field concept. The results show that the proposed approach converges in fewer training steps and achieves higher rewards than the comparison algorithms.
In this article, the trajectory planning of the two manipulators of a dual-arm robot is studied with deep reinforcement learning algorithms so that the robot can approach the patient in a complex environment. The shapes of the human body and the bed are complex, which may lead to collisions between the human and the robot. Because the sparse reward the robot obtains from the environment may not be sufficient for the robot to accomplish the task, a neural network is trained with a proximal policy optimization (PPO) algorithm and a continuous reward function to control the manipulators so that the robot is prepared to hold the patient up. Firstly, considering the realistic scene, a 3D simulation environment is built to conduct the research. Secondly, inspired by the idea of the artificial potential field, a new reward and punishment function is proposed to help the robot obtain enough rewards to explore the environment. The function consists of four parts: a reward guidance function, collision detection, an obstacle avoidance function, and a time function. The reward guidance function guides the robot to approach the targets for holding the patient; the collision detection and the obstacle avoidance function complement each other and are used to avoid obstacles; and the time function reduces the length of the training episodes. Finally, after the robot is trained to reach the targets, the training results are analyzed. Compared with the DDPG algorithm, the PPO algorithm needs about 4 million fewer training steps to converge. Moreover, compared with other reward and punishment functions, the function used in this paper obtains many more rewards in the same training time, converges in much less time, and yields shorter episodes; thus, the advantage of the algorithm used in this paper is verified.
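
For illustration, the four-part reward described in the abstract can be sketched as follows. This is a minimal, hypothetical Python example, not the authors' implementation: the function name, weights, safety margin, and the assumption of a single end-effector position and a single nearest obstacle are all illustrative.

```python
import numpy as np

def composite_reward(ee_pos, target_pos, obstacle_pos, collided,
                     w_guide=1.0, w_avoid=0.5, d_safe=0.15, time_penalty=0.01):
    """Return (reward, done) for one control step; positions are 3D points."""
    ee_pos, target_pos, obstacle_pos = map(np.asarray, (ee_pos, target_pos, obstacle_pos))

    # Collision detection: a large penalty ends the episode on contact
    # with the patient, the bed, or the other arm.
    if collided:
        return -10.0, True

    # Reward guidance (attractive potential): the reward becomes denser as the
    # end-effector approaches its grasp target, replacing the sparse goal signal.
    r_guide = -w_guide * np.linalg.norm(ee_pos - target_pos)

    # Obstacle avoidance (repulsive potential): the penalty grows as the arm
    # enters the safety margin around the nearest obstacle.
    d_obs = np.linalg.norm(ee_pos - obstacle_pos)
    r_avoid = -w_avoid * max(0.0, d_safe - d_obs) / d_safe

    # Time function: a small constant per-step penalty encourages shorter episodes.
    r_time = -time_penalty

    return r_guide + r_avoid + r_time, False
```

Under this sketch, the per-step reward returned to the PPO learner is the sum of the guidance, avoidance, and time terms, with collisions handled as terminal events.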
