Article

Deep hierarchical reinforcement learning based formation planning for multiple unmanned surface vehicles with experimental results

Journal

OCEAN ENGINEERING
Volume 286 (2023)

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.oceaneng.2023.115577

Keywords

Deep reinforcement learning; Hierarchical reinforcement learning; Artificial potential field; Formation control; Unmanned surface vehicles

This paper proposes a novel multi-USV formation path planning algorithm based on deep reinforcement learning. The algorithm combines goal-based hierarchical reinforcement learning with an improved artificial potential field method to speed up training and resolve planning conflicts within the formation, and it uses a formation geometry model together with a composite reward function to achieve optimal path planning and obstacle avoidance.
In this paper, a novel multi-USV formation path planning algorithm is proposed based on deep reinforcement learning. First, a goal-based hierarchical reinforcement learning algorithm is designed to improve training speed and resolve planning conflicts within the formation. Second, an improved artificial potential field algorithm is incorporated into the training process to obtain the optimal path planning and obstacle avoidance learning scheme for multiple USVs in a given perceptual environment. Finally, a formation geometry model is established to describe the physical relationships among the USVs, and a composite reward function is proposed to guide the training. Numerous simulation tests are conducted, and the effectiveness of the proposed algorithm is further validated on the NEU-MSV01 experimental platform in combination with parameterized line-of-sight (LOS) guidance.
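
The abstract only describes the reward construction qualitatively. As a rough illustration of how an artificial potential field is commonly folded into a reinforcement-learning reward, the following minimal Python/NumPy sketch rewards descent of the potential (progress toward the goal while staying clear of obstacles) and penalizes deviation from the desired formation geometry. This is a hypothetical sketch, not the authors' implementation; the gains k_att and k_rep, the influence radius rho_0, and the weights w_field and w_form are assumed placeholder parameters.

    import numpy as np

    def potential_field(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho_0=5.0):
        # Classic APF: quadratic attraction toward the goal plus repulsion
        # from every obstacle inside the influence radius rho_0.
        u_att = 0.5 * k_att * np.linalg.norm(goal - pos) ** 2
        u_rep = 0.0
        for obs in obstacles:
            rho = np.linalg.norm(obs - pos)
            if 0.0 < rho < rho_0:
                u_rep += 0.5 * k_rep * (1.0 / rho - 1.0 / rho_0) ** 2
        return u_att + u_rep

    def composite_reward(prev_pos, pos, goal, obstacles, formation_error,
                         w_field=1.0, w_form=0.5):
        # Illustrative composite reward: reward the decrease in potential
        # achieved by the last step and penalize formation-geometry error.
        progress = (potential_field(prev_pos, goal, obstacles)
                    - potential_field(pos, goal, obstacles))
        return w_field * progress - w_form * formation_error

    # Example: one follower USV taking a single step toward the goal.
    prev_pos  = np.array([0.0, 0.0])
    pos       = np.array([1.0, 0.5])
    goal      = np.array([10.0, 10.0])
    obstacles = [np.array([5.0, 5.0])]
    print(composite_reward(prev_pos, pos, goal, obstacles, formation_error=0.2))

In a goal-based hierarchical scheme such as the one described in the abstract, a per-step reward of this general shape would guide the lower-level policy, while the upper level selects sub-goals for each USV; the exact terms and weights used by the authors are not given here.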
