Article

EMExplorer: an episodic memory enhanced autonomous exploration strategy with Voronoi domain conversion and invalid action masking

Journal

Complex & Intelligent Systems
Volume -, Issue -, Pages -

Publisher

Springer Heidelberg
DOI: 10.1007/s40747-023-01144-x

Keywords

Autonomous exploration; Episodic memory; Deep reinforcement learning; Generalized Voronoi diagram; Invalid action masking


This paper proposes a novel Deep Reinforcement Learning (DRL) based autonomous exploration strategy that efficiently reduces the unknown area of the workspace and provides accurate 2D map construction. The strategy uses the Generalized Voronoi Diagram (GVD) for domain conversion and a Generalized Voronoi Network (GVN) with spatial awareness and episodic memory to learn the exploration policy. Invalid Action Masking (IAM) copes with the growth of the action and observation spaces as the exploration range expands, and a well-designed reward function guides policy learning. Extensive tests and experiments show the superiority of the strategy in terms of map quality and exploration speed.
Autonomous exploration is a critical technology for realizing robotic intelligence, as it allows unsupervised preparation for future tasks and facilitates flexible deployment. In this paper, a novel Deep Reinforcement Learning (DRL) based autonomous exploration strategy is proposed to efficiently reduce the unknown area of the workspace and provide accurate 2D map construction for mobile robots. Unlike existing human-designed exploration techniques, which usually make strong assumptions about the scenarios and tasks, we use a model-free method to directly learn an exploration strategy through trial-and-error interactions with complex environments. Specifically, the Generalized Voronoi Diagram (GVD) is first utilized for domain conversion to obtain a high-dimensional Topological Environmental Representation (TER). Then, a Generalized Voronoi Network (GVN) with spatial awareness and episodic memory is designed to learn autonomous exploration policies interactively online. For complete and efficient exploration, Invalid Action Masking (IAM) is employed to reshape the configuration space of the exploration task and to cope with the explosion of the action and observation spaces caused by the expansion of the exploration range. Furthermore, a well-designed reward function is leveraged to guide policy learning. Extensive baseline tests and comparative simulations show that our strategy outperforms state-of-the-art strategies in terms of map quality and exploration speed. Sufficient ablation studies and mobile robot experiments further demonstrate the effectiveness and superiority of our strategy.
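To make the GVD-based domain conversion more concrete, the following sketch approximates a discrete Generalized Voronoi Diagram by skeletonizing the free space of a 2D occupancy grid and then extracting the skeleton's junctions and end points as candidate nodes of a topological representation. This is a minimal illustration built on scikit-image; the function names, the free-space threshold, and the node-extraction rule are assumptions made here for exposition, not details taken from the paper.

import numpy as np
from skimage.morphology import skeletonize

def gvd_skeleton(occupancy: np.ndarray, free_thresh: float = 0.5) -> np.ndarray:
    """Approximate the Generalized Voronoi Diagram of a 2D occupancy grid.

    occupancy: 2D array with values in [0, 1]; cells below `free_thresh`
               are treated as free space.
    Returns a boolean grid whose True cells lie on the free-space skeleton,
    i.e. cells roughly equidistant from the surrounding obstacles.
    """
    free = occupancy < free_thresh
    return skeletonize(free)

def gvd_nodes(skeleton: np.ndarray) -> list[tuple[int, int]]:
    """Extract candidate graph nodes: skeleton cells whose number of
    8-connected skeleton neighbours differs from 2 (end points and
    junctions of the Voronoi skeleton)."""
    nodes = []
    rows, cols = skeleton.shape
    for r in range(rows):
        for c in range(cols):
            if not skeleton[r, c]:
                continue
            r0, r1 = max(r - 1, 0), min(r + 2, rows)
            c0, c1 = max(c - 1, 0), min(c + 2, cols)
            # Count skeleton cells in the 3x3 window, excluding the cell itself.
            degree = int(skeleton[r0:r1, c0:c1].sum()) - 1
            if degree != 2:
                nodes.append((r, c))
    return nodes

Under these assumptions, the extracted node set could serve as the discrete action space over which a policy selects the next region to visit, which is where action masking becomes relevant.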
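Invalid Action Masking itself is a standard DRL technique: actions that are currently not executable (for example, candidate nodes that are unreachable or already explored) are assigned effectively zero probability before sampling, so the policy wastes neither samples nor gradient signal on them. A minimal PyTorch sketch is shown below; the tensor shapes, the action encoding over candidate nodes, and the variable names are illustrative assumptions, not the authors' implementation.

import torch

def masked_action_distribution(logits: torch.Tensor,
                               valid_mask: torch.Tensor) -> torch.distributions.Categorical:
    """Apply invalid action masking to policy logits.

    logits:     (batch, num_actions) raw policy outputs, one entry per
                candidate node the agent could move to.
    valid_mask: (batch, num_actions) boolean tensor, True where the action
                is currently executable (reachable and not yet explored).
    """
    # Invalid actions get a very large negative logit, so softmax assigns
    # them (numerically) zero probability and they are never sampled.
    masked_logits = logits.masked_fill(~valid_mask, torch.finfo(logits.dtype).min)
    return torch.distributions.Categorical(logits=masked_logits)

# Toy usage: one state, five candidate nodes, nodes 0 and 3 already visited.
logits = torch.randn(1, 5)
valid = torch.tensor([[False, True, True, False, True]])
dist = masked_action_distribution(logits, valid)
action = dist.sample()            # only indices 1, 2, or 4 can be drawn
log_prob = dist.log_prob(action)  # used as usual in the policy-gradient loss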

