Article

Reinforcement Learning-Based Complete Area Coverage Path Planning for a Modified hTrihex Robot

Journal

SENSORS
Volume 21, Issue 4, Pages -

Publisher

MDPI
DOI: 10.3390/s21041067

Keywords

reconfigurable robot; tiling robots; reinforcement learning; complete coverage planning; energy path planning

Funding

  1. National Robotics Programme under its Robotics Enabling Capabilities and Technologies [1922500051]
  2. National Robotics Programme under Robot Domain Specific [1922200058]

Abstract

One of the essential attributes of a cleaning robot is achieving complete area coverage. Current commercial indoor cleaning robots have a fixed morphology and are restricted to cleaning only specific areas of a house, so their area coverage is sub-optimal. Tiling robots are an innovative solution to this coverage problem. These robots can be deployed for cleaning, painting, maintenance, and inspection tasks that require complete area coverage, and they cover the entire area by reconfiguring into different shapes as the area demands. In this context, it is vital to have a framework that enables the robot to maximize area coverage while minimizing energy consumption, i.e., to cover the maximum area with the fewest shape reconfigurations possible. This paper proposes a complete area coverage planning module for the modified hTrihex, a honeycomb-shaped tiling robot, based on deep reinforcement learning. The framework simultaneously generates the tiling shapes and the trajectory with minimum overall cost. To this end, a convolutional neural network (CNN) with a long short-term memory (LSTM) layer was trained using the actor-critic with experience replay (ACER) reinforcement learning algorithm. The simulation results of the proposed implementation were compared against traditional tiling-theory models that use zigzag, spiral, and greedy search schemes, as well as against methods that treat the problem as a traveling salesman problem (TSP) solved through genetic algorithm (GA) and ant colony optimization (ACO) approaches. The proposed scheme generates a path with lower cost in less time.
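
For illustration, the sketch below shows one way such a CNN + LSTM actor-critic network could be laid out in PyTorch: a convolutional encoder over an obstacle/coverage grid map, an LSTM layer that carries the partial-coverage history, and separate policy and value heads over joint (tiling shape, heading) actions. This is a minimal sketch under assumed settings (three tiling shapes, six honeycomb headings, a 32 x 32 grid); the class name CoveragePolicy and all layer sizes are hypothetical and are not taken from the paper.

# Illustrative sketch only: a CNN + LSTM actor-critic network of the kind the
# abstract describes, written in PyTorch. Shapes, layer sizes, and the action
# encoding (tiling shape x heading) are assumptions, not the authors' code.
import torch
import torch.nn as nn

class CoveragePolicy(nn.Module):
    def __init__(self, n_shapes=3, n_headings=6, map_size=32):
        super().__init__()
        # CNN encoder over an occupancy/coverage grid map of the workspace.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * (map_size // 4) ** 2
        # LSTM layer keeps memory of the partial coverage trajectory.
        self.lstm = nn.LSTM(feat_dim, 256, batch_first=True)
        # Actor: logits over joint (tiling shape, heading) actions.
        self.policy_head = nn.Linear(256, n_shapes * n_headings)
        # Critic: state-value estimate used by an ACER-style update.
        self.value_head = nn.Linear(256, 1)

    def forward(self, grid_seq, hidden=None):
        # grid_seq: (batch, time, 2, H, W) -- obstacle and coverage channels.
        b, t = grid_seq.shape[:2]
        feats = self.encoder(grid_seq.flatten(0, 1)).view(b, t, -1)
        out, hidden = self.lstm(feats, hidden)
        logits = self.policy_head(out)            # (b, t, n_shapes * n_headings)
        value = self.value_head(out).squeeze(-1)  # (b, t)
        return logits, value, hidden

if __name__ == "__main__":
    net = CoveragePolicy()
    dummy = torch.zeros(1, 4, 2, 32, 32)  # one episode fragment of 4 steps
    logits, value, _ = net(dummy)
    print(logits.shape, value.shape)  # torch.Size([1, 4, 18]) torch.Size([1, 4])

In an ACER-style training loop, the policy logits would parameterize the behavior and target policies whose importance-weighted gradients update the actor, while the value head supplies the critic baseline; the reward would penalize travel, rotations, and shape reconfigurations so that minimizing it corresponds to the overall cost described above.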

