Article

Hierarchical Episodic Control

Journal

APPLIED SCIENCES-BASEL
Volume 13, Issue 20

Publisher

MDPI
DOI: 10.3390/app132011544

Keywords

episodic memory; deep reinforcement learning; hierarchical reinforcement learning

Abstract

This paper proposes a hierarchical episodic control model to address the low training efficiency and high sample demand of deep reinforcement learning. By extending episodic memory to hierarchical reinforcement learning and employing a hierarchical implicit memory planning approach, the model markedly improves training efficiency, with notable gains across diverse environments, including settings with sparse rewards.
Deep reinforcement learning is one of the research hotspots in artificial intelligence and has been applied successfully in many areas; however, low training efficiency and a high demand for samples limit its wider application. Inspired by the rapid learning mechanism of the hippocampus, this paper proposes a hierarchical episodic control model that addresses these problems by extending episodic memory, specifically the parameterized episodic memory framework, to hierarchical reinforcement learning. The model employs a hierarchical implicit memory planning approach for counterfactual trajectory value estimation: starting from the final step and moving recursively backward along the trajectory, a hidden plan is formed within the episodic memory. Experience is aggregated both along trajectories and across trajectories, and the model is updated with multi-headed backpropagation similar to that of bootstrapped neural networks. The model is also analyzed theoretically to establish its convergence and effectiveness. Experiments on four-room games, MuJoCo, and UE4-based active tracking show that the hierarchical episodic control model effectively improves training efficiency, with notable gains in both low-dimensional and high-dimensional environments, even under sparse rewards. The model is therefore suited to application scenarios that do not rely heavily on exploration, such as unmanned aerial vehicles, robot control, and computer vision.
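The backward, recursive value estimation described in the abstract can be made concrete with a short sketch. The snippet below is a minimal illustration, assuming an episodic backup in the style of the parameterized episodic memory line of work the paper builds on: moving back from the final step, each return target takes the better of staying on the recorded trajectory and switching to the best continuation stored in episodic memory. The function name, the memory_values array, and the discount gamma are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def backward_memory_targets(rewards, memory_values, gamma=0.99):
    """Backward-recursive return targets for one trajectory (illustrative).

    rewards[t]       -- reward received at step t
    memory_values[t] -- value of the state at step t retrieved from episodic
                        memory (e.g., the best return recorded so far across
                        trajectories visiting a similar state)
    """
    T = len(rewards)
    targets = np.zeros(T)
    next_best = 0.0  # value of the successor state; 0 past the final step
    for t in reversed(range(T)):
        # On-trajectory backup from the step that follows t.
        targets[t] = rewards[t] + gamma * next_best
        # Counterfactual choice for the next (earlier) step: continue along
        # the recorded trajectory, or switch to the better continuation that
        # episodic memory has seen elsewhere.
        next_best = max(targets[t], memory_values[t])
    return targets

# Example: memory knows a continuation at t=1 worth 2.0, better than the
# trajectory's own tail, so the target at t=0 is lifted counterfactually.
print(backward_memory_targets([0.0, 0.0, 1.0], [0.0, 2.0, 0.0]))
# prints approximately [1.98 0.99 1.0]
```

The multi-headed update the abstract mentions would then regress several value heads on a shared trunk toward targets of this kind, each head trained on its own bootstrap mask of the data, as in bootstrapped DQN-style networks.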
