Article

Learning Potential in Subgoal-Based Reward Shaping

Journal

IEEE ACCESS
Volume 11, Issue -, Pages 17116-17137

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2023.3246267

Keywords

Trajectory; Reinforcement learning; Human factors; Planning; Deep learning; Optimization; Machine learning algorithms; deep reinforcement learning; subgoals; reward shaping; potential-based reward shaping; subgoal-based reward shaping


Human knowledge can reduce the number of iterations required in reinforcement learning, and subgoal-based reward shaping shows promise in certain domains. By parameterizing the hyperparameter and learning the potential function, we accelerate value learning and obtain better results than baseline algorithms.
Human knowledge can reduce the number of iterations required to learn in reinforcement learning. Although the most common approach uses trajectories, they are difficult to acquire in certain domains. Subgoals, which are intermediate states, have been studied as an alternative to trajectories. Subgoal-based reward shaping is a method that adds rewards, derived from a sequence of subgoals, to the environmental rewards. The potential function, a component of subgoal-based reward shaping, is shaped by a hyperparameter that controls its output. However, selecting this hyperparameter is not easy because its appropriate value depends on the reward function of the environment, which is unknown even though its output is observable. We propose learned potential, which parameterizes the hyperparameter and acquires the potential through learning. A value is the expected accumulated reward when an agent follows its policy from the current state, and it is strongly related to the reward function. With learned potential, we build an abstract state space, a higher-level representation of the state, from a sequence of subgoals and use the value over the abstract states as the potential to accelerate value learning. An n-step temporal-difference (TD) method learns the values over the abstract states. We conducted experiments to evaluate the effectiveness of learned potential, and the results indicate its effectiveness compared with a baseline reinforcement learning algorithm and several reward-shaping algorithms. The results also indicate that participants' subgoals are superior to randomly generated subgoals under learned potential. We discuss the appropriate number of subgoals for learned potential, show that partially ordered subgoals are helpful for learned potential, find that learned potential cannot make learning efficient under step-penalized rewards, and show that learned potential is superior to a non-learned potential under mixed positive and negative rewards.
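The core mechanism described in the abstract, adding a potential-based shaping term over an abstract subgoal space while the potential itself is learned by temporal difference, can be sketched roughly as follows. This is a minimal tabular illustration, not the paper's implementation: it uses a one-step TD update in place of the n-step method, and all names (`shaped_reward`, `td_update_potential`, the "number of subgoals achieved" abstraction) are assumptions for illustration.

```python
import numpy as np

def shaped_reward(r, phi, z, z_next, gamma=0.99):
    """Potential-based shaping: add F = gamma * phi(z') - phi(z) to the
    environment reward r, where z, z' are abstract (subgoal) states."""
    return r + gamma * phi[z_next] - phi[z]

def td_update_potential(phi, z, z_next, r, alpha=0.1, gamma=0.99):
    """One-step TD update of the potential over abstract states
    (a stand-in for the paper's n-step TD learning)."""
    td_target = r + gamma * phi[z_next]
    phi[z] += alpha * (td_target - phi[z])
    return phi

# Illustrative abstraction: the abstract state is the number of
# subgoals achieved so far along the given subgoal sequence.
n_subgoals = 3
phi = np.zeros(n_subgoals + 1)  # learned potential, initialized to zero

# Toy transition: achieving a subgoal (z: 1 -> 2) with reward 1.0
# first updates the potential, then shapes a subsequent reward.
phi = td_update_potential(phi, z=1, z_next=2, r=1.0)
r_shaped = shaped_reward(0.0, phi, z=1, z_next=2)
```

Because the shaping term has the potential-based form gamma * phi(z') - phi(z), the optimal policy of the original task is preserved regardless of how phi is learned; only the speed of value learning changes.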
