Article

An Improved N-Step Value Gradient Learning Adaptive Dynamic Programming Algorithm for Online Learning

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2019.2919338

Keywords

Adaptive dynamic programming (ADP); convergence analysis; eligibility traces; online learning; reinforcement learning; temporal difference (TD); value gradient learning (VGL)

Funding

  1. Missouri University of Science and Technology Intelligent Systems Center
  2. Mary K. Finley Missouri Endowment
  3. National Science Foundation
  4. Lifelong Learning Machines Program from the DARPA/Microsystems Technology Office
  5. Army Research Laboratory (ARL)
  6. The U.S. Government [W911NF-18-2-0260]
  7. Higher Committee for Educational Development (HCED)
  8. Basra Oil Company (BOC), Iraq

Abstract

In problems with complex dynamics and challenging state spaces, the dual heuristic programming (DHP) algorithm has been shown, both theoretically and experimentally, to perform well. DHP was recently extended by an approach called value gradient learning (VGL), which was inspired by a version of temporal difference (TD) learning that uses eligibility traces. The eligibility traces apply an exponential decay to older observations, governed by a decay parameter lambda. This approach is known as TD(lambda), and its DHP extension is known as VGL(lambda), where VGL(0) is identical to DHP. VGL has demonstrated convergence and other desirable properties, but it is primarily suited to batch learning. Online learning requires an eligibility-trace workspace matrix that the batch version of VGL does not need. Since online learning is desirable for many applications, it is important to remove this computational and memory impediment. This paper introduces a dual-critic version of VGL, called N-step VGL (NSVGL), that does not need the eligibility-trace workspace matrix, thereby allowing online learning. Furthermore, the combination of critic networks allows NSVGL to learn faster. The first critic is similar to DHP and is adapted with TD(0) learning, while the second critic is adapted with the gradient of n-step TD(lambda) learning. Both critics are combined to train an actor network. By mixing current information with event history, the combined feedback signals from the two critic networks reach an optimal decision faster than traditional adaptive dynamic programming (ADP). Convergence proofs are provided: the gradients of the one-step and n-step value functions are monotonically nondecreasing and converge to the optimum. Two simulation case studies are presented to show the superior performance of NSVGL.
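The abstract contrasts eligibility-trace TD(lambda) learning, which must carry a trace (the extra "workspace" storage) between online updates, with n-step returns, which bootstrap after n transitions and need no persistent trace. The minimal sketch below illustrates that contrast for an ordinary scalar value function with linear features; it is not the paper's NSVGL algorithm, whose dual critics learn value gradients and jointly train an actor network. The feature vectors, step sizes, and the small episode in the usage stub are illustrative assumptions.

```python
import numpy as np

def td_lambda_update(w, trace, phi_s, phi_next, r, alpha=0.1, gamma=0.95, lam=0.7):
    """One online TD(lambda) step with an accumulating eligibility trace."""
    delta = r + gamma * np.dot(w, phi_next) - np.dot(w, phi_s)  # one-step TD error
    trace = gamma * lam * trace + phi_s                         # older steps decay by gamma*lambda
    w = w + alpha * delta * trace                                # credit spread along the trace
    return w, trace

def n_step_td_update(w, phi_s0, rewards, phi_after_n, alpha=0.1, gamma=0.95):
    """One n-step TD update: bootstrap after n rewards, no persistent trace to store."""
    g = sum((gamma ** k) * r for k, r in enumerate(rewards))     # discounted n-step return
    g += (gamma ** len(rewards)) * np.dot(w, phi_after_n)        # bootstrapped tail value
    delta = g - np.dot(w, phi_s0)                                # n-step TD error at the first state
    return w + alpha * delta * phi_s0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, trace = np.zeros(4), np.zeros(4)
    phis = [rng.normal(size=4) for _ in range(4)]                # illustrative feature vectors
    rewards = [1.0, 0.0, 0.5]
    # TD(lambda): one update per transition, carrying the trace forward between updates.
    for phi_s, phi_next, r in zip(phis[:-1], phis[1:], rewards):
        w, trace = td_lambda_update(w, trace, phi_s, phi_next, r)
    # n-step TD: one update per n-step window, using only the stored window of rewards.
    w2 = n_step_td_update(np.zeros(4), phis[0], rewards, phis[-1])
    print(w, w2)
```

In NSVGL the analogous roles are played by a TD(0)-style (DHP-like) critic and an n-step critic over value gradients, so the per-step update can remain online without maintaining a trace matrix.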
