Article

What You Should Know About Approximate Dynamic Programming

Journal

NAVAL RESEARCH LOGISTICS
Volume 56, Issue 3, Pages 239-249

Publisher

WILEY
DOI: 10.1002/nav.20347

Keywords

approximate dynamic programming; reinforcement learning; neuro-dynamic programming; stochastic optimization; Monte Carlo simulation

Funding

  1. Air Force Office of Scientific Research [AFOSR-F49620-93-1-0098]

Abstract

Approximate dynamic programming (ADP) is a broad umbrella for a modeling and algorithmic strategy for solving problems that are sometimes large and complex, and are usually (but not always) stochastic. It is most often presented as a method for overcoming the classic curse of dimensionality that is well known to plague the use of Bellman's equation. For many problems, there are actually up to three curses of dimensionality. But the richer message of approximate dynamic programming is learning what to learn, and how to learn it, to make better decisions over time. This article provides a brief review of approximate dynamic programming, without intending to be a complete tutorial. Instead, our goal is to provide a broader perspective of ADP and how it should be approached from the perspective of different problem classes. (C) 2009 Wiley Periodicals, Inc. Naval Research Logistics 56: 239-249, 2009
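The abstract contrasts solving Bellman's equation exactly over every state with the ADP strategy of stepping forward through Monte Carlo samples and smoothing observed values into an approximation. The sketch below illustrates that idea on a hypothetical toy problem (the problem, the state space, and all function names are this reviewer's invention, not taken from the article): rather than a full backward sweep over all states, each iteration samples one state, makes a decision using the current value approximation, and blends the sampled observation into the estimate.

```python
import random

# Hypothetical toy problem (not from the article): an agent on states 0..N
# chooses to move left (-1) or right (+1); arriving at state N pays reward 1.
# Instead of an exact Bellman sweep over all states, we follow the ADP
# pattern sketched in the abstract: Monte Carlo sampling of states plus
# exponential smoothing of observed values into an approximation V[s].

N = 10
GAMMA = 0.9  # discount factor

def step(s, a):
    """Deterministic transition: move by a, clipped to [0, N]; reward 1 on reaching N."""
    s2 = max(0, min(N, s + a))
    reward = 1.0 if s2 == N else 0.0
    return s2, reward

def adp_value_estimates(iters=5000, alpha=0.1, seed=0):
    rng = random.Random(seed)
    V = [0.0] * (N + 1)  # value-function approximation, one estimate per state
    for _ in range(iters):
        s = rng.randrange(N)  # sample a state instead of sweeping all of them
        # Decide using the current approximation (greedy one-step lookahead).
        best = max((-1, +1),
                   key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
        s2, r = step(s, best)
        v_hat = r + GAMMA * V[s2]                 # sampled value observation
        V[s] = (1 - alpha) * V[s] + alpha * v_hat  # smoothing update
    return V

if __name__ == "__main__":
    V = adp_value_estimates()
    print([round(v, 3) for v in V])  # estimates rise toward state N
```

On this deterministic toy problem the estimates settle near the exact solution V[s] = 0.9^(N-1-s); the point of the sketch is only the update pattern: sample, decide with the approximation, observe, smooth. Real ADP applications replace the lookup table `V` with a parametric approximation precisely because enumerating states is what the curse of dimensionality forbids.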


