Article

Analytical Solution to a Discrete-Time Model for Dynamic Learning and Decision Making

Journal

MANAGEMENT SCIENCE
Volume 68, Issue 8, Pages 5924-5957

Publisher

INFORMS
DOI: 10.1287/mnsc.2021.4194

Keywords

learning and doing; sequential hypothesis testing; dynamic pricing with demand learning; multiarmed bandits; partially observable Markov decision processes

Funding

  1. Natural Sciences and Engineering Research Council of Canada [RGPIN-2014-04979]

Abstract

This paper studies an infinite-horizon discrete-time model for dynamic learning and decision making problems. By adopting a new solution framework based on the efficient frontier of continuation-value vectors, the paper provides an analytical solution with structural properties analogous to continuous-time models and a useful tool for new discoveries in discrete-time models.
Problems concerning dynamic learning and decision making are difficult to solve analytically. We study an infinite-horizon discrete-time model with a constant unknown state that may take two possible values. As a special partially observable Markov decision process (POMDP), this model unifies several types of learning-and-doing problems such as sequential hypothesis testing, dynamic pricing with demand learning, and multiarmed bandits. We adopt a relatively new solution framework from the POMDP literature based on the backward construction of the efficient frontier(s) of continuation-value vectors. This framework accommodates different optimality criteria simultaneously. In the infinite-horizon setting, with the aid of a set of signal quality indices, the extreme points on the efficient frontier can be linked through a set of difference equations and solved analytically. The solution carries structural properties analogous to those obtained under continuous-time models, and it provides a useful tool for making new discoveries through discrete-time models.
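To illustrate the kind of backward construction the abstract refers to, the sketch below runs standard value iteration for a two-state POMDP, representing the value function by a set of continuation-value vectors (alpha-vectors) and keeping only their efficient frontier (upper envelope over beliefs). The problem instance (a "safe" uninformative action vs. a "risky" informative action), the reward and signal numbers, and the grid-based pruning are all illustrative assumptions, not the paper's model or its analytical solution method.

```python
import itertools

# Hypothetical two-state instance: the unknown state theta is 0 (low demand)
# or 1 (high demand) and never changes. Action 0 is safe and uninformative;
# action 1 is risky but generates an informative binary signal.
REWARD = {  # REWARD[a][theta]
    0: (1.0, 1.0),   # safe action: reward 1 regardless of theta
    1: (0.0, 2.0),   # risky action: pays only under high demand
}
OBS_PROB = {  # P(o = 1 | theta, a): the action determines signal quality
    0: (0.5, 0.5),   # safe action reveals nothing
    1: (0.2, 0.8),   # risky action reveals information about theta
}
GAMMA = 0.9  # discount factor

def backup(alphas):
    """One backward step: for each action a and each assignment sigma of an
    old vector to each observation, build the candidate vector
    alpha(theta) = r(a, theta) + gamma * sum_o P(o | theta, a) * sigma_o(theta)."""
    candidates = []
    for a in (0, 1):
        for sig0, sig1 in itertools.product(alphas, repeat=2):
            vec = []
            for th in (0, 1):
                p1 = OBS_PROB[a][th]                       # P(o = 1 | theta, a)
                cont = (1 - p1) * sig0[th] + p1 * sig1[th]  # expected continuation
                vec.append(REWARD[a][th] + GAMMA * cont)
            candidates.append(tuple(vec))
    return prune(candidates)

def prune(vectors, grid=51):
    """Keep the efficient frontier: vectors attaining the upper envelope at
    some belief b = P(theta = 1), checked on a grid (a simplified pruning;
    exact POMDP solvers use linear programs instead)."""
    keep = set()
    for i in range(grid):
        b = i / (grid - 1)
        keep.add(max(vectors, key=lambda v: (1 - b) * v[0] + b * v[1]))
    return sorted(keep)

alphas = [(0.0, 0.0)]        # terminal continuation values
for _ in range(25):          # finite-horizon approximation of the infinite horizon
    alphas = backup(alphas)

# The value function is the maximum over the frontier, convex in the belief b.
value_at_half = max(0.5 * v[0] + 0.5 * v[1] for v in alphas)
print(len(alphas), round(value_at_half, 3))
```

In this two-state setting each alpha-vector is a line over the belief interval [0, 1], so the efficient frontier is the upper envelope of finitely many lines; the paper's contribution, by contrast, is to characterize the extreme points of this frontier analytically via difference equations rather than by numerical enumeration.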

