4.6 Article

A Deep Q-Network for the Beer Game: Deep Reinforcement Learning for Inventory Optimization

Journal

Manufacturing & Service Operations Management

Publisher

INFORMS
DOI: 10.1287/msom.2020.0939

Keywords

inventory optimization; reinforcement learning; beer game

Funding

  1. National Science Foundation Extreme Science and Engineering Discovery Environment (XSEDE) [DDM180004, IRI180020]
  2. National Science Foundation, Directorate for Engineering, Division of Civil, Mechanical and Manufacturing Innovation [1663256]
  3. National Science Foundation, Directorate for Computer & Information Science & Engineering, Division of Computing and Communication Foundations [1618717, 1740796]

Abstract

The beer game is widely used to demonstrate concepts in supply chain management. Experiments show that a deep reinforcement learning algorithm attains near-optimal order quantities when teammates follow a base-stock policy and outperforms that policy when other agents follow a more realistic model of human ordering behavior.
Problem definition: The beer game is widely used in supply chain management classes to demonstrate the bullwhip effect and the importance of supply chain coordination. The game is a decentralized, multiagent, cooperative problem that can be modeled as a serial supply chain network in which agents choose order quantities while cooperatively attempting to minimize the network's total cost, although each agent observes only local information.

Academic/practical relevance: Under some conditions, a base-stock replenishment policy is optimal. However, in a decentralized supply chain in which some agents act irrationally, there is no known optimal policy for an agent wishing to act optimally.

Methodology: We propose a deep reinforcement learning (RL) algorithm to play the beer game. Our algorithm makes no assumptions about costs or other settings. As with any deep RL algorithm, training is computationally intensive, but once trained, the algorithm executes in real time. We propose a transfer-learning approach so that training performed for one agent can be adapted quickly for other agents and settings.

Results: When playing with teammates who follow a base-stock policy, our algorithm obtains near-optimal order quantities. More important, it performs significantly better than a base-stock policy when other agents use a more realistic model of human ordering behavior. We observe similar results using a real-world data set. Sensitivity analysis shows that a trained model is robust to changes in the cost coefficients. Finally, applying transfer learning reduces the training time by one order of magnitude.

Managerial implications: This paper shows how artificial intelligence can be applied to inventory optimization. Our approach can be extended to other supply chain optimization problems, especially those in which supply chain partners act in irrational or unpredictable ways. Our RL agent has been integrated into a new online beer game, which has been played more than 17,000 times by more than 4,000 people.
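For context, the base-stock benchmark referenced above reduces to a one-line ordering rule: order up to a fixed target level whenever the inventory position falls below it. The sketch below is a generic illustration with our own variable names, not code from the paper:

```python
def base_stock_order(base_stock_level: float, inventory_position: float) -> float:
    """Base-stock (order-up-to) policy.

    inventory_position = on-hand inventory + pipeline (on-order) inventory
                         - backorders.
    Orders exactly enough to raise the inventory position back to the
    target level; never orders a negative quantity.
    """
    return max(0.0, base_stock_level - inventory_position)

# Example: target level 20, current inventory position 12 -> order 8.
print(base_stock_order(20, 12))
```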
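The core deep Q-network machinery behind the methodology can likewise be sketched briefly. Everything below is an illustrative assumption rather than the paper's implementation: the use of PyTorch, the values of STATE_DIM and N_ACTIONS, and the layer sizes are ours, and the paper's reward shaping and full training loop are omitted.

```python
import random
import torch
import torch.nn as nn

STATE_DIM = 50   # assumed: a window of local observations (inventory, orders, shipments)
N_ACTIONS = 11   # assumed: a small discrete set of candidate order quantities

class QNet(nn.Module):
    """Maps a local-observation vector to one Q-value per candidate order."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def epsilon_greedy(qnet: QNet, state: torch.Tensor, eps: float) -> int:
    """Explore with probability eps; otherwise take the greedy order."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

def td_loss(qnet: QNet, target_net: QNet, batch, gamma: float = 0.99) -> torch.Tensor:
    """One-step temporal-difference loss on a replay minibatch.

    batch = (states, actions, rewards, next_states, done_flags) as tensors,
    with actions as long integers and done_flags as 0/1 floats.
    """
    s, a, r, s2, done = batch
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s2).max(1).values
    return nn.functional.mse_loss(q, target)
```

In this framing, the transfer-learning step mentioned in the abstract would amount to initializing a new agent's QNet from an already-trained agent's weights and fine-tuning it for the new role or setting.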
