Journal
COMPUTERS & INDUSTRIAL ENGINEERING
Volume 156, Issue -, Pages -
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.cie.2021.107221
Keywords
Deep reinforcement learning; Order batching; Sequential decision making; Machine learning; Warehousing; E-commerce
In this paper, the authors propose a Deep Reinforcement Learning approach, combined with heuristics, for batching and picking orders in a warehouse; it outperforms the proposed heuristics in most of the tested cases and learns a strategy that differs from the hand-crafted ones.
In e-commerce markets, on-time delivery is of great importance to customer satisfaction. In this paper, we present a Deep Reinforcement Learning (DRL) approach, together with a heuristic, for deciding how and when arriving orders should be batched and picked in a warehouse to minimize the number of tardy orders. In particular, the technique facilitates making decisions on whether an order should be picked individually (pick-by-order) or picked in a batch with other orders (pick-by-batch), and if so, with which other orders. We approach the problem by formulating it as a semi-Markov decision process and developing a vector-based state representation that includes the characteristics of the warehouse system. This allows us to create a deep reinforcement learning solution that learns a strategy by interacting with the environment and solves the problem with a Proximal Policy Optimization algorithm. We evaluate the performance of the proposed DRL approach by comparing it with several batching and sequencing heuristics in different problem settings. The results show that the DRL approach can develop a strategy that produces consistent, good solutions and performs better than the proposed heuristics in most of the tested cases. We show that the strategy learned by DRL is different from the hand-crafted heuristics. In this paper, we demonstrate that the benefits from recent advancements of Deep Reinforcement Learning can be transferred to solve sequential decision-making problems in warehousing operations.
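The semi-MDP formulation and vector-based state described in the abstract can be illustrated with a toy model. Everything below is a hypothetical sketch, not the paper's actual environment: the class names, the batching travel-time saving of 20%, the four state features, and the hand-crafted threshold policy (a stand-in for the trained PPO network) are all assumptions made for illustration.

```python
import random


class Order:
    def __init__(self, arrival, due, pick_time):
        self.arrival = arrival
        self.due = due
        self.pick_time = pick_time


class BatchingEnv:
    """Toy semi-Markov decision process for order batching.

    At each decision epoch the single picker is idle.  Actions:
      0 -> pick the oldest waiting order alone (pick-by-order)
      1 -> pick the two most urgent waiting orders together (pick-by-batch,
           with an assumed 20% travel-time saving over two separate tours)
    The clock jumps forward by the duration of the chosen pick, so decision
    epochs are irregularly spaced -- the semi-MDP property.
    """

    BATCH_SAVING = 0.8  # hypothetical economy of scale for batching

    def __init__(self, n_orders=30, seed=0):
        rng = random.Random(seed)
        t = 0.0
        self.future = []
        for _ in range(n_orders):
            t += rng.expovariate(1.0)          # Poisson order arrivals
            pick = rng.uniform(1.0, 3.0)       # picking-tour duration
            self.future.append(Order(t, t + rng.uniform(5.0, 15.0), pick))
        self.clock = 0.0
        self.waiting = []
        self.tardy = 0
        self.done = False
        self._admit()

    def _admit(self):
        """Move arrived orders into the waiting pool; idle until one exists."""
        while self.future and self.future[0].arrival <= self.clock:
            self.waiting.append(self.future.pop(0))
        if not self.waiting:
            if self.future:                     # picker idles to next arrival
                self.clock = self.future[0].arrival
                self._admit()
            else:
                self.done = True

    def state(self):
        """Vector-based state: [#waiting, min slack, mean slack, clock]."""
        if not self.waiting:
            return [0.0, 0.0, 0.0, self.clock]
        slacks = [o.due - self.clock for o in self.waiting]
        return [float(len(self.waiting)), min(slacks),
                sum(slacks) / len(slacks), self.clock]

    def step(self, action):
        """Execute the chosen pick; reward is minus the tardy count."""
        self.waiting.sort(key=lambda o: o.due)  # most urgent first
        if action == 1 and len(self.waiting) >= 2:
            picked = [self.waiting.pop(0), self.waiting.pop(0)]
            duration = self.BATCH_SAVING * sum(o.pick_time for o in picked)
        else:
            picked = [self.waiting.pop(0)]
            duration = picked[0].pick_time
        self.clock += duration
        late = sum(1 for o in picked if self.clock > o.due)
        self.tardy += late
        self._admit()
        return -late, self.done


# Rolling out one episode with a hand-crafted threshold policy, standing in
# for the learned PPO policy: batch only when the queue is long enough and
# the most urgent order still has some slack.
env = BatchingEnv(seed=1)
while not env.done:
    n_waiting, min_slack, _, _ = env.state()
    env.step(1 if n_waiting >= 2 and min_slack > 4.0 else 0)
print("tardy orders:", env.tardy)
```

A learned policy would replace the threshold rule with a neural network mapping the state vector to action probabilities, trained with PPO on the episode rewards.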