Article

Deep reinforcement learning for continuous wood drying production line control

Journal

COMPUTERS IN INDUSTRY
Volume 154

Publisher

ELSEVIER
DOI: 10.1016/j.compind.2023.104036

Keywords

Deep reinforcement learning; Production control; Robustness; Discrete-event simulation; Forest-products industry


Continuous high-frequency wood drying, integrated with a traditional wood finishing line, improves the value of lumber by correcting moisture content piece by piece. Using reinforcement learning for continuous drying operation policies outperforms current industry methods and remains robust to sudden disturbances.
Continuous high-frequency wood drying, when integrated with a traditional wood finishing line, allows correcting moisture content one piece of lumber at a time in order to improve its value. However, the integration of this precision drying process complicates sawmill logistics. The high stochasticity of lumber properties and less-than-ideal lumber routing decisions may cause bottlenecks and reduce productivity. To counteract this problem and fully exploit the technology, we propose using reinforcement learning (RL) to learn continuous drying operation policies. An RL agent interacts with a simulated model of the finishing line to optimize its policies. Our results, based on multiple simulations, show that the learned policies outperform the heuristic currently used in industry and are robust to the sudden disturbances that frequently occur in real contexts.
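The interaction pattern the abstract describes, an RL agent learning routing policies against a simulated finishing line, can be sketched in miniature. The toy environment, states, actions, and rewards below are all invented for illustration (the paper uses a full discrete-event simulation, not this two-state model): each arriving lumber piece is either wet or dry, and the agent decides whether to route it through the continuous dryer. A tabular, epsilon-greedy learner then recovers the obvious policy.

```python
import random

class ToyFinishingLine:
    """Invented stand-in for a discrete-event finishing-line simulation.
    State: incoming piece is dry (0) or wet (1).
    Action: pass through (0) or route to the continuous dryer (1)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        self.state = self.rng.choice([0, 1])
        return self.state

    def step(self, action):
        if self.state == 1 and action == 1:
            reward = 1.0   # wet piece dried: value recovered
        elif self.state == 0 and action == 0:
            reward = 0.5   # dry piece passed through: no wasted dryer capacity
        else:
            reward = -1.0  # misrouted piece: bottleneck or lost value
        self.state = self.rng.choice([0, 1])  # next piece arrives
        return self.state, reward

def train(steps=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular learning with epsilon-greedy exploration. Rewards here are
    immediate per piece, so a one-step (bandit-style) update suffices."""
    env = ToyFinishingLine(seed)
    rng = random.Random(seed + 1)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    s = env.reset()
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.choice([0, 1])                       # explore
        else:
            a = max((0, 1), key=lambda a_: q[(s, a_)])   # exploit
        s2, r = env.step(a)
        q[(s, a)] += alpha * (r - q[(s, a)])             # move toward observed reward
        s = s2
    return q

q = train()
# Greedy policy learned from Q-values: dry the wet pieces, pass the dry ones.
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in (0, 1)}
```

The paper's setting replaces this toy with a deep RL agent and a stochastic discrete-event simulator, but the training loop, policy extracted from learned values after repeated simulated interaction, has the same shape.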

