Article

Deep reinforcement learning for continuous wood drying production line control

Journal

COMPUTERS IN INDUSTRY
Volume 154

Publisher

ELSEVIER
DOI: 10.1016/j.compind.2023.104036

Keywords

Deep reinforcement learning; Production control; Robustness; Discrete-event simulation; Forest-products industry

Abstract

Continuous high-frequency wood drying, integrated with a traditional wood finishing line, improves the value of lumber by correcting moisture content piece by piece. Using reinforcement learning for continuous drying operation policies outperforms current industry methods and remains robust to sudden disturbances.
Continuous high-frequency wood drying, when integrated with a traditional wood finishing line, allows correcting moisture content one piece of lumber at a time in order to improve its value. However, integrating this precision drying process complicates sawmill logistics. The high stochasticity of lumber properties and less than ideal lumber routing decisions may cause bottlenecks and reduce productivity. To counteract this problem and fully exploit the technology, we propose to use reinforcement learning (RL) to learn continuous drying operation policies. An RL agent interacts with a simulated model of the finishing line to optimize its policies. Our results, based on multiple simulations, show that the learned policies outperform the heuristic currently used in industry and are robust to the sudden disturbances that frequently occur in real settings.
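To make the interaction loop described in the abstract concrete, the sketch below shows a toy Gym-style environment standing in for a discrete-event simulation of the finishing line, with a per-piece routing decision (send to the continuous dryer or bypass it) and a simple epsilon-greedy tabular Q-learning agent. This is an illustrative sketch only, not the authors' simulator or algorithm: all class names, state variables, reward values, and hyperparameters are hypothetical, and the paper itself uses deep RL with a much richer simulation model.

```python
# Illustrative sketch only. FinishingLineSimEnv, its state (queue length, moisture
# class), and all rewards/parameters are hypothetical placeholders; the paper uses
# deep RL with a detailed discrete-event simulation, not this tabular Q-learning toy.
import random
from collections import defaultdict


class FinishingLineSimEnv:
    """Toy environment: for each incoming lumber piece, decide whether to route it
    through the continuous dryer (action 1) or bypass the dryer (action 0)."""

    def __init__(self, queue_capacity=5, episode_length=200, seed=0):
        self.queue_capacity = queue_capacity
        self.episode_length = episode_length
        self.rng = random.Random(seed)

    def reset(self):
        self.t = 0
        self.dryer_queue = 0
        self.moisture = self.rng.randint(0, 3)  # discretized moisture class of next piece
        return (self.dryer_queue, self.moisture)

    def step(self, action):
        reward = 0.0
        if action == 1:  # send the piece to the continuous dryer
            if self.dryer_queue < self.queue_capacity:
                self.dryer_queue += 1
                reward += 1.0 if self.moisture >= 2 else -0.2  # value gained only if the piece is wet
            else:
                reward -= 1.0  # bottleneck: dryer queue overflow
        else:  # bypass the dryer
            reward -= 0.5 if self.moisture >= 2 else 0.1
        if self.dryer_queue > 0 and self.rng.random() < 0.7:
            self.dryer_queue -= 1  # dryer finishes a piece (stochastic service time)
        self.t += 1
        self.moisture = self.rng.randint(0, 3)  # stochastic properties of the next piece
        done = self.t >= self.episode_length
        return (self.dryer_queue, self.moisture), reward, done


def train(episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Epsilon-greedy tabular Q-learning over the toy routing environment."""
    env = FinishingLineSimEnv()
    q = defaultdict(float)  # Q[(state, action)]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                action = random.randint(0, 1)
            else:
                action = max((0, 1), key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in (0, 1))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q


if __name__ == "__main__":
    q_table = train()
    print("learned state-action values for", len(q_table), "state-action pairs")
```

The point of the sketch is the structure of the loop (simulated environment, stochastic lumber arrivals, routing actions, reward tied to moisture correction and congestion), which mirrors the setup described in the abstract; the actual policy class, state representation, and learning algorithm in the paper differ.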

