Journal
IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
Volume 17, Issue 3, Pages 1420-1431
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TASE.2019.2956762
Keywords
Job shop scheduling; Processor scheduling; Training; Artificial neural networks; Schedules; Flexible job-shop scheduling; multichip product (MCP); neural networks (NNs); reinforcement learning (RL); semiconductor manufacturing
Funding
- National Research Foundation of Korea (NRF) - Korea Government (MSIP) [NRF-2015R1D1A1A01057496]
- Professional Consulting Group
- National Research Foundation of Korea [21A20130012638] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)
Abstract
As semiconductor manufacturers have recently focused on producing multichip products (MCPs), scheduling semiconductor manufacturing operations has become complicated due to constraints related to reentrant production flows, sequence-dependent setups, and alternative machines. At the same time, the scheduling problems need to be solved frequently to effectively manage the variabilities in production requirements, available machines, and initial setup status. To minimize the makespan of an MCP scheduling problem, we propose a setup change scheduling method using reinforcement learning (RL) in which each agent makes setup decisions in a decentralized manner while learning a centralized policy through a neural network shared among the agents, allowing the method to cope with changes in the number of machines. Furthermore, novel definitions of state, action, and reward are proposed to address the variabilities in production requirements and initial setup status. Numerical experiments demonstrate that the proposed approach outperforms rule-based, metaheuristic, and other RL methods in terms of makespan while incurring shorter computation times than the metaheuristics considered.

Note to Practitioners
This article studies a scheduling problem for the die attach and wire bonding stages of a semiconductor packaging line. Because of the variabilities in production requirements, the number of available machines, and initial setup status, it is challenging for a scheduler to produce high-quality schedules within a given time limit using existing approaches. In this article, a new scheduling method using reinforcement learning is proposed to enhance robustness against these variabilities while achieving performance improvements. To verify this robustness, neural networks (NNs) trained on small-scale scheduling problems are used to solve large-scale scheduling problems. Experimental results show that the proposed method outperforms the existing approaches while requiring a short computation time. Furthermore, the trained NN performs well on unseen real-world-scale problems even under stochastic processing times, suggesting the viability of the proposed method for real-world semiconductor packaging lines.
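The abstract describes machine agents that act in a decentralized way while sharing one policy network, with state, action, and reward defined around setup changes and production requirements. The toy sketch below is an assumption-laden analogue of that idea, not the paper's actual formulation: a tabular Q-table stands in for the shared neural network, the problem sizes and durations are invented, and the state (own setup, remaining demand), action (next product type), and reward (negative elapsed time, so fewer setup changes means higher return) are simplified illustrations.

```python
import random
from collections import defaultdict

# Invented toy instance: 2 machines, 3 product types, 5 lots per type.
N_MACHINES, N_TYPES = 2, 3
LOTS_PER_TYPE = 5
SETUP_TIME, PROC_TIME = 3, 1        # assumed durations
ALPHA, GAMMA, EPS = 0.2, 0.95, 0.1  # learning rate, discount, exploration

# ONE Q-table shared by all agents, mirroring the shared-network idea:
# the policy is indifferent to which machine queries it.
Q = defaultdict(float)              # key: (own_setup, demand_tuple, action)

def choose(setup, demand, explore):
    """Epsilon-greedy choice among product types that still have demand."""
    feasible = [a for a in range(N_TYPES) if demand[a] > 0]
    if explore and random.random() < EPS:
        return random.choice(feasible)
    return max(feasible, key=lambda a: Q[(setup, demand, a)])

def run_episode(train=True):
    """Simulate one schedule to completion; return its makespan."""
    demand = [LOTS_PER_TYPE] * N_TYPES
    setups = [None] * N_MACHINES    # None = no initial setup
    free_at = [0] * N_MACHINES      # time at which each machine becomes idle
    while sum(demand) > 0:
        m = min(range(N_MACHINES), key=free_at.__getitem__)  # next idle machine
        state = (setups[m], tuple(demand))
        a = choose(setups[m], tuple(demand), explore=train)
        cost = PROC_TIME + (SETUP_TIME if setups[m] != a else 0)
        free_at[m] += cost
        setups[m] = a
        demand[a] -= 1
        if train:                   # one-step Q-learning update on the shared table
            feasible = [x for x in range(N_TYPES) if demand[x] > 0]
            best_next = max((Q[(a, tuple(demand), x)] for x in feasible), default=0.0)
            key = state + (a,)
            Q[key] += ALPHA * (-cost + GAMMA * best_next - Q[key])
    return max(free_at)

random.seed(0)
for _ in range(2000):
    run_episode(train=True)
makespan = run_episode(train=False)  # greedy rollout with the learned shared policy
print("greedy makespan after training:", makespan)
```

Because every agent reads and writes the same table, experience gathered on any machine improves the common policy, which is what lets the paper's shared network handle a varying number of machines without retraining per machine.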