Article

A Reinforcement Learning Approach to Dynamic Scheduling in a Product-Mix Flexibility Environment

Journal

IEEE Access
Volume 8, Pages 106542-106553

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/ACCESS.2020.3000781

Keywords

Manufacturing execution system; dynamic scheduling; machine learning; reinforcement learning; Q-learning

Funding

  1. Ministry of Science and Technology, Taiwan [MOST 107-2221-E-007-076-MY2]

Abstract

Machine bottlenecks, resulting from shifting and unbalanced machine loads caused by resource capacity limitations, impair production systems with product-mix flexibility. The knowledge base (KB) of a dynamic scheduling control system should therefore itself be dynamic and include a knowledge revision mechanism that monitors crucial changes in the production system. In this paper, reinforcement learning (RL)-based dynamic scheduling and a selection mechanism for multiple dynamic scheduling rules (MDSRs) are proposed to support the operating characteristics of a flexible manufacturing system (FMS) and a semiconductor wafer fabrication facility (FAB). The proposed RL-based MDSR selection mechanism consists of an initial MDSR KB generation phase and a KB revision phase. Across various performance criteria, the proposed approach yields system performance superior to that of the fixed-decision scheduling approach, the machine learning classification approach, and the classical MDSR selection mechanism.
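
The abstract describes a Q-learning mechanism that selects among multiple dynamic scheduling rules (MDSRs) as shop-floor conditions change. As a rough illustration of that general idea only, and not the authors' implementation, the sketch below shows a tabular Q-learning agent choosing a dispatching rule from a coarsely discretized shop-floor state; the rule set, state features, reward definition, and hyperparameters are all assumptions made for the example.

import random
from collections import defaultdict

# Illustrative set of candidate dispatching rules (assumed, not from the paper).
DISPATCH_RULES = ["SPT", "EDD", "FIFO", "CR"]

class RuleSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        # Q-table: (state, rule index) -> estimated long-run value
        self.q = defaultdict(float)

    def select_rule(self, state):
        """Epsilon-greedy choice of a dispatching rule for the given state."""
        if random.random() < self.epsilon:
            return random.randrange(len(DISPATCH_RULES))
        return max(range(len(DISPATCH_RULES)), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """One-step Q-learning update after observing the scheduling outcome."""
        best_next = max(self.q[(next_state, a)] for a in range(len(DISPATCH_RULES)))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def encode_state(queue_length, bottleneck_utilization):
    """Discretize assumed shop-floor indicators into a coarse state key."""
    return (min(queue_length // 5, 4), round(bottleneck_utilization, 1))

In a loop of this kind, the reward after each scheduling window could be the observed improvement in a chosen performance measure (for example, a reduction in mean flow time or tardiness), which is one common way such RL-based rule-selection schemes are driven; the paper's own KB generation and revision phases are not reproduced here.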
