Article

Multi-Agent Reinforcement Learning for Real-Time Dynamic Production Scheduling in a Robot Assembly Cell

Journal

IEEE Robotics and Automation Letters
Volume 7, Issue 3, Pages 7684-7691

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/LRA.2022.3184795

Keywords

Reinforcement learning; intelligent and flexible manufacturing; double DQN; flexible job shop scheduling problem (FJSP); multi-robot systems; mass personalisation

Abstract

As industry rapidly shifts towards mass personalisation, the need for a decentralised multi-agent system capable of dynamic flexible job shop scheduling (FJSP) is evident. Traditional heuristic and meta-heuristic scheduling methods cannot achieve satisfactory results and are limited to static environments. Recent Reinforcement Learning (RL) approaches that consider dynamic FJSP lack flexibility and autonomy, as they use a single-agent centralised model that assumes global observability. We therefore propose a Multi-Agent Reinforcement Learning (MARL) system for scheduling dynamically arriving assembly jobs in a robot assembly cell. We applied a Double DQN-based algorithm and proposed a generalised observation, action and reward design for the dynamic FJSP setting. After a centralised training phase, each agent (i.e., robot) in the assembly cell executes decentralised scheduling decisions based on local observations. Our solution demonstrated improved makespan optimisation compared with rule-based heuristic methods. We also reported the impact of each agent's observation size on optimisation performance.
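The abstract names a Double DQN-based algorithm as the core learning method. As a minimal illustrative sketch (not the authors' implementation; the function name and array layout are assumptions), the defining step of Double DQN is that the online network *selects* the greedy next action while the target network *evaluates* it, which reduces the overestimation bias of vanilla DQN:

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, gamma=0.99, done=None):
    """Compute Double DQN regression targets for a batch of transitions.

    rewards        : shape (B,)   immediate rewards
    next_q_online  : shape (B, A) Q-values of next states from the online net
    next_q_target  : shape (B, A) Q-values of next states from the target net
    done           : shape (B,)   1.0 where the episode terminated, else 0.0
    """
    # Action selection with the online network...
    best_actions = np.argmax(next_q_online, axis=1)
    # ...but value evaluation with the (periodically synced) target network.
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    if done is None:
        done = np.zeros_like(rewards)
    return rewards + gamma * (1.0 - done) * evaluated

# Example with a batch of two transitions:
r = np.array([1.0, 0.0])
qo = np.array([[0.2, 0.8], [0.5, 0.1]])   # online net picks actions 1 and 0
qt = np.array([[0.3, 0.4], [0.6, 0.2]])   # target net evaluates those actions
targets = double_dqn_targets(r, qo, qt, gamma=0.99)
```

In a centralised-training, decentralised-execution setup like the one described, each robot agent would fit its own Q-network to such targets from its local observations; the exact observation, action and reward design is specified in the paper itself.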

