4.7 Article

Risk-informed operation and maintenance of complex lifeline systems using parallelized multi-agent deep Q-network

Journal

RELIABILITY ENGINEERING & SYSTEM SAFETY
Volume 239, Article 109512

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.ress.2023.109512

Keywords

Deep reinforcement learning; Lifeline systems; Life-cycle cost; Markov decision process; Operation & maintenance; Parallel processing

Summary

A multi-agent deep reinforcement learning framework, called parallelized multi-agent deep Q-network (PM-DQN), is proposed to overcome the curse of dimensionality in complex systems. The method divides the system into subsystems, and each agent learns the operation and maintenance (O&M) policy of its subsystem. The learning processes run simultaneously in parallel units, and the trained policies are periodically synchronized to improve the master policy. Numerical examples demonstrate that the proposed method outperforms baseline policies.

Abstract

Lifeline systems such as transportation and water distribution networks may deteriorate with age, raising the risk of system failure or degradation. System-level sequential decision-making is therefore essential to address the problem cost-effectively while minimizing the potential loss. Researchers have proposed assessing the risk of lifeline systems with Markov decision processes (MDPs) to identify a risk-informed operation and maintenance (O&M) policy. In complex systems with many components, however, finding MDP solutions can become intractable because the state and action spaces grow exponentially with the number of components. This paper proposes a multi-agent deep reinforcement learning framework, termed parallelized multi-agent deep Q-network (PM-DQN), to overcome this curse of dimensionality. The proposed method takes a divide-and-conquer strategy: multiple subsystems are identified by community detection, and each agent learns the O&M policy of the corresponding subsystem. The agents establish policies that minimize the decentralized cost of their cluster unit, including the factorized cost. These learning processes occur simultaneously in several parallel units, and the trained policies are periodically synchronized with the best ones, thereby improving the master policy. Numerical examples demonstrate that the proposed method outperforms baseline policies, including conventional maintenance schemes and the subsystem-level optimal policy.
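To illustrate the parallel-units-with-synchronization idea described in the abstract, the following is a minimal sketch, not the authors' implementation: tabular Q-learning stands in for deep Q-networks, and the subsystem partition, deterioration transition, and cost model are hypothetical placeholders chosen for brevity.

```python
# Sketch of the PM-DQN idea: one agent per subsystem, several learning units in
# parallel, periodic synchronization of all units to the best-performing one.
# NOTE: tabular Q-learning replaces DQN; environment and costs are placeholders.
import numpy as np

N_SUBSYSTEMS = 3        # subsystems, e.g. identified by community detection
N_STATES = 5            # deterioration states per subsystem (placeholder)
N_ACTIONS = 3           # e.g. do-nothing / maintain / replace (placeholder)
N_UNITS = 4             # parallel learning units
SYNC_EVERY = 50         # episodes between policy synchronizations
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

rng = np.random.default_rng(0)

def step(state, action):
    """Toy subsystem transition: stronger actions recover condition at a price."""
    next_state = int(np.clip(state + rng.integers(0, 2) - action, 0, N_STATES - 1))
    cost = 1.0 * action + 5.0 * (next_state == N_STATES - 1)  # O&M cost + failure penalty
    return next_state, cost

def run_episode(q_tables, horizon=20):
    """Each agent controls its own subsystem; return the total system cost."""
    states = [0] * N_SUBSYSTEMS
    total = 0.0
    for _ in range(horizon):
        for k in range(N_SUBSYSTEMS):
            q, s = q_tables[k], states[k]
            a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(q[s].argmin())
            s2, c = step(s, a)
            # Q-learning update toward the minimum expected discounted cost
            q[s, a] += ALPHA * (c + GAMMA * q[s2].min() - q[s, a])
            states[k], total = s2, total + c
    return total

# One set of Q-tables (one per subsystem agent) for each parallel unit
units = [[np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_SUBSYSTEMS)]
         for _ in range(N_UNITS)]

for episode in range(1, 501):
    returns = [run_episode(q_tables) for q_tables in units]
    if episode % SYNC_EVERY == 0:
        # Synchronize: copy the best-performing unit's policies to all units
        best = int(np.argmin(returns))
        units = [[q.copy() for q in units[best]] for _ in range(N_UNITS)]

master_policy = [q.argmin(axis=1) for q in units[0]]  # greedy O&M action per state
print("Greedy O&M action per state, per subsystem:", master_policy)
```

In the paper the synchronized objects are trained deep Q-networks and the costs are decentralized, factorized life-cycle costs; the sketch only mirrors the structure of parallel training with periodic synchronization to the best unit.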
