Article

A sequential decision problem formulation and deep reinforcement learning solution of the optimization of O&M of cyber-physical energy systems (CPESs) for reliable and safe power production and supply

Journal

Reliability Engineering & System Safety
Volume 235, Article 109231

Publisher

Elsevier
DOI: 10.1016/j.ress.2023.109231

Keywords

Cyber-Physical Energy System (CPES); Operation & Maintenance (O&M); Deep Reinforcement Learning (DRL); Nuclear Power Plant (NPP); Optimization; Advanced Lead-cooled Fast Reactor European Demonstrator (ALFRED)


This paper discusses O&M strategies for the reliable and safe production and supply of CPESs, considering the uncertainty in energy demand and supply due to renewable energy sources and the need to avoid severe accidents for safety reasons. A Deep Reinforcement Learning approach is developed to search for the best strategy, taking into account the health conditions and remaining useful life of system components, and possible accident scenarios. The approach integrates Proximal Policy Optimization and Imitation Learning, and incorporates a CPES model with a component RUL estimator and a failure process model. An application to the ALFRED reactor demonstrates that the optimal solution found by DRL outperforms state-of-the-art O&M policies.
The Operation & Maintenance (O&M) of Cyber-Physical Energy Systems (CPESs) is driven by reliable and safe production and supply, which must be flexible enough to respond to uncertainty in both energy demand and supply, the latter arising from the stochasticity of Renewable Energy Sources (RESs); at the same time, accidents with severe consequences must be avoided for safety reasons. In this paper, we consider O&M strategies for reliable and safe CPES production and supply, and develop a Deep Reinforcement Learning (DRL) approach to search for the best strategy, considering the health conditions of the system components, their Remaining Useful Life (RUL), and possible accident scenarios. The approach integrates Proximal Policy Optimization (PPO) and Imitation Learning (IL) for training the RL agent, with a CPES model that embeds the components' RUL estimator and their failure process model. The novelty of the work lies in i) incorporating the production plan into O&M decisions, so that maintenance is implemented and operation is carried out flexibly; and ii) embedding the reliability model into the CPES model, so as to recognize safety-related components and set proper maintenance RUL thresholds. An application to the Advanced Lead-cooled Fast Reactor European Demonstrator (ALFRED) is provided. The optimal solution found by DRL is shown to outperform those provided by state-of-the-art O&M policies.
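
To make the sequential decision formulation concrete, the sketch below casts a highly simplified CPES O&M problem as a reinforcement learning environment and trains a PPO agent on it (the IL warm-start used in the paper is omitted). This is a minimal illustration under stated assumptions, not the authors' implementation: the environment class CpesOandMEnv is hypothetical, and the component count, degradation dynamics, costs, demand profile, RUL limit, and failure penalty are all assumed values rather than numbers from the paper. It uses the gymnasium and stable-baselines3 libraries.

```python
# Toy sketch of a CPES O&M decision problem as an RL environment (assumed
# dynamics and costs, not the paper's ALFRED simulator).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class CpesOandMEnv(gym.Env):
    """Observe component RULs and demand; at each epoch either keep
    producing or take one component down for preventive maintenance."""

    N_COMPONENTS = 3    # assumed number of safety-related components
    MAX_RUL = 100.0     # assumed as-good-as-new RUL (arbitrary units)
    HORIZON = 200       # decision epochs per episode

    def __init__(self):
        # Observation: normalized RUL of each component + current demand level.
        self.observation_space = spaces.Box(
            low=0.0, high=1.0, shape=(self.N_COMPONENTS + 1,), dtype=np.float32)
        # Action 0 = keep producing; action i = maintain component i-1.
        self.action_space = spaces.Discrete(self.N_COMPONENTS + 1)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.rul = np.full(self.N_COMPONENTS, self.MAX_RUL)
        self.t = 0
        return self._obs(), {}

    def _obs(self):
        demand = 0.5 + 0.5 * np.sin(2 * np.pi * self.t / 50)  # cyclic demand proxy
        rul_norm = np.clip(self.rul, 0.0, self.MAX_RUL) / self.MAX_RUL
        return np.append(rul_norm, demand).astype(np.float32)

    def step(self, action):
        self.t += 1
        demand = float(self._obs()[-1])
        if action == 0:
            # Operate: earn revenue proportional to met demand; components degrade.
            reward = demand
            self.rul -= self.np_random.uniform(0.5, 2.0, self.N_COMPONENTS)
        else:
            # Preventive maintenance: pay a cost, forgo production, restore RUL.
            reward = -0.5
            self.rul[action - 1] = self.MAX_RUL
        # A component running out of RUL stands in for an accident scenario:
        # large safety penalty and episode termination.
        failed = bool(np.any(self.rul <= 0.0))
        if failed:
            reward -= 50.0
        return self._obs(), reward, failed, self.t >= self.HORIZON, {}


if __name__ == "__main__":
    env = CpesOandMEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)  # PPO half of the paper's PPO + IL scheme
```

Even in this toy setting, the agent faces the trade-off the paper optimizes: maintaining a component forgoes production revenue, while operating past a safety-related component's RUL triggers the accident penalty, so the learned policy implicitly sets the kind of maintenance RUL thresholds discussed in the abstract.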


