Article

Predictive control of power demand peak regulation based on deep reinforcement learning

Journal

JOURNAL OF BUILDING ENGINEERING
Volume 75

Publisher

ELSEVIER
DOI: 10.1016/j.jobe.2023.106992

Keywords

Deep reinforcement learning; Electricity demand; Predictive control


As urbanization accelerates, effectively managing peak electricity demand becomes increasingly critical to avoid the power outages and system overloads that harm both buildings and power systems. To tackle this challenge, we propose a novel model-free predictive control method, Dynamic Dual Predictive Control-Deep Deterministic Policy Gradient (D2PC-DDPG), built on a deep reinforcement learning framework. Our method employs the Deep Forest-Deep Q-Network (DF-DQN) model to predict electricity demand across multiple buildings and, conditioned on the DF-DQN forecasts, applies the Deep Deterministic Policy Gradient (DDPG) algorithm to coordinate control of energy storage systems, including hot and chilled water storage tanks, across those buildings. Experimental results show that the DF-DQN model outperforms traditional machine learning, deep learning, and reinforcement learning baselines on prediction-accuracy metrics, including mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Moreover, D2PC-DDPG achieves better control performance and peak load reduction than other reinforcement learning methods and a rule-based control (RBC) method, reducing peak load by 27.1% and 21.4% over a two-week period in the same regions. To demonstrate generalizability, we tested D2PC-DDPG in five different regions against the RBC method; it achieved average reductions of 16.6%, 7%, 9.2%, and 11% in ramping, 1-load_factor, average_daily_peak, and peak_demand, respectively. These findings demonstrate the effectiveness and practicality of the proposed method for critical energy management problems in diverse urban environments.
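The abstract reports forecast accuracy with MAE, MAPE, and RMSE but does not spell out the formulas. A minimal sketch using their standard definitions (a NumPy implementation chosen purely for illustration; it assumes strictly positive demand values so the MAPE denominator is safe):

```python
import numpy as np

def prediction_errors(y_true, y_pred):
    """Standard MAE, MAPE (%), and RMSE for demand forecasts.

    Assumes y_true contains no zeros; otherwise MAPE is undefined.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, mape, rmse
```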
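The control layer is built on DDPG, the standard actor-critic algorithm for continuous action spaces. The D2PC-DDPG specifics (state design, reward shaping, and how the DF-DQN forecast enters the state) are not given in the abstract, so the following PyTorch sketch shows only the generic DDPG update the method builds on. `Actor`, `Critic`, `state_dim`, and `action_dim` are placeholder names; in this setting the action vector would plausibly be the charge/discharge rates of each building's hot and chilled water tanks.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: maps state to a continuous action in [-1, 1]."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q-function: maps a (state, action) pair to a scalar value."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    """One generic DDPG step on a replay-buffer batch of tensors."""
    state, action, reward, next_state, done = batch
    # Critic: regress Q(s, a) toward the bootstrapped one-step target.
    with torch.no_grad():
        target_q = reward + gamma * (1.0 - done) * target_critic(
            next_state, target_actor(next_state))
    critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of Q(s, pi(s)).
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak-average the target networks toward the online networks.
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1.0 - tau).add_(tau * p.data)
```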
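The four evaluation metrics named in the abstract (ramping, 1-load_factor, average_daily_peak, peak_demand) match the key performance indicators popularized by the CityLearn benchmark. Assuming they carry those usual definitions (an assumption, since the abstract only names them), they can be computed from a district's net-demand time series as follows:

```python
import numpy as np

def peak_metrics(net_demand, steps_per_day=24):
    """District-level peak KPIs, assuming CityLearn-style definitions."""
    e = np.asarray(net_demand, dtype=float)
    # Total absolute change in demand between consecutive time steps.
    ramping = np.sum(np.abs(np.diff(e)))
    # 1 - (average load / peak load); 0 means a perfectly flat profile.
    one_minus_load_factor = 1.0 - e.mean() / e.max()
    # Mean of each day's maximum demand (truncate trailing partial day).
    full = len(e) // steps_per_day * steps_per_day
    average_daily_peak = e[:full].reshape(-1, steps_per_day).max(axis=1).mean()
    # Single highest demand over the whole evaluation period.
    peak_demand = e.max()
    return ramping, one_minus_load_factor, average_daily_peak, peak_demand
```

Lower is better for all four, which is why the abstract reports them as percentage reductions relative to the RBC baseline.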


