Article

Predictive control of power demand peak regulation based on deep reinforcement learning

Journal

JOURNAL OF BUILDING ENGINEERING
Volume 75, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.jobe.2023.106992

Keywords

Deep reinforcement learning; Electricity demand; Predictive control

Abstract

As urbanization continues to accelerate, effectively managing peak electricity demand becomes increasingly critical to avoid power outages and system overloads that can negatively impact both buildings and power systems. To tackle this challenge, we propose a novel model-free predictive control method, Dynamic Dual Predictive Control-Deep Deterministic Policy Gradient (D2PC-DDPG), built on a deep reinforcement learning framework. The method employs a Deep Forest-Deep Q-Network (DF-DQN) model to predict electricity demand across multiple buildings and, based on the DF-DQN output, applies the Deep Deterministic Policy Gradient (DDPG) algorithm to optimize the coordinated control of energy storage systems, including hot- and chilled-water storage tanks in multiple buildings. Experimental results show that the proposed DF-DQN model outperforms traditional machine learning, deep learning, and reinforcement learning methods in prediction accuracy, as measured by mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Moreover, the D2PC-DDPG method achieves better control performance and greater peak load reduction than other reinforcement learning methods and a rule-based control (RBC) method, reducing peak load by 27.1% and 21.4% over a two-week period in the same regions. To demonstrate its generalizability, we tested D2PC-DDPG in five different regions against the RBC baseline; it achieved average reductions of 16.6%, 7%, 9.2%, and 11% in ramping, 1-load_factor, average_daily_peak, and peak_demand, respectively. These findings demonstrate the effectiveness and practicality of the proposed method for addressing critical energy management problems in diverse urban environments.
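
The abstract describes a two-stage pipeline: a DF-DQN forecaster predicts building-level electricity demand, and a DDPG agent uses that forecast to set charge/discharge actions for hot- and chilled-water storage tanks. As a reading aid, here is a minimal PyTorch sketch of the DDPG control stage; the network sizes, the state layout (forecast concatenated to building observations), and the reward handling are our illustrative assumptions, not the paper's implementation, and the names Actor, Critic, and ddpg_update are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Maps (building state + demand forecast) to storage actions in [-1, 1]:
    negative = discharge, positive = charge, one dimension per tank."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )
    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Q(s, a) for the joint storage action across buildings."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def ddpg_update(actor, critic, actor_t, critic_t, opt_a, opt_c,
                batch, gamma=0.99, tau=0.005):
    """One DDPG step on a replay batch (obs, act, rew, next_obs);
    rew has shape (batch, 1). Terminal masking is omitted for brevity,
    treating building control as a continuing task."""
    obs, act, rew, next_obs = batch
    with torch.no_grad():
        target_q = rew + gamma * critic_t(next_obs, actor_t(next_obs))
    critic_loss = F.mse_loss(critic(obs, act), target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Deterministic policy gradient: ascend Q along the actor's action.
    actor_loss = -critic(obs, actor(obs)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Polyak averaging of the target networks.
    for t, s in zip(actor_t.parameters(), actor.parameters()):
        t.data.mul_(1 - tau).add_(tau * s.data)
    for t, s in zip(critic_t.parameters(), critic.parameters()):
        t.data.mul_(1 - tau).add_(tau * s.data)
```

The reported numbers combine forecast errors (MAE, MAPE, RMSE) with district-level peak metrics (ramping, 1-load_factor, average_daily_peak, peak_demand). The sketch below computes them under common definitions; the paper's exact windows and normalization may differ, so the hourly resolution and daily windows here are assumptions.

```python
import numpy as np

def prediction_errors(y_true, y_pred):
    """Standard regression errors for the demand forecaster."""
    err = np.asarray(y_pred, float) - np.asarray(y_true, float)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100.0  # assumes demand > 0
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, mape, rmse

def district_metrics(net_load, steps_per_day=24):
    """Peak-regulation metrics under common definitions (assumed, not
    taken from the paper); net_load is aggregate demand at a fixed
    (here hourly) resolution."""
    load = np.asarray(net_load, dtype=float)
    n = load.size // steps_per_day * steps_per_day
    days = load[:n].reshape(-1, steps_per_day)       # one row per day
    ramping = np.sum(np.abs(np.diff(load)))          # total step-to-step change
    one_minus_lf = np.mean(1.0 - days.mean(axis=1) / days.max(axis=1))
    avg_daily_peak = np.mean(days.max(axis=1))
    peak_demand = np.max(load)
    return ramping, one_minus_lf, avg_daily_peak, peak_demand
```

In this reading, the percentage improvements quoted in the abstract would be computed by evaluating these metrics on the controlled and baseline (RBC) load profiles and taking the relative difference.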
