Article

Rolling horizon wind-thermal unit commitment optimization based on deep reinforcement learning

Journal

APPLIED INTELLIGENCE
Volume 53, Issue 16, Pages 19591-19609

Publisher

SPRINGER
DOI: 10.1007/s10489-023-04489-5

Keywords

Unit commitment; Rolling optimization; Deep reinforcement learning; Wind power; Stochastic uncertainty

Abstract

The growing penetration of renewable energy has brought significant challenges for modern power system operation. Academic research and industrial practice show that periodically adjusting unit commitment (UC) schedules according to new forecasts of renewable power is a promising way to improve system stability and economy; however, doing so greatly increases the computational burden on solution methods. In this paper, a deep reinforcement learning (DRL) method is proposed to obtain timely and reliable solutions for rolling-horizon UC (RHUC). First, based on historical data and day-ahead point forecasts, a data-driven method is designed to construct typical wind power scenarios, which serve as components of the DRL state space. Second, a rolling mechanism is proposed to dynamically update the state space with real-time wind power data. Third, unlike existing reinforcement learning-based UC methods that discretize the continuous outputs of generators, all variables in RHUC are treated as continuous, and a set of updating rules is defined to keep the model realistic. A DRL algorithm, the twin delayed deep deterministic policy gradient (TD3), can therefore be applied to solve the problem effectively. Finally, case studies on different test systems demonstrate the efficiency of the proposed method: according to the experimental results, the algorithm obtains high-quality solutions in considerably less time than traditional methods, reducing power system operation cost by at least 1.1%.
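The rolling mechanism described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the horizon length, unit capacities, demand, and the pro-rata dispatch policy below are all illustrative assumptions, and the `policy` function stands in for the trained TD3 actor, which would map the forecast-window state to continuous dispatch actions.

```python
import numpy as np

# Illustrative sketch of rolling-horizon dispatch. All parameters are
# assumptions for demonstration, not values from the paper.
HORIZON = 6                               # look-ahead periods in the window
P_MAX = np.array([100.0, 80.0, 50.0])     # thermal unit capacities (MW)
DEMAND = 150.0                            # constant system demand (MW)

def roll_state(window, new_obs):
    """Rolling update: drop the oldest forecast value and append the
    newest real-time wind observation, keeping the window length fixed."""
    return np.concatenate([window[1:], [new_obs]])

def policy(window):
    """Placeholder continuous policy: split the residual demand across
    units pro rata to capacity. A trained TD3 actor would replace this,
    mapping the state window to continuous generator outputs."""
    residual = max(DEMAND - window[0], 0.0)   # demand net of current wind
    dispatch = P_MAX / P_MAX.sum() * residual
    return np.minimum(dispatch, P_MAX)        # respect capacity limits

rng = np.random.default_rng(0)
window = rng.uniform(20.0, 60.0, size=HORIZON)  # day-ahead forecast window
for t in range(4):
    dispatch = policy(window)                    # continuous action
    realized = window[0] + rng.normal(0.0, 5.0)  # simulated forecast error
    window = roll_state(window, realized)        # refresh the state space
```

The key point is that the state fed to the policy is always the most recent forecast window, so each re-dispatch reflects the latest real-time wind data rather than a stale day-ahead plan.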
