Article

Rolling horizon wind-thermal unit commitment optimization based on deep reinforcement learning

Journal

APPLIED INTELLIGENCE
Volume 53, Issue 16, Pages 19591-19609

Publisher

SPRINGER
DOI: 10.1007/s10489-023-04489-5

Keywords

Unit commitment; Rolling optimization; Deep reinforcement learning; Wind power; Stochastic uncertainty


The growing penetration of renewable energy has brought significant challenges to modern power system operation. Academic research and industrial practice show that periodically adjusting unit commitment (UC) scheduling according to new forecasts of renewable power is a promising way to improve system stability and economy; however, this greatly increases the computational burden on solution methods. In this paper, a deep reinforcement learning (DRL) method is proposed to obtain timely and reliable solutions for rolling-horizon UC (RHUC). First, based on historical data and day-ahead point forecasting, a data-driven method is designed to construct typical wind power scenarios, which are treated as components of the DRL state space. Second, a rolling mechanism is proposed to dynamically update the state space based on real-time wind power data. Third, unlike existing reinforcement learning-based UC solution methods that discretize the continuous generator outputs, all the variables in RHUC are treated as continuous. Additionally, a series of update rules is defined to keep the model realistic. A DRL algorithm, the twin delayed deep deterministic policy gradient (TD3), can thus be used to solve the problem effectively. Finally, several case studies are conducted on different test systems to demonstrate the efficiency of the proposed method. According to the experimental results, the proposed algorithm obtains high-quality solutions in considerably less time than traditional methods, reducing power system operation cost by at least 1.1%.
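The rolling mechanism described in the abstract can be illustrated with a minimal sketch: at each dispatch interval, the oldest forecast period leaves the look-ahead window and the newest real-time wind forecast enters, so the agent always observes a fixed-length horizon. This is an illustrative assumption about how such a buffer might work, not code from the paper; the class name `RollingState` and all parameters are hypothetical.

```python
from collections import deque


class RollingState:
    """Hypothetical rolling-horizon state buffer (illustrative only).

    Holds a fixed-length window of wind power forecasts; each time a
    dispatch interval elapses, the oldest period is dropped and the
    newest real-time forecast is appended.
    """

    def __init__(self, initial_forecast, horizon=4):
        if len(initial_forecast) != horizon:
            raise ValueError("initial forecast must cover the full horizon")
        # a full deque with maxlen discards its oldest entry on append
        self.window = deque(initial_forecast, maxlen=horizon)

    def roll(self, new_forecast):
        """Advance one interval: drop the elapsed period, add the new one."""
        self.window.append(new_forecast)
        return self.state()

    def state(self):
        """Return the window as a flat list, e.g. to feed a policy network."""
        return list(self.window)


# Example: a 4-period horizon of wind forecasts (MW), rolled forward once.
rs = RollingState([100.0, 95.0, 90.0, 85.0], horizon=4)
print(rs.roll(80.0))  # → [95.0, 90.0, 85.0, 80.0]
```

In a full TD3 setup, this window would form part of the state vector alongside unit statuses, so the policy is re-queried every interval as forecasts are refreshed.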

