4.7 Article

Deep Reinforcement Learning Based Bi-layer Optimal Scheduling for Microgrids Considering Flexible Load Control

Journal

CSEE Journal of Power and Energy Systems
Volume 9, Issue 3, Pages 949-962

Publisher

China Electric Power Research Institute
DOI: 10.17775/CSEEJPES.2021.06120

Keywords

Microgrids; Scheduling; Renewable energy sources; Optimization; Load modeling; Costs; Reinforcement learning; Bi-layer optimal scheduling; demand response; deep reinforcement learning; microgrid scheduling

Abstract

In this paper, a bi-layer scheduling method for microgrids based on deep reinforcement learning is proposed to achieve economic and environmentally friendly operation. First, considering the uncertainty of renewable energy, a framework of day-ahead and intra-day scheduling is established, and the implementation scheme for both price-based and incentive-based demand response (DR) for flexible loads is determined. Then, comprehensively considering the operating characteristics of the microgrid on the day-ahead and intra-day time scales, a bi-layer scheduling model of the microgrid is established. In terms of algorithms, since day-ahead scheduling has no strict requirement on dispatching time, the particle swarm optimization (PSO) algorithm is used to optimize the time-of-use electricity price and the distributed power output for the next day. Given the environmental fluctuations and the need for fast intra-day online scheduling, a deep reinforcement learning (DRL) algorithm is adopted for that layer. Finally, the rationality and effectiveness of the proposed scheduling method are verified using data from an actual microgrid. The results show that the proposed bi-layer scheduling based on the PSO and DRL algorithms optimizes both scheduling cost and calculation speed, and is suitable for microgrid online scheduling.
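
For readers who want a concrete picture of the two-layer structure described in the abstract, the sketch below pairs a plain particle swarm optimizer for the day-ahead layer with a tabular Q-learning agent standing in for the paper's deep reinforcement learning agent in the intra-day layer. The load forecast, cost function, generator limit, action set, and reward are illustrative assumptions rather than the authors' formulation, and the paper's time-of-use pricing and demand-response decisions are omitted.

```python
# Minimal bi-layer scheduling sketch: PSO for the day-ahead plan, a small
# Q-learning agent for intra-day corrections. All data, costs, and limits
# below are illustrative placeholders, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
T = 24                                                              # hourly horizon
net_load_forecast = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))  # kW, toy forecast
P_MAX = 80.0                                                        # assumed unit limit

def day_ahead_cost(schedule):
    """Toy objective: quadratic fuel cost plus penalty on forecast mismatch."""
    fuel = np.sum(0.02 * schedule ** 2 + 0.5 * schedule)
    imbalance = np.sum(np.abs(schedule - net_load_forecast))
    return fuel + imbalance

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization over the box [0, P_MAX]^dim."""
    x = rng.uniform(0, P_MAX, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, P_MAX)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g

# Layer 1: day-ahead schedule from PSO.
day_ahead_plan = pso(day_ahead_cost, T)

# Layer 2: intra-day correction via tabular Q-learning.
# State: sign of forecast error (-1, 0, +1); action: setpoint tweak in kW.
actions = np.array([-5.0, 0.0, 5.0])
Q = np.zeros((3, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(500):
    actual_load = net_load_forecast + rng.normal(0, 5, T)   # realized load
    for t in range(T):
        err = actual_load[t] - day_ahead_plan[t]
        s = int(np.sign(err)) + 1
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        adjusted = day_ahead_plan[t] + actions[a]
        r = -abs(adjusted - actual_load[t])                  # penalize residual imbalance
        t2 = min(t + 1, T - 1)
        s_next = int(np.sign(actual_load[t2] - day_ahead_plan[t2])) + 1
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

print("Day-ahead plan (kW):", np.round(day_ahead_plan, 1))
print("Intra-day tweak per error sign:", actions[np.argmax(Q, axis=1)])
```

In the paper itself the intra-day layer uses a deep reinforcement learning agent rather than a lookup table; the sketch is only meant to show how a day-ahead plan produced offline and an online corrective policy fit together.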
