Journal
JOURNAL OF MODERN POWER SYSTEMS AND CLEAN ENERGY
Volume 9, Issue 5, Pages 1101-1110
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.35833/MPCE.2020.000557
Keywords
Uncertainty; Optimization; Wind turbines; Real-time systems; Load flow; Reinforcement learning; Programming; Deep reinforcement learning (DRL); optimal power flow (OPF); wind turbine; distribution network
Abstract
This study proposes a deep reinforcement learning (DRL) based approach to analyze the optimal power flow (OPF) of distribution networks (DNs) embedded with renewable energy and storage devices. First, the OPF of the DN is formulated as a stochastic nonlinear programming problem. Then, the multi-period nonlinear programming decision problem is formulated as a Markov decision process (MDP) composed of multiple single-time-step sub-problems. Subsequently, a state-of-the-art DRL algorithm, proximal policy optimization (PPO), is used to solve the MDP sequentially while accounting for the impact of current decisions on the future. Neural networks extract operation knowledge from historical data offline and provide online decisions according to the real-time state of the DN. The proposed approach fully exploits the historical data and reduces the influence of prediction error on the optimization results. The proposed real-time control strategy can provide more flexible decisions and achieve better performance than pre-determined strategies. Comparative results demonstrate the effectiveness of the proposed approach.
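The decomposition the abstract describes, recasting a multi-period OPF decision problem as an MDP of single-time-step sub-problems, can be sketched as a toy environment. This is a minimal illustrative sketch only, not the paper's actual model: the class name `DNStorageEnv`, the per-unit load/wind distributions, the storage limits, and the cost function are all assumptions made for illustration. A trained PPO policy would replace the placeholder action in the rollout loop.

```python
# Hypothetical sketch: a multi-period OPF decision problem as an MDP of
# single-time-step sub-problems. State = (time, storage SoC, load, wind);
# action = storage charge/discharge; reward = negative operation cost.
# All quantities, names, and distributions are illustrative assumptions.
import random


class DNStorageEnv:
    """Toy DN environment: one storage device, stochastic load and wind,
    per-step grid-import cost returned as a negative reward."""

    def __init__(self, horizon=24, soc_max=1.0, p_max=0.25, seed=0):
        self.horizon = horizon   # number of single-time-step sub-problems
        self.soc_max = soc_max   # storage capacity (p.u.)
        self.p_max = p_max       # max charge/discharge per step (p.u.)
        self.rng = random.Random(seed)

    def _sample_exogenous(self):
        # Stochastic load and wind output (p.u.) -- assumed distributions.
        return 0.6 + 0.3 * self.rng.random(), 0.4 * self.rng.random()

    def _state(self):
        return (self.t, self.soc, self.load, self.wind)

    def reset(self):
        self.t = 0
        self.soc = 0.5 * self.soc_max
        self.load, self.wind = self._sample_exogenous()
        return self._state()

    def step(self, action):
        # action in [-1, 1]: fraction of p_max to charge (+) / discharge (-).
        p = max(-1.0, min(1.0, action)) * self.p_max
        p = max(-self.soc, min(self.soc_max - self.soc, p))  # SoC limits
        self.soc += p
        grid_import = max(0.0, self.load - self.wind + p)    # power bought
        reward = -grid_import                                # cost to minimize
        self.t += 1
        self.load, self.wind = self._sample_exogenous()      # next-step state
        done = self.t >= self.horizon
        return self._state(), reward, done


# Rollout with a placeholder random policy; PPO would learn this mapping
# from state to action instead.
env = DNStorageEnv()
state = env.reset()
total, done = 0.0, False
while not done:
    action = env.rng.uniform(-1, 1)  # stand-in for a trained policy
    state, reward, done = env.step(action)
    total += reward
print(round(total, 3))
```

Each call to `step` is one single-time-step sub-problem; the policy's value estimate is what carries the "impact on the future" across steps.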