Journal
IEEE TRANSACTIONS ON POWER SYSTEMS
Volume 35, Issue 4, Pages 3270-3273
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TPWRS.2020.2987292
Keywords
Real-time optimal power flow (RT-OPF); Lagrangian-based deep reinforcement learning; near-constraint continuous control
Funding
- Ministry of Education (MOE), Republic of Singapore [AcRF TIER 1 2019-T1-001-069 (RG75/19)]
- Nanyang Assistant Professorship from Nanyang Technological University, Singapore
High penetration of intermittent renewable energy sources has introduced significant uncertainty and variability into modern power systems. To respond rapidly and economically to changes in the power system operating state, this letter proposes a real-time optimal power flow (RT-OPF) approach using Lagrangian-based deep reinforcement learning (DRL) in the continuous action domain. A DRL agent that determines RT-OPF decisions is constructed and optimized using the deep deterministic policy gradient. The DRL action-value function is designed to simultaneously model the RT-OPF objective and constraints. Instead of using a critic network, the deterministic gradient is derived analytically. The proposed method is tested on the IEEE 118-bus system. Compared with state-of-the-art methods, the proposed method achieves high solution optimality and constraint compliance in real time.
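The core idea the abstract describes can be illustrated in miniature: fold the constraint penalty into a single Lagrangian action-value and descend its closed-form gradient with respect to the action, rather than learning a critic network. The sketch below is a hypothetical one-dimensional toy, not the paper's method; the cost function, constraint limit, and fixed multiplier are all illustrative assumptions.

```python
# Illustrative sketch: Lagrangian action-value with an analytic gradient.
# All constants and functions here are made up for demonstration; the
# actual letter operates on OPF variables of the IEEE 118-bus system.

COST_COEFF = 2.0      # quadratic cost coefficient (assumed)
ACTION_LIMIT = 1.5    # constraint: a <= ACTION_LIMIT (assumed)
LAMBDA = 10.0         # Lagrange multiplier, held fixed for simplicity

def lagrangian_value(a):
    """Action-value combining the objective and the constraint penalty."""
    cost = COST_COEFF * (a - 2.0) ** 2        # objective: minimum at a = 2.0
    violation = max(0.0, a - ACTION_LIMIT)    # constraint violation, if any
    return cost + LAMBDA * violation

def analytic_gradient(a):
    """Closed-form gradient of the Lagrangian w.r.t. the action,
    standing in for a learned critic network."""
    grad_cost = 2.0 * COST_COEFF * (a - 2.0)
    grad_violation = LAMBDA if a > ACTION_LIMIT else 0.0
    return grad_cost + grad_violation

# Deterministic-policy-gradient-style update: descend the analytic gradient.
a = 0.0
for _ in range(500):
    a -= 0.01 * analytic_gradient(a)

# The iterate settles near the constraint boundary (a ~ 1.5), because the
# unconstrained minimum (a = 2.0) would violate the constraint.
```

Because the unconstrained optimum violates the limit, the penalty term pulls the action back to the feasible boundary, mimicking the "near-constraint" operation named in the keywords.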