Article

Real-Time Optimal Power Flow: A Lagrangian Based Deep Reinforcement Learning Approach

Journal

IEEE TRANSACTIONS ON POWER SYSTEMS
Volume 35, Issue 4, Pages 3270-3273

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TPWRS.2020.2987292

Keywords

Real-time optimal power flow (RT-OPF); Lagrangian-based deep reinforcement learning; near-constraint continuous control

Funding

  1. Ministry of Education (MOE), Republic of Singapore [AcRF TIER 1 2019-T1-001-069 (RG75/19)]
  2. Nanyang Assistant Professorship from Nanyang Technological University, Singapore

High penetration of intermittent renewable energy sources has introduced significant uncertainty and variability into modern power systems. To respond rapidly and economically to changes in the power system operating state, this letter proposes a real-time optimal power flow (RT-OPF) approach using Lagrangian-based deep reinforcement learning (DRL) in the continuous action domain. A DRL agent that determines RT-OPF decisions is constructed and optimized using the deep deterministic policy gradient. The DRL action-value function is designed to model the RT-OPF objective and constraints simultaneously. Instead of using a critic network, the deterministic gradient is derived analytically. The proposed method is tested on the IEEE 118-bus system. Compared with state-of-the-art methods, it achieves high solution optimality and constraint compliance in real time.
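The core idea of the abstract can be illustrated with a toy sketch. This is a hypothetical minimal example, not the authors' implementation: a deterministic actor is trained by descending the *analytic* gradient of a Lagrangian action-value L(s, a) = cost(a) + λ·max(0, g(a)), which models the objective and a constraint simultaneously and replaces the learned critic of standard DDPG. The quadratic cost, the single linear "demand" constraint, the linear actor, and all dimensions here are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(a):                       # surrogate generation cost (assumed quadratic)
    return float(np.sum(a ** 2))

def grad_cost(a):
    return 2.0 * a

def g(a):                          # constraint g(a) <= 0, e.g. total output >= 1
    return 1.0 - float(np.sum(a))

def grad_g(a):
    return -np.ones_like(a)

lam = 10.0                         # fixed Lagrange multiplier (illustrative)
W = rng.normal(size=(2, 3))        # deterministic actor: a = W @ s + b
b = np.zeros(2)
lr = 0.01

def lagrangian(s, W, b):
    """Lagrangian action-value: objective plus penalized constraint violation."""
    a = W @ s + b
    return cost(a) + lam * max(0.0, g(a))

S_eval = rng.normal(size=(20, 3))  # fixed batch of sampled operating "states"
before = np.mean([lagrangian(s, W, b) for s in S_eval])

for _ in range(2000):
    s = rng.normal(size=3)         # sampled operating state
    a = W @ s + b
    # analytic action gradient of L(s, a): no critic network is needed
    dL_da = grad_cost(a) + lam * (1.0 if g(a) > 0 else 0.0) * grad_g(a)
    # deterministic policy gradient via the chain rule through a = W @ s + b
    W -= lr * np.outer(dL_da, s)
    b -= lr * dL_da

after = np.mean([lagrangian(s, W, b) for s in S_eval])
```

After training, the average Lagrangian over the evaluation batch should drop, reflecting joint progress on cost and constraint compliance; in the letter the same analytic-gradient idea is applied to the full AC power-flow objective and network constraints rather than this toy surrogate.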
