Journal
IEEE TRANSACTIONS ON SUSTAINABLE ENERGY
Volume 14, Issue 2, Pages 1088-1098
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSTE.2023.3245090
Keywords
Resilience; Reactive power; Optimization; Mathematical models; Voltage control; Power distribution; Load modeling; Distribution system resilience; deep reinforcement learning; hierarchical operation; soft actor-critic (SAC)
This paper proposes a hierarchical combination of deep reinforcement learning (DRL) and quadratic programming for distribution system restoration after major outages. In the proposed model, the optimal power dispatch of a collection of distributed energy resources, called integrated hybrid resources (IHRs), is determined by a DRL-trained controller, while a grid-level quadratic programming problem checks grid constraints and performs critical restoration operations. DRL is implemented using the Soft Actor-Critic (SAC) algorithm, which is shown to outperform the common Deep Deterministic Policy Gradient in continuous action spaces. Numerical studies, performed on the 123-bus test distribution system, demonstrate that the hierarchical combination of DRL and quadratic programming not only speeds up the local operation of multiple IHRs but also ensures that the network constraints are satisfied during the restoration operation.
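The two-layer idea in the abstract, a fast learned local dispatch followed by a grid-level quadratic program that enforces network constraints, can be sketched in miniature. This is an illustrative toy only: the paper's actual SAC controller, network model, and QP formulation are not reproduced here, and the single aggregate feeder-limit constraint, the function names, and all numbers are simplifying assumptions.

```python
import numpy as np

def local_drl_dispatch(unserved_load, p_max):
    """Stand-in for the SAC-trained local IHR controllers.

    A trained SAC actor would map each IHR's state to a continuous
    dispatch action; this placeholder simply offers to serve as much
    of the unserved load as each IHR's capacity allows.
    """
    return np.minimum(unserved_load, p_max)

def grid_level_qp(p_proposed, feeder_limit):
    """Stand-in for the grid-level quadratic program.

    With a single aggregate limit, the QP
        min ||p - p_proposed||^2  s.t.  sum(p) <= feeder_limit, p >= 0
    is a Euclidean projection: shift all components down by a common
    lam and clip at zero, choosing lam by bisection so the limit holds.
    """
    if p_proposed.sum() <= feeder_limit:
        return p_proposed  # proposal already feasible, accept as-is
    lo, hi = 0.0, p_proposed.max()
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if np.maximum(p_proposed - lam, 0.0).sum() > feeder_limit:
            lo = lam  # still over the limit, shift down more
        else:
            hi = lam
    return np.maximum(p_proposed - lam, 0.0)

unserved = np.array([0.8, 1.5, 0.6])   # MW of unserved load per IHR
p_max = np.array([1.0, 1.0, 1.0])      # MW dispatch capacity per IHR
p_local = local_drl_dispatch(unserved, p_max)       # fast local layer
p_final = grid_level_qp(p_local, feeder_limit=2.0)  # grid-level check
print(p_final, p_final.sum())
```

The curtailment here is the exact minimizer of the simple QP above, which mirrors the design intent: the learned layer reacts quickly, and the optimization layer only adjusts proposals that would violate a network constraint.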