Journal
IEEE SYSTEMS JOURNAL
Volume 17, Issue 3, Pages 4452-4463
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSYST.2023.3281524
Keywords
Congestion management; deep reinforcement learning; demand side management (DSM); distribution network
The high penetration of distributed energy resources and heavy flexible loads, such as electric vehicles, has changed the operating conditions of the distribution network. In particular, the adoption of green energy innovations has increased electricity consumption. Such an increase can result in thermal overloading when power flow exceeds a network asset's transfer capability, possibly damaging devices such as distribution transformers and feeders. Designing a congestion management scheme is very challenging given the uncertainty of flexible load consumption and electricity prices, and stochastic models for such loads may not be readily available in practice. In this article, a deep deterministic policy gradient (DDPG) reinforcement learning (RL) scheme is proposed to alleviate congestion. DDPG RL is a model-free technique that does not require explicit probabilistic models of the controllable loads to determine the change in electricity prices needed, in the form of tariffs and/or subsidies. The DDPG RL technique is compared with an existing model-based congestion management scheme on an IEEE 33-bus system, and the results demonstrate the superior performance of the proposed technique in terms of electricity cost and the peak-to-average ratio of the load profiles.
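The peak-to-average ratio (PAR) mentioned in the abstract is a standard flatness metric for load profiles: the closer it is to 1, the flatter the profile and the lower the congestion risk. A minimal sketch of how it is computed, using hypothetical 24-hour load profiles (the numbers below are illustrative only, not data from the paper):

```python
import numpy as np

def peak_to_average_ratio(load: np.ndarray) -> float:
    """PAR = max(load) / mean(load); a flatter profile gives a PAR closer to 1."""
    return float(np.max(load) / np.mean(load))

# Hypothetical hourly loads (kW) before and after price-based load shifting.
before = np.array([3, 3, 3, 3, 4, 5, 6, 7, 8, 8, 7, 6,
                   6, 6, 7, 8, 10, 12, 12, 10, 8, 6, 4, 3], dtype=float)
after = np.array([4, 4, 4, 4, 5, 5, 6, 7, 7, 7, 7, 6,
                  6, 6, 7, 7, 8, 9, 9, 8, 7, 6, 5, 4], dtype=float)

print(peak_to_average_ratio(before))  # higher: pronounced evening peak
print(peak_to_average_ratio(after))   # lower: demand shifted off-peak
```

A price-based demand-side scheme such as the one proposed aims to lower this ratio by shifting flexible consumption away from peak hours via tariffs and/or subsidies.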