Article

Target-Value-Competition-Based Multi-Agent Deep Reinforcement Learning Algorithm for Distributed Nonconvex Economic Dispatch

Journal

IEEE TRANSACTIONS ON POWER SYSTEMS
Volume 38, Issue 1, Pages 204-217

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TPWRS.2022.3159825

Keywords

Deep neural network; distributed economic dispatch; multi-agent deep reinforcement learning; nonconvex optimization

Abstract

With the continuing expansion of the power grid, economic dispatch problems have received considerable attention. A multi-agent coordinated deep reinforcement learning algorithm is proposed to solve distributed nonconvex economic dispatch problems. In the algorithm, agents run independent reinforcement learning algorithms and update their local Q-functions with a newly defined joint reward. A double network structure is adopted to approximate the Q-function, so that the offline-trained model can be used online to recommend power outputs for time-varying demands in real time. By introducing a reward network, a competition mechanism between the reward network and the target network is established to determine a progressively stable target value, which achieves coordination among agents and ensures that the losses of the Q-networks converge well. Theoretical analysis and case studies demonstrate the algorithm's advantages over existing approaches.
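The abstract does not spell out the exact update rule, so the following is a minimal, hypothetical PyTorch sketch of one plausible reading of the target-value competition: each agent keeps a Q-network, a target network, and a reward network, and the training target is the winner of a competition between the target network's bootstrap estimate and the reward network's estimate (here taken as the element-wise maximum). All names (MLP, competition_target, q_loss), the discrete action space, and the max-based competition rule are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a DQN-style update whose training target is chosen
# by a "competition" between a reward network and a target network.
# The max-based competition rule below is an assumption for illustration only.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small fully connected network used for the Q-, target-, and reward networks."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def competition_target(reward_net, target_net, joint_reward, next_state, gamma):
    """Candidate 1: bootstrap from the target network (standard DQN target).
    Candidate 2: an estimate produced by the reward network.
    The 'competition' here keeps the larger of the two candidates."""
    with torch.no_grad():
        bootstrap = joint_reward + gamma * target_net(next_state).max(dim=1).values
        reward_est = reward_net(next_state).max(dim=1).values
        return torch.maximum(bootstrap, reward_est)

def q_loss(q_net, reward_net, target_net, batch, gamma=0.99):
    """One agent's loss on a batch of (state, action, joint_reward, next_state)."""
    s, a, r, s_next = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    y = competition_target(reward_net, target_net, r, s_next, gamma)
    return nn.functional.mse_loss(q_sa, y)

# Example usage with random data (4-dimensional local observation and
# 5 discrete power-output levels per generator; both numbers are arbitrary).
if __name__ == "__main__":
    q_net, target_net, reward_net = MLP(4, 5), MLP(4, 5), MLP(4, 5)
    target_net.load_state_dict(q_net.state_dict())
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    batch = (torch.randn(32, 4), torch.randint(0, 5, (32,)),
             torch.randn(32), torch.randn(32, 4))
    loss = q_loss(q_net, reward_net, target_net, batch)
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
```

In this sketch the joint reward `r` is taken as given for each agent, mirroring the abstract's statement that agents update local Q-functions with a newly defined joint reward; how that reward is constructed from the agents' local costs and the demand balance is specified in the paper, not here.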
