Journal
IEEE TRANSACTIONS ON POWER SYSTEMS
Volume 36, Issue 1, Pages 521-524
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TPWRS.2020.3030164
Keywords
Deep reinforcement learning; soft actor critic; dynamic model parameter calibration; PMU; transient stability
Funding
- SGCC Science and Technology Program [5700-201958523A-0-0-00]
Abstract
Maintaining good quality of transient stability models for power system planning and operational analysis is of great importance. Identification and calibration of bad parameters using PMU measurements that work well for multiple events remains a challenging problem. In this letter, we present a novel parameter calibration method based on an off-policy, maximum-entropy deep reinforcement learning (DRL) algorithm, the soft actor critic (SAC), to automatically tune incorrect parameter sets considering multiple events simultaneously, which can save tremendous labor effort in maintaining model accuracy and complying with industry standards. The effectiveness of the proposed approach is verified through numerical experiments conducted on a realistic power plant model.
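The letter itself contains no code; the following is only a minimal sketch of the general idea it describes: an off-policy SAC agent adjusts suspect dynamic model parameters so that a single parameter set reproduces the PMU-measured responses of several recorded events. The sketch uses gymnasium and the SAC implementation from stable-baselines3; the simulate_response function, the CalibrationEnv class, the parameter bounds, and the synthetic event data are hypothetical placeholders, not the authors' setup.

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

def simulate_response(params, event_id):
    # Placeholder for a transient simulation of the plant model with the given
    # parameter set for one recorded disturbance; returns a simulated trajectory.
    rng = np.random.default_rng(event_id)
    return params.sum() + 0.01 * rng.standard_normal(100)

class CalibrationEnv(gym.Env):
    # Each step applies a bounded adjustment to the suspect parameters; the reward is
    # the negative mismatch between simulated and PMU-measured trajectories, averaged
    # over all events so that one parameter set must fit every event simultaneously.
    def __init__(self, measured, n_params=4, horizon=20):
        super().__init__()
        self.measured = measured  # dict: event_id -> measured PMU trajectory
        self.horizon = horizon
        self.action_space = spaces.Box(-0.05, 0.05, shape=(n_params,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_params,), dtype=np.float32)
        self.params = np.ones(n_params, dtype=np.float32)
        self.steps = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.params = np.ones_like(self.params)  # start from the (incorrect) default set
        self.steps = 0
        return self.params.copy(), {}

    def step(self, action):
        self.steps += 1
        self.params = self.params + action
        errors = [np.mean((simulate_response(self.params, e) - y) ** 2)
                  for e, y in self.measured.items()]
        reward = -float(np.mean(errors))  # good fit across all events -> high reward
        truncated = self.steps >= self.horizon
        return self.params.copy(), reward, False, truncated, {}

# Synthetic "measured" trajectories generated from a hidden true parameter set.
measured = {e: simulate_response(np.full(4, 1.2, dtype=np.float32), e) for e in range(3)}
agent = SAC("MlpPolicy", CalibrationEnv(measured), verbose=0)
agent.learn(total_timesteps=2_000)

In a realistic calibration workflow, simulate_response would wrap a power system dynamic simulator replaying each PMU-recorded event, and the action space would cover the machine, exciter, or governor parameters flagged as suspect rather than a generic vector.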