Article

Composite Experience Replay-Based Deep Reinforcement Learning With Application in Wind Farm Control

Journal

IEEE Transactions on Control Systems Technology

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/TCST.2021.3102476

Keywords

Wind farms; Training; Production; Heuristic algorithms; Artificial neural networks; Wind turbines; Wind speed; Intelligent control; model-free control; neural networks (NNs); reinforcement learning (RL); wind farm control

Funding

  1. U.K. Engineering and Physical Sciences Research Council (EPSRC) [EP/S001905/1], funding source: UKRI

Abstract

In this study, a deep reinforcement learning-based control approach with enhanced learning efficiency and effectiveness is proposed to optimize the total power production of wind farms. By introducing a novel composite experience replay strategy and modified importance-sampling weights, the method handles the challenges posed by strong wake effects among wind turbines and the stochastic nature of the environment, achieving higher rewards at a lower training cost than conventional deep RL-based control approaches.
In this article, a deep reinforcement learning (RL)-based control approach with enhanced learning efficiency and effectiveness is proposed to address the wind farm control problem. Specifically, a novel composite experience replay (CER) strategy is designed and embedded in the deep deterministic policy gradient (DDPG) algorithm. CER provides a new sampling scheme that mines the information of stored transitions in depth by making a tradeoff between rewards and temporal difference (TD) errors. Modified importance-sampling weights are introduced into the training process of the neural networks (NNs) to deal with the distribution mismatch induced by CER. The CER-DDPG approach is then applied to optimizing the total power production of wind farms. The main challenge of this control problem comes from the strong wake effects among wind turbines and the stochastic features of the environment, rendering it intractable for conventional control approaches. A reward regularization process is designed alongside CER-DDPG, employing an additional NN to handle the bias of rewards caused by stochastic wind speeds. Tests with a dynamic wind farm simulator (WFSim) show that the method achieves higher rewards at a lower training cost than conventional deep RL-based control approaches and can increase the total power generation of wind farms with different specifications.
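
The abstract only outlines the CER sampling scheme, so the following is a minimal Python sketch of a replay buffer in that spirit, not the paper's implementation. The convex tradeoff coefficient lam, the priority exponent alpha, the importance-sampling exponent beta, and the reward-shifting step are all hypothetical choices modeled on standard prioritized experience replay; the paper's actual weighting may differ.

    import numpy as np

    class CompositeReplayBuffer:
        """Sketch of a CER-style buffer: priorities mix |TD error| and reward."""

        def __init__(self, capacity, alpha=0.6, beta=0.4, lam=0.5, eps=1e-6):
            self.capacity = capacity
            self.alpha = alpha      # priority exponent (hypothetical)
            self.beta = beta        # importance-sampling exponent (hypothetical)
            self.lam = lam          # 1.0 = pure TD error, 0.0 = pure reward
            self.eps = eps
            self.storage = []
            self.td_prio = np.zeros(capacity)
            self.rew = np.zeros(capacity)
            self.pos = 0

        def add(self, transition, td_error):
            # transition = (state, action, reward, next_state, done)
            if len(self.storage) < self.capacity:
                self.storage.append(transition)
            else:
                self.storage[self.pos] = transition
            self.td_prio[self.pos] = abs(td_error) + self.eps
            self.rew[self.pos] = transition[2]
            self.pos = (self.pos + 1) % self.capacity

        def sample(self, batch_size):
            n = len(self.storage)
            td = self.td_prio[:n]
            rw = self.rew[:n] - self.rew[:n].min() + self.eps  # keep rewards positive
            # composite priority: convex tradeoff between TD error and reward
            p = (self.lam * td / td.sum() + (1.0 - self.lam) * rw / rw.sum()) ** self.alpha
            probs = p / p.sum()
            idx = np.random.choice(n, size=batch_size, p=probs)
            # importance-sampling weights correct the distribution mismatch
            # that non-uniform sampling introduces into NN training
            w = (n * probs[idx]) ** (-self.beta)
            w /= w.max()
            return [self.storage[i] for i in idx], idx, w

        def update_priorities(self, idx, td_errors):
            self.td_prio[idx] = np.abs(td_errors) + self.eps

In a DDPG training loop, the returned weights would scale each sample's critic TD loss before backpropagation, as in prioritized experience replay; the reward term biases sampling toward high-yield transitions.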
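The reward-regularization process is described only at a high level in the abstract. One plausible reading is a baseline network that predicts the reward component explained by the ambient wind speed alone, so the agent learns from the residual. The sketch below, including the RewardBaseline name, its one-input architecture, and the subtraction scheme, is purely an assumption for illustration.

    import torch
    import torch.nn as nn

    class RewardBaseline(nn.Module):
        """Hypothetical baseline NN: expected reward given ambient wind speed."""

        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(1, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, wind_speed):  # wind_speed: (batch, 1) tensor
            return self.net(wind_speed)

    def regularized_reward(baseline, wind_speed, reward):
        # Subtract the wind-driven reward estimate so the remaining signal
        # reflects the control action rather than stochastic wind conditions.
        with torch.no_grad():
            bias = baseline(wind_speed)
        return reward - bias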
