4.6 Article

Ensemble-based Deep Reinforcement Learning for robust cooperative wind farm control

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.ijepes.2022.108406

Keywords

Wind farm control; Deep reinforcement learning; Deep deterministic policy gradient; Learning cost; Ensemble learning

Funding

  1. Shenzhen Municipal Science and Technology innovation committee Basic Research Project, China [JCYJ20170410172224515]
  2. National Natural Science Foundation of China [42105145]
  3. Robotic Discipline Development Fund, China from the Shenzhen Government [2016-1418]


Abstract

The wake effect is the major obstacle to reaching maximum power generation in wind farms, since choosing a suitable wake model that balances computational cost and accuracy is a difficult task. Deep Reinforcement Learning (DRL) is a powerful data-driven method that can learn the optimal control policy without modeling the environment. However, the "trial and error" mechanism of DRL may incur high costs during the learning process. To address this issue, we propose an ensemble-based DRL wind farm control framework. Under this framework, a new algorithm called Actor Bagging Deep Deterministic Policy Gradient (AB-DDPG) is proposed, which combines an actor-network bagging method with Deep Deterministic Policy Gradient. The gradient of the proposed method is proven to be consistent with that of DDPG. Experimental results in WFSim show that AB-DDPG can learn the optimal control policy with lower learning cost and a more robust learning process.
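The abstract describes actor-network bagging: an ensemble of actor networks whose proposed actions are aggregated into a single control action. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of the bagging idea, with hypothetical linear "actors" standing in for the actor networks; all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

state_dim, action_dim, n_actors = 4, 2, 5

# Hypothetical ensemble of linear "actor networks", each just a weight matrix.
# In AB-DDPG these would be neural networks trained on bootstrapped experience.
actors = [rng.normal(size=(action_dim, state_dim)) for _ in range(n_actors)]

def bagged_action(state):
    """Bagging step: average the actions proposed by every actor in the ensemble."""
    actions = np.stack([W @ state for W in actors])  # shape (n_actors, action_dim)
    return actions.mean(axis=0)                      # aggregated control action

state = rng.normal(size=state_dim)
action = bagged_action(state)
print(action.shape)  # (2,)
```

Averaging over an ensemble reduces the variance of the control signal during exploration, which is one plausible reading of the "more robust learning process" claimed in the abstract.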


