4.3 Article

Proof-of-concept of a reinforcement learning framework for wind farm energy capture maximization in time-varying wind

Publisher

AIP Publishing
DOI: 10.1063/5.0043091

Funding

  1. Envision Energy [A16-0094-001]
  2. National Renewable Energy Laboratory
  3. U.S. Department of Energy (DOE) [DE-AC36-08GO28308]
  4. U.S. Department of Energy Office of Energy Efficiency and Renewable Energy Wind Energy Technologies Office

Abstract

In this paper, we present a proof-of-concept distributed reinforcement learning framework for wind farm energy capture maximization. The algorithm we propose uses Q-Learning in a wake-delayed wind farm environment and considers time-varying, though not yet fully turbulent, wind inflow conditions. These algorithm modifications are used to create the Gradient Approximation with Reinforcement Learning and Incremental Comparison (GARLIC) framework for optimizing wind farm energy capture in time-varying conditions, which is then compared to the FLOw Redirection and Induction in Steady State (FLORIS) static lookup table wind farm controller baseline.
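The abstract names Q-learning as the core learning rule. As a rough illustration of that building block only, the sketch below implements generic epsilon-greedy tabular Q-learning; the wind-direction/yaw-offset discretization, the reward signal, and the `env` interface are hypothetical assumptions, not the paper's GARLIC algorithm, which, per its name, layers gradient approximation and incremental comparison on top of the basic update and handles wake propagation delays.

```python
import numpy as np

# Minimal tabular Q-learning sketch for a single turbine-level agent.
# NOT the paper's GARLIC implementation: the state/action bins, reward,
# and `env` interface are hypothetical placeholders for illustration.

N_WIND_BINS = 12   # discretized wind-direction states (assumed)
N_YAW_BINS = 7     # discretized yaw-offset actions (assumed)
ALPHA = 0.1        # learning rate
GAMMA = 0.9        # discount factor
EPSILON = 0.1      # exploration probability

Q = np.zeros((N_WIND_BINS, N_YAW_BINS))

def choose_action(state: int, rng: np.random.Generator) -> int:
    """Epsilon-greedy selection over the discretized yaw offsets."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_YAW_BINS))
    return int(np.argmax(Q[state]))

def q_update(state: int, action: int, reward: float, next_state: int) -> None:
    """One-step Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

def train(env, episodes: int = 1000, seed: int = 0) -> None:
    """Hypothetical training loop; `env.step(action)` is assumed to
    return (next_state, reward, done), where the reward is the farm
    power observed after the wake has propagated downstream."""
    rng = np.random.default_rng(seed)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = choose_action(state, rng)
            next_state, reward, done = env.step(action)
            q_update(state, action, reward, next_state)
            state = next_state
```

Conceptually, the learned Q-table plays a role analogous to the static FLORIS lookup table baseline mentioned in the abstract, except that it is updated online from observed rewards rather than precomputed from a steady-state wake model.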
