Article

Data-efficient deep reinforcement learning with expert demonstration for active flow control

Journal

PHYSICS OF FLUIDS
Volume 34, Issue 11, Pages: -

Publisher

AIP Publishing
DOI: 10.1063/5.0120285

Keywords

-

Funding

  1. Natural Science Foundation of Zhejiang Province
  2. Fundamental Research Funds for the Central Universities
  3. [LY21A020010]
  4. [226-2022-00155]

Abstract

Deep reinforcement learning (RL) is capable of identifying and modifying strategies for active flow control. However, the classic formulation of deep RL requires a lengthy active-exploration process. This paper introduces expert demonstration into a classic off-policy RL algorithm, the soft actor-critic algorithm, for application to vortex-induced vibration problems. This combined online-learning framework is applied to an oscillator wake environment and a Navier-Stokes environment, with expert demonstrations obtained from the pole-placement method and surrogate-model optimization. The results show that the soft actor-critic framework combined with expert demonstration enables rapid learning of active flow control strategies through a combination of prior demonstration data and online experience. This study develops a new data-efficient RL approach for discovering active flow control strategies for vortex-induced vibration, providing a more practical methodology for industrial applications. Published under an exclusive license by AIP Publishing.
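The core idea the abstract describes, combining prior demonstration data with online experience in an off-policy learner, can be sketched as a replay buffer pre-seeded with expert transitions. This is a minimal illustrative sketch, not the authors' implementation: the class name, transition format, and the choice to keep demonstrations permanently while online experience cycles through a ring buffer are all assumptions for illustration.

```python
import random


class DemoSeededReplayBuffer:
    """Replay buffer pre-filled with expert demonstration transitions.

    An off-policy agent (e.g., soft actor-critic) can sample from this
    buffer from the very first gradient step, so early updates are driven
    by expert behavior rather than random exploration. Hypothetical sketch.
    """

    def __init__(self, capacity, demo_transitions):
        self.capacity = capacity
        self.demo = list(demo_transitions)  # demonstrations are kept permanently
        self.online = []                    # online experience, FIFO once full
        self._next = 0                      # ring-buffer write index

    def add(self, transition):
        # Online experience fills the capacity left over after the demos.
        online_capacity = self.capacity - len(self.demo)
        if len(self.online) < online_capacity:
            self.online.append(transition)
        else:
            # Overwrite the oldest online transition; demos are never evicted.
            self.online[self._next] = transition
            self._next = (self._next + 1) % online_capacity

    def sample(self, batch_size):
        # Demonstrations and online experience are sampled jointly, so early
        # batches are dominated by expert data and the mix shifts toward
        # online experience as training proceeds.
        return random.sample(self.demo + self.online, batch_size)
```

Usage follows the usual off-policy loop: build the buffer from demonstrations (here, those would come from the pole-placement or surrogate-optimized controllers), then `add` each environment transition and `sample` mini-batches for the critic and actor updates.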
