Journal
PHYSICS OF FLUIDS
Volume 33, Issue 3
Publisher
AIP Publishing
DOI: 10.1063/5.0037371
Funding
- Research Grants Council of Hong Kong under the General Research Fund [15249316, 15214418]
- Petromaks II project [280625]
This study demonstrates the feasibility and effectiveness of deep reinforcement learning for active flow control under weakly turbulent conditions, achieving a drag reduction of around 30% and exploring the optimal sensor network layout.
Machine learning has recently become a promising technique in fluid mechanics, especially for active flow control (AFC) applications. A recent work [Rabault et al., J. Fluid Mech. 865, 281-302 (2019)] demonstrated the feasibility and effectiveness of deep reinforcement learning (DRL) in performing AFC over a circular cylinder at Re=100, i.e., in the laminar flow regime. As a follow-up study, we investigate the same AFC problem at an intermediate Reynolds number, i.e., Re=1000, where the weak turbulence in the flow poses great challenges to the control. The results show that the DRL agent can still find effective control strategies, but requires many more episodes during learning. A remarkable drag reduction of around 30% is achieved, accompanied by an elongation of the recirculation bubble and a reduction of turbulent fluctuations in the cylinder wake. Furthermore, we perform a sensitivity analysis on the learnt control strategies to explore the optimal layout of the sensor network. To the best of our knowledge, this study is the first successful application of DRL to AFC in weakly turbulent conditions. It therefore sets a new milestone in progressing toward AFC in strongly turbulent flows.
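The control loop described in the abstract — an agent reading pressure probes in the wake and setting jet actuation so as to minimize drag — can be sketched as follows. This is a minimal, self-contained toy, not the authors' solver: the environment class, its dynamics, the probe count, and the reward weights are all illustrative assumptions standing in for the CFD simulation coupled to the DRL agent.

```python
import random

# Hypothetical stand-in for the CFD environment: observations are
# pressure-probe readings in the cylinder wake, and the action sets the
# mass-flow rate of the synthetic jets on the cylinder surface.
class ToyCylinderEnv:
    def __init__(self, n_probes=151, seed=0):
        self.n_probes = n_probes          # number of pressure sensors (assumed)
        self.rng = random.Random(seed)
        self.drag = 1.0                   # illustrative baseline drag coefficient
        self.lift = 0.0

    def reset(self):
        self.drag, self.lift = 1.0, 0.0
        return self._probe_readings()

    def _probe_readings(self):
        # Noisy pressure signals standing in for real probe data.
        return [self.rng.gauss(0.0, 0.1) for _ in range(self.n_probes)]

    def step(self, action):
        # Toy dynamics: blowing/suction gradually reduces drag, floor at 0.7,
        # mimicking the ~30% reduction reported in the paper.
        self.drag = max(0.7, self.drag - 0.01 * abs(action))
        self.lift = 0.9 * self.lift + 0.1 * action
        # Reward penalizing drag and (more weakly) lift oscillations — a common
        # form in this line of work, assumed here rather than taken verbatim.
        reward = -self.drag - 0.2 * abs(self.lift)
        return self._probe_readings(), reward

def run_episode(env, policy, n_steps=50):
    """Roll out one episode and return the cumulative reward."""
    obs = env.reset()
    total = 0.0
    for _ in range(n_steps):
        obs, r = env.step(policy(obs))
        total += r
    return total

# A real study would train a policy network (e.g., with PPO) on many such
# episodes; here a constant-blowing policy suffices to exercise the loop.
env = ToyCylinderEnv()
episode_return = run_episode(env, policy=lambda obs: 0.5)
```

In the actual setting, each `step` corresponds to advancing the flow solver over a fixed number of numerical time steps with the chosen actuation held (or smoothed), which is part of why learning at Re=1000 needs far more episodes than the laminar case.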