Article

Active flutter control of long-span bridges via deep reinforcement learning: A proof of concept

Journal

WIND AND STRUCTURES
Volume 36, Issue 5, Pages 321-331

Publisher

TECHNO-PRESS
DOI: 10.12989/was.2023.36.5.321

Keywords

active control; deep neural networks; flutter; long-span bridges; reinforcement learning

Abstract

Aeroelastic instability (i.e., flutter) is a critical issue that threatens the safety of flexible bridges as span lengths increase. As a promising technique for flutter prevention, active aerodynamic control using auxiliary surfaces attached to the bridge deck (e.g., winglets and flaps) can be utilized to extract stabilizing forces from the surrounding wind flow. Conventional controllers for active aerodynamic control are usually designed using linear model-based schemes [e.g., the linear quadratic regulator (LQR) and H-infinity control]. In addition to suffering from model inaccuracies, the resulting linear controller may not perform well given the high complexity of the inherently nonlinear wind-bridge-control system. To this end, this study proposes a nonlinear model-free controller based on deep reinforcement learning for active flutter control of long-span bridges. Specifically, a deep neural network (DNN), with its powerful ability to approximate nonlinear functions, is introduced to map the system state (e.g., the motion of the bridge deck) to the control command (e.g., the reference position of the actively controlled surface). The DNN weights are obtained by interacting with the wind-bridge-control environment in a trial-and-error fashion (hence an explicit model of the system dynamics is not required) using the deep deterministic policy gradient (DDPG) reinforcement learning algorithm, chosen for its ability to handle continuous actions with high training efficiency. As a proof of concept, numerical examples on active flutter control of a flat plate and a bridge deck are conducted to demonstrate the good performance of the proposed scheme.
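The controller described above is a deterministic DNN actor that maps the measured deck state to a bounded, continuous surface command, with its weights trained via DDPG. A minimal NumPy-only sketch of such an actor is shown below; the layer sizes, the choice of state variables, and the actuator limit are illustrative assumptions, not the authors' actual configuration, and the weights here are random placeholders for what DDPG training would produce.

```python
import numpy as np

class DeterministicActor:
    """Tiny DNN policy: maps a bridge-deck state vector to a bounded,
    continuous control-surface command, as in DDPG's deterministic actor.
    Weights are randomly initialized stand-ins for DDPG-trained values."""

    def __init__(self, state_dim=4, hidden=32, max_angle_rad=0.17, seed=0):
        rng = np.random.default_rng(seed)
        # Two small dense layers; in DDPG these weights are updated by
        # the deterministic policy gradient from a learned critic.
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)
        self.max_angle = max_angle_rad  # actuator saturation (~10 deg)

    def act(self, state):
        h = np.tanh(state @ self.W1 + self.b1)   # hidden features
        raw = h @ self.W2 + self.b2              # unbounded command
        return self.max_angle * np.tanh(raw)     # respect actuator limits

# Hypothetical state: [heave, pitch, heave rate, pitch rate] of the deck.
actor = DeterministicActor()
cmd = actor.act(np.array([0.01, 0.002, -0.05, 0.01]))
```

The `tanh` output squashing is the standard way a DDPG actor enforces continuous action bounds, matching the abstract's point that DDPG was selected for its handling of continuous actions.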

