Article

Robust Optimal Well Control using an Adaptive Multigrid Reinforcement Learning Framework

Journal

MATHEMATICAL GEOSCIENCES
Volume 55, Issue 3, Pages 345-375

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s11004-022-10033-x

Keywords

Reinforcement learning; Adaptive multigrid framework; Transfer learning; Robust optimal control

Abstract

Reinforcement learning (RL) is a promising tool for solving robust optimal well control problems, where the model parameters are highly uncertain and the system is partially observable in practice. However, learning robust control policies with RL typically requires a large number of simulations, which can become computationally intractable when each simulation is expensive. To address this bottleneck, an adaptive multigrid RL framework is introduced, inspired by the principles of geometric multigrid methods used in iterative numerical algorithms. Control policies are initially learned using computationally efficient low-fidelity simulations with a coarse grid discretization of the underlying partial differential equations (PDEs). The simulation fidelity is then increased adaptively towards the highest-fidelity simulation, which corresponds to the finest discretization of the model domain. The proposed framework is demonstrated using a state-of-the-art, model-free, policy-based RL algorithm, namely proximal policy optimization (PPO). Results are shown for two case studies of robust optimal well control problems inspired by the SPE-10 model 2 benchmark. Prominent gains in computational efficiency are observed, with the framework saving around 60-70% of the computational cost of its single fine-grid counterpart.
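
To make the coarse-to-fine schedule concrete, below is a minimal, self-contained Python sketch of the general idea, not the authors' implementation. Every name in it is an illustrative assumption: ToyPDEEnv stands in for a reservoir simulator at a given grid fidelity, the mock quality update replaces a PPO iteration, and the plateau test is one plausible choice of adaptive switching criterion (the paper's actual criterion may differ).

```python
# Hypothetical sketch of an adaptive multigrid RL schedule: train on cheap
# coarse grids first, refine the grid once learning stalls. All names and
# numbers here are illustrative assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

class ToyPDEEnv:
    """Stand-in for a PDE-based well-control simulator at a given grid fidelity."""

    def __init__(self, nx):
        self.nx = nx                  # grid resolution per axis
        self.cost_per_step = nx ** 2  # crude proxy for simulation expense

    def episode_return(self, policy_quality):
        # Coarser grids yield noisier (lower-fidelity) reward estimates.
        return policy_quality + rng.normal(0.0, 1.0 / self.nx)

def plateaued(history, window=20, tol=0.02):
    """Assumed switching rule: refine the grid once returns stop improving."""
    if len(history) < 2 * window:
        return False
    recent = np.mean(history[-window:])
    previous = np.mean(history[-2 * window:-window])
    return abs(recent - previous) < tol

grid_levels = [15, 30, 60]  # coarse -> fine discretizations (illustrative)
quality, total_cost = 0.0, 0

for nx in grid_levels:
    env, history = ToyPDEEnv(nx), []
    while not plateaued(history):
        # Mock policy improvement; in the paper this would be a PPO iteration,
        # with the coarse-level policy warm-starting the finer level.
        quality += 0.01 * (1.0 - quality)
        history.append(env.episode_return(quality))
        total_cost += env.cost_per_step
    print(f"grid {nx}x{nx}: mean return {np.mean(history[-20:]):.3f}, "
          f"cumulative cost {total_cost}")
```

Running the sketch shows the intended cost profile: most updates are taken on the cheap coarse grid, and the finer grids are used only to refine an already reasonable policy, which loosely mirrors how the reported 60-70% savings over a single fine-grid run would arise.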
