Article

Stochastic optimal well control in subsurface reservoirs using reinforcement learning

Journal

Engineering Applications of Artificial Intelligence

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.engappai.2022.105106

Keywords

Reinforcement learning; Stochastic optimal control; Subsurface flow control; Artificial intelligence in reservoir management; Optimal control for partially observable system

Funding

  1. Ali Danesh scholarship [EP/V048899/1]
  2. EPSRC, UK

This study introduces a model-free reinforcement learning (RL) framework for the robust optimal well control problem, using deep RL algorithms to learn policies that map saturation and pressure values at well locations to valve openings. Model parameter uncertainties are handled by a domain randomization scheme, and numerical results on two subsurface flow test cases with distinct uncertainty distributions demonstrate the robustness of the proposed RL approach on unseen samples.
We present a case study of a model-free reinforcement learning (RL) framework for solving stochastic optimal control under a predefined parameter uncertainty distribution and partial observability. We focus on the robust optimal well control problem, a subject of intensive research in subsurface reservoir management. For this problem, the system is only partially observed because data are available only at well locations, and the model parameters are highly uncertain due to the sparsity of available field data. In principle, RL algorithms can learn optimal action policies - maps from states to actions - that maximize a numerical reward signal; in deep RL, this mapping from state to action is parameterized by a deep neural network. In the RL formulation of the robust optimal well control problem, the states are the saturation and pressure values at well locations, while the actions are the valve openings controlling flow through the wells. The numerical reward is the total sweep efficiency, and the uncertain model parameter is the subsurface permeability field. Model parameter uncertainty is handled by a domain randomization scheme that exploits cluster analysis of the uncertainty distribution. We present numerical results using two state-of-the-art RL algorithms, proximal policy optimization (PPO) and advantage actor-critic (A2C), on two subsurface flow test cases representing two distinct uncertainty distributions of the permeability field. The results are benchmarked against optimization results obtained using a differential evolution algorithm. Furthermore, we demonstrate the robustness of the proposed use of RL by evaluating the learned control policy on samples drawn from the parameter uncertainty distribution that were not seen during training.
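To make the RL formulation in the abstract concrete, the sketch below casts it as a gym-style environment: states are saturation and pressure values at well locations, actions are valve openings, the reward is a proxy for sweep efficiency, and each episode samples a permeability realization by first drawing a cluster and then a member of that cluster (the cluster-based domain randomization idea). Everything here - the class name, the surrogate one-line dynamics, and the precomputed cluster labels - is a hypothetical stand-in for the authors' reservoir simulator and clustering pipeline, not their implementation.

```python
import numpy as np

class WellControlEnv:
    """Toy sketch of the RL formulation (hypothetical names and dynamics).

    State  : saturation and pressure values at well locations (partial observation).
    Action : valve openings in [0, 1] controlling flow through the wells.
    Reward : proxy for sweep efficiency at the current control step.
    """

    def __init__(self, perm_fields, cluster_labels, n_wells=4, horizon=10, rng=None):
        # perm_fields: array of permeability realizations from the uncertainty
        # distribution; cluster_labels: precomputed cluster index per realization.
        self.perm_fields = np.asarray(perm_fields, dtype=float)
        self.cluster_labels = np.asarray(cluster_labels)
        self.n_wells = n_wells
        self.horizon = horizon
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def reset(self):
        # Cluster-based domain randomization: draw a cluster uniformly, then a
        # realization within it, so each episode sees a different permeability field.
        cluster = self.rng.choice(np.unique(self.cluster_labels))
        members = np.flatnonzero(self.cluster_labels == cluster)
        self.perm = self.perm_fields[self.rng.choice(members)]
        self.t = 0
        self.saturation = np.zeros(self.n_wells)
        self.pressure = np.ones(self.n_wells)
        return self._obs()

    def _obs(self):
        # Partial observability: only values at well locations are returned.
        return np.concatenate([self.saturation, self.pressure])

    def step(self, action):
        action = np.clip(action, 0.0, 1.0)  # valve openings
        # Placeholder surrogate dynamics standing in for a subsurface flow solver:
        # saturation advances faster through high-permeability, open-valve wells.
        k = np.resize(self.perm, self.n_wells)
        self.saturation = np.clip(self.saturation + 0.1 * action * k / k.max(), 0.0, 1.0)
        self.pressure = 1.0 - 0.5 * action
        reward = float(self.saturation.mean())  # proxy for sweep efficiency
        self.t += 1
        return self._obs(), reward, self.t >= self.horizon, {}
```

A policy-gradient learner such as PPO or A2C would then be trained on rollouts of this environment; because the permeability field is resampled every episode, the learned policy is implicitly optimized in expectation over the uncertainty distribution rather than for a single realization.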
