Article

An optimal control problem without control costs

Journal

MATHEMATICAL BIOSCIENCES AND ENGINEERING
Volume 20, Issue 3, Pages 5159-5168

Publisher

AMER INST MATHEMATICAL SCIENCES-AIMS
DOI: 10.3934/mbe.2023239

Keywords

stochastic optimal control; diffusion processes; first-passage time; dynamic programming; partial differential equation


This article investigates a controlled two-dimensional diffusion process and determines the control that minimizes the expected cost. It also obtains explicit expressions for the value function, subject to the appropriate boundary conditions, in important particular cases, using the method of similarity solutions for a non-linear second-order partial differential equation.
A two-dimensional diffusion process is controlled until it enters a given subset of R2. The aim is to find the control that minimizes the expected value of a cost function in which there are no control costs. The optimal control can be expressed in terms of the value function, which gives the smallest value that the expected cost can take. To obtain the value function, one can make use of dynamic programming to find the differential equation it satisfies. This differential equation is a non-linear second-order partial differential equation. We find explicit solutions to this non-linear equation, subject to the appropriate boundary conditions, in important particular cases. The method of similarity solutions is used.
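The setup described above can be illustrated with a minimal Monte Carlo sketch. The paper's specific dynamics, cost function, and stopping set are not stated here, so everything concrete below is an illustrative assumption: a diffusion dX = u(X) dt + σ dW stopped on entering the unit disk, a running cost q ≡ 1 (so the expected cost is a mean first-passage time, with no control cost, as in the abstract), and two candidate bounded feedback controls. A real treatment would solve the value function's non-linear PDE via dynamic programming rather than simulate.

```python
import numpy as np

def expected_cost(control, x0=(2.0, 0.0), sigma=1.0, r=1.0,
                  dt=1e-2, n_paths=200, t_max=10.0, seed=0):
    """Monte Carlo estimate of the expected cost E[T] under a given
    feedback control, where T is the first time the Euler-Maruyama
    path of dX = control(X) dt + sigma dW enters the disk |x| <= r.
    Running cost q(x) = 1 and no control cost, so the expected cost
    reduces to the (capped) mean first-passage time.  All model
    choices here are illustrative, not the paper's actual model."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        t = 0.0
        # Simulate until the path enters the stopping set or t_max is hit.
        while np.linalg.norm(x) > r and t < t_max:
            x += control(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
            t += dt
        total += t
    return total / n_paths

# Two candidate controls with |u| <= 1: drift straight toward the
# stopping set vs. no drift at all.  With no penalty on the control,
# steering inward at full strength should give the smaller expected cost.
toward = lambda x: -x / np.linalg.norm(x)
idle = lambda x: np.zeros(2)
```

Comparing `expected_cost(toward)` with `expected_cost(idle)` shows the qualitative effect: because the cost functional contains no control term, using the maximal admissible drift toward the target set reduces the expected cost, which is the kind of bang-bang behaviour one expects from such problems.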


