The possibility of controlling a stochastic process is an interesting and stimulating research subject,
with many potential applications in science, engineering, and finance.
In optimal control theory, the state of a system is steered
by minimizing (or maximizing) an objective function of the state. Mathematically, this is a constrained
minimization problem.
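As a hedged illustration (the notation below is ours, not taken from the source), such a problem can be written abstractly as:

```latex
% y: state, u: control, c(y,u) = 0: the (differential) constraint
% encoding the system dynamics -- illustrative notation only.
\min_{u}\; J(y, u) \quad \text{subject to} \quad c(y, u) = 0 .
```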
For stochastic models, the problem is formulated in the current scientific literature
in terms of the expected value of a cost functional of the stochastic state. The solution of this optimal control problem
can be found by solving a Hamilton-Jacobi-Bellman equation.
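For concreteness, a standard formulation of this expectation-based problem (in our own illustrative notation, assuming an Itô diffusion) is:

```latex
% Controlled Ito diffusion and expected cost (illustrative notation):
dX_t = b(X_t, u_t)\,dt + \sigma(X_t)\,dW_t, \qquad
\min_u \; \mathbb{E}\!\left[\int_0^T \ell(X_t, u_t)\,dt + \phi(X_T)\right].
% The value function V satisfies the Hamilton-Jacobi-Bellman equation
\partial_t V + \min_u \left\{ b(x,u)\cdot\nabla V
  + \tfrac{1}{2}\,\mathrm{tr}\!\left(\sigma\sigma^{\top}\nabla^2 V\right)
  + \ell(x,u) \right\} = 0, \qquad V(T, x) = \phi(x).
```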
In this research we propose to use the probability density function (PDF) as the representative of the state of the system,
to define the objective as a functional of the PDF, and to use the Fokker-Planck equation
as the constraint of the optimization problem. This is a new and largely unexplored framework in the field
of optimization.
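In this setting, with the same illustrative symbols as above (again our notation, not the source's), the proposed problem takes the form:

```latex
% Objective as a functional of the PDF rho(x,t):
\min_u \; J(\rho, u)
  = \int_0^T\!\!\int \ell(x, u)\,\rho(x,t)\,dx\,dt
  + \int \phi(x)\,\rho(x,T)\,dx,
% subject to the Fokker-Planck equation as constraint:
\partial_t \rho = -\nabla\cdot\big(b(x,u)\,\rho\big)
  + \tfrac{1}{2}\sum_{i,j}\partial^2_{x_i x_j}\!\big((\sigma\sigma^{\top})_{ij}\,\rho\big),
\qquad \rho(\cdot, 0) = \rho_0 .
```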
The solution can be found by formulating the minimization problem as an optimality system of PDEs,
from which the reduced gradient of the objective and its vanishing point are obtained.
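A sketch of what such an optimality system typically looks like (derived from a formal Lagrangian; the notation and the exact form are our own assumptions, not the source's): the forward Fokker-Planck equation for the PDF, a backward adjoint equation for the multiplier, and an optimality condition giving the reduced gradient.

```latex
% Forward (state) equation: the Fokker-Planck constraint above.
% Backward (adjoint) equation for the multiplier p (illustrative form):
-\partial_t p = b(x,u)\cdot\nabla p
  + \tfrac{1}{2}\,\mathrm{tr}\!\left(\sigma\sigma^{\top}\nabla^2 p\right)
  + \ell(x,u), \qquad p(T, x) = \phi(x).
% Optimality condition: the reduced gradient vanishes,
\nabla_u J = \int \big(\partial_u \ell(x,u)
  + \partial_u b(x,u)\cdot\nabla p\big)\,\rho(x,t)\,dx = 0 .
```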
The aim of this research is to develop numerical methods for solving the optimality system, together with
specialized minimization techniques for this nonlinear optimization problem.
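To make the computational task concrete, the following is a minimal numerical sketch, under assumptions entirely of our choosing: a 1D Ornstein-Uhlenbeck-type drift with a constant scalar control `u`, an explicit finite-difference Fokker-Planck solver, a tracking objective on the final-time PDF, and a plain gradient descent with a finite-difference gradient (standing in for the reduced-gradient machinery of the full optimality system). None of the model choices come from the source.

```python
import numpy as np

# Hedged sketch: control the drift of a 1D Fokker-Planck equation
#   rho_t = -(b(x,u) rho)_x + D rho_xx,   b(x, u) = -(x - u),
# so that the final-time PDF concentrates near a desired point x_d.
# All parameters below are illustrative assumptions.

x = np.linspace(-5.0, 5.0, 101)
dx = x[1] - x[0]
D = 0.5                  # diffusion coefficient sigma^2 / 2
dt, T = 2e-3, 2.0        # explicit scheme: dt * D / dx^2 = 0.1 (stable)
nsteps = int(T / dt)
x_d, alpha = 1.0, 0.01   # desired state and control penalty weight

def solve_fp(u):
    """Evolve the PDF with drift b(x) = -(x - u); zero boundary values."""
    rho = np.exp(-x**2 / 0.5)
    rho /= rho.sum() * dx            # normalized initial Gaussian
    b = -(x - u)
    for _ in range(nsteps):
        flux = b * rho
        adv = np.zeros_like(rho)     # central difference of the flux
        adv[1:-1] = (flux[2:] - flux[:-2]) / (2 * dx)
        dif = np.zeros_like(rho)     # second difference for diffusion
        dif[1:-1] = (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / dx**2
        rho = rho + dt * (-adv + D * dif)
    return rho

def cost(u):
    """J(u) = E[(X_T - x_d)^2] under the controlled PDF + alpha * u^2."""
    rho_T = solve_fp(u)
    return ((x - x_d)**2 * rho_T).sum() * dx + alpha * u**2

# Gradient descent with a finite-difference gradient of the reduced cost.
u, lr, h = 0.0, 0.5, 1e-3
for _ in range(30):
    grad = (cost(u + h) - cost(u - h)) / (2 * h)
    u -= lr * grad

print(f"optimal control u = {u:.3f}, cost J(u) = {cost(u):.4f}")
```

In a full treatment one would replace the finite-difference gradient with the reduced gradient supplied by the adjoint equation, and the plain descent with a more capable scheme (e.g. nonlinear conjugate gradient); this sketch only illustrates the solve-then-optimize loop.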