Abstract:
This thesis consists of five chapters. Its main objective is to discuss and solve the problem of optimal control for systems described by deterministic and stochastic ordinary differential equations. The principle of optimality is considered for both classes of equations. Chapter 1 surveys a complete formulation of the control problem.
Chapters 2 and 3 investigate the two well-known methods for solving optimal control problems, namely the maximum principle and dynamic programming. Both methods originate in the variational principle, which in turn goes back to Hamilton's principle in analytical mechanics. In fact, the variational principle yields both the Hamiltonian function and the Hamilton-Jacobi-Bellman (HJB) equation; a relation between the two can be found in [10]. Applications of both methods are also given in Chapters 2 and 3.
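To fix ideas, in one common notation (assumed here, since the abstract itself fixes none): for dynamics $\dot{x} = f(x,u,t)$, running cost $L$, costate $p$, and value function $V$, the Hamiltonian and the HJB equation read
\[
H(x,u,p,t) = L(x,u,t) + p^{\top} f(x,u,t),
\qquad
-\frac{\partial V}{\partial t} = \min_{u}\, H\!\Bigl(x,u,\frac{\partial V}{\partial x},t\Bigr),
\]
so that, under suitable smoothness assumptions, the costate of the maximum principle is identified along the optimal trajectory with the gradient $\partial V/\partial x$ of the value function, which is precisely the relation referred to above.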
Chapter 4 discusses the control problem in the stochastic case and surveys both methods for solving the stochastic optimal control problem, namely the stochastic maximum principle and dynamic programming; a sketch of the resulting stochastic HJB equation is given below. Chapter 5 is devoted to applications, where the maximum principle and dynamic programming are applied to solve several concrete problems.
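As a minimal sketch of the stochastic counterpart, again in assumed notation: for dynamics $dx = f(x,u)\,dt + \sigma(x,u)\,dW_t$ driven by a Wiener process $W_t$, the HJB equation acquires a second-order diffusion term,
\[
-\frac{\partial V}{\partial t} = \min_{u}\left\{ L(x,u) + \Bigl(\frac{\partial V}{\partial x}\Bigr)^{\!\top} f(x,u) + \frac{1}{2}\,\operatorname{tr}\!\Bigl(\sigma(x,u)\,\sigma(x,u)^{\top}\,\frac{\partial^{2} V}{\partial x^{2}}\Bigr)\right\},
\]
reflecting Itô's formula applied to $V$ along the controlled diffusion.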