Methods for Stochastic Optimal Control under State Constraints
This thesis considers several approaches to solving stochastic optimal control problems with state constraints. The motivating problem is optimal control of an energy buffer in a hybrid vehicle, although applications abound in many other areas.

Stochastic optimal control problems can be solved via the so-called Hamilton-Jacobi-Bellman (HJB) equation. State constraints give rise to boundary conditions for the HJB equation under which the value function tends to infinity as the state approaches the boundary, which makes this partial differential equation difficult to solve numerically.

Different approaches to avoiding infinite values on the boundary are investigated. First, we consider a logarithmic transformation of the value function. This results in an exact linearization, turning the HJB equation into an eigenvalue problem in the one-dimensional case, and also in higher dimensions, although then with certain restrictions on the relation between the noise and the control cost. Then, for a more general problem formulation, we introduce a different transformation which yields a nonlinear problem. We investigate under what conditions the boundary constraints are well behaved, and solve example problems using a collocation method, demonstrating that in those cases a small number of collocation points suffices to obtain good solutions. Finally, we consider a method that starts from the Fokker-Planck equation. This yields an equivalent problem in which the value function of the HJB equation need not be computed explicitly; instead, the probability density function of the closed-loop system is computed. This fact can be exploited to focus computational resources on the most relevant parts of the state space.
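To make the logarithmic transformation concrete, the following is a sketch in the one-dimensional average-cost setting, with notation chosen here for illustration (the thesis's own formulation may differ). For dynamics $dx = (f(x)+u)\,dt + \sigma\,dW$ and stage cost $q(x) + \tfrac{r}{2}u^2$, the ergodic HJB equation is

```latex
\rho = \min_u \left[ q(x) + \tfrac{r}{2}u^2 + (f(x)+u)\,V'(x) + \tfrac{\sigma^2}{2}V''(x) \right],
\qquad u^* = -\tfrac{1}{r}V'(x).
```

Substituting $V = -\lambda \log Z$ gives quadratic terms $\bigl(-\tfrac{\lambda^2}{2r} + \tfrac{\lambda\sigma^2}{2}\bigr)(Z'/Z)^2$, which cancel exactly when $\lambda = r\sigma^2$; this is the restriction tying the noise level to the control cost. What remains is a linear eigenvalue problem in $Z$,

```latex
\tfrac{\sigma^2}{2}\,Z'' + f\,Z' - \tfrac{q}{\lambda}\,Z = -\tfrac{\rho}{\lambda}\,Z ,
```

where the optimal average cost $\rho$ appears as an eigenvalue. Note that since $Z = e^{-V/\lambda}$, the value function blowing up at a constrained boundary corresponds simply to $Z \to 0$ there, i.e. a homogeneous Dirichlet condition.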
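The Fokker-Planck reformulation can likewise be sketched in one dimension, again with illustrative notation of our own choosing. Instead of solving for the value function, one optimizes directly over the feedback law $u(x)$ and the stationary density $p(x)$ of the closed-loop system:

```latex
\min_{u,\,p} \int \left( q(x) + \tfrac{r}{2}\,u(x)^2 \right) p(x)\,dx
\quad \text{s.t.} \quad
0 = -\partial_x\!\bigl[(f(x)+u(x))\,p(x)\bigr] + \tfrac{\sigma^2}{2}\,\partial_x^2\, p(x),
\qquad \int p\,dx = 1,\; p \ge 0 .
```

Here the stationarity constraint is the steady-state Fokker-Planck equation of the controlled diffusion, and a state constraint becomes a restriction on the support of $p$. Since $p$ is small wherever the closed-loop system rarely goes, discretization effort can be concentrated where $p$ carries most of its mass, which is the sense in which computational resources can be focused on the most relevant parts of the state space.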
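As a minimal numerical illustration of collocation with few points (a toy example constructed here, not taken from the thesis), consider the model eigenvalue problem $-u'' = \mu u$ on $(0,1)$ with Dirichlet conditions $u(0)=u(1)=0$, whose exact eigenvalues are $\mu_k = k^2\pi^2$. A Chebyshev collocation scheme in the style of Trefethen's `cheb` construction recovers the lowest eigenvalues to high accuracy with only 17 nodes; all names (`cheb`, `n`, `lam`) are our own.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1]
    (n+1 points, Trefethen-style construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via negative row sums
    return D, x

# Model problem: -u'' = mu * u on (0, 1), u(0) = u(1) = 0.
n = 16
D, x = cheb(n)
D = 2.0 * D                              # rescale derivative from [-1, 1] to [0, 1]
D2 = D @ D                               # second-derivative matrix
A = -D2[1:-1, 1:-1]                      # Dirichlet BCs: drop boundary rows/columns
lam = np.sort(np.linalg.eigvals(A).real) # low eigenvalues approximate (k*pi)**2
```

Only the smallest eigenvalues are trustworthy at this resolution, but those are exactly the ones of interest when the eigenvalue plays the role of an optimal average cost; this is the sense in which a small number of collocation points can suffice.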