State constrained control based on linearization of the Hamilton-Jacobi-Bellman equation
Paper in proceedings, 2010

A method based on optimization is presented for continuous-time, state-constrained, stochastic control problems. The method applies to systems where the control signal and the disturbance both enter affinely, and it has one main tuning parameter, which determines the control activity. If the disturbance covariance is unknown, it can also be used as a tuning parameter (matrix) to adjust the control directions in an intuitive way. Optimal control problems for this type of system result in Hamilton-Jacobi-Bellman (HJB) equations that are problematic to solve because of nonlinearity and infinite boundary conditions. However, by applying a logarithmic transformation we show how and when the HJB equation can be transformed into a linear eigenvalue problem, for which analytical solutions sometimes exist and which otherwise can readily be solved with standard numerical methods. Necessary and sufficient conditions for when the method can be applied are derived, and their physical interpretation is discussed. A MIMO buffer control problem is used as an illustration.
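The abstract does not reproduce the equations, but the linearizing transformation it refers to is of a standard type; the following sketch uses assumed notation (f, G, B, R, \ell, \lambda, \psi) not taken from the paper. For control-affine dynamics $dx = (f(x) + G(x)u)\,dt + B(x)\,dw$ with running cost $\ell(x) + \tfrac{1}{2} u^\top R u$ and diffusion matrix $D(x) = B(x)B(x)^\top$, minimizing over $u$ in the stationary HJB equation gives

    \rho = \ell(x) + f(x)^\top \nabla V - \tfrac{1}{2}\,\nabla V^\top G R^{-1} G^\top \nabla V + \tfrac{1}{2}\operatorname{tr}\!\big(D(x)\,\nabla^2 V\big),

which is nonlinear in the value function $V$. Substituting $V = -\lambda \log \psi$, the quadratic gradient term cancels against the diffusion term precisely when $\lambda\, G R^{-1} G^\top = D$, and what remains is linear in $\psi$:

    \tfrac{1}{2}\operatorname{tr}\!\big(D\,\nabla^2 \psi\big) + f^\top \nabla \psi - \tfrac{\ell}{\lambda}\,\psi = -\tfrac{\rho}{\lambda}\,\psi,

a linear eigenvalue problem with eigenvalue $-\rho/\lambda$ and boundary condition $\psi = 0$ on the boundary of the admissible state set (corresponding to $V \to \infty$ there, i.e. the state constraint).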

Authors

Torsten Wik

Chalmers, Signals and Systems, Systems and control

Per Rutquist

Tomlab Optimization AB

Claes Breitholtz

Chalmers, Signals and Systems, Systems and control

Proceedings of the 49th IEEE Conference on Decision and Control, Atlanta, 15-17 December 2010

0191-2216 (ISSN)

5192-5197 (pages)
978-1-4244-7744-9 (ISBN)

Subject Categories

Computational Mathematics

Roots

Basic sciences

DOI

10.1109/CDC.2010.5718012
