State-Constrained Control Based on Linearization of the Hamilton-Jacobi-Bellman Equation
Other conference contribution, 2010

An optimization-based method is presented for continuous-time, state-constrained, stochastic control problems. The method applies to systems in which the control signal and the disturbance both enter affinely, and it has one main tuning parameter, which determines the control activity. If the disturbance covariance is unknown, it can also be used as a tuning parameter (matrix) to adjust the control directions in an intuitive way. Optimal control problems for this type of system result in Hamilton-Jacobi-Bellman (HJB) equations that are problematic to solve because of nonlinearity and infinite boundary conditions. However, by applying a logarithmic transformation we show how and when the HJB equation can be transformed into a linear eigenvalue problem, for which analytical solutions sometimes exist and which can otherwise be solved readily with standard numerical methods. Necessary and sufficient conditions for when the method can be applied are derived, and their physical interpretation is discussed. A MIMO buffer control problem is used as an illustration.
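To illustrate the transformation mentioned in the abstract, here is a sketch of the standard logarithmic-transform argument for this class of problems; the notation (f, B, G, W, R, \ell, \lambda, Z) is ours and the sign conventions may differ from the paper. Consider dynamics \(dx = (f(x) + B(x)u)\,dt + G(x)\,dw\) with disturbance covariance \(W\), running cost \(\ell(x) + \tfrac{1}{2}u^\top R u\), and the stationary (average-cost) HJB equation

\[ c = \min_u \Big\{ \ell(x) + \tfrac{1}{2}u^\top R u + (f + Bu)^\top \nabla V + \tfrac{1}{2}\,\mathrm{tr}\big(G W G^\top \nabla^2 V\big) \Big\}, \]

which is minimized by \(u^* = -R^{-1}B^\top \nabla V\). Substituting \(u^*\) and the logarithmic transformation \(V = -\lambda \ln Z\) produces the quadratic gradient terms
\(-\tfrac{\lambda^2}{2Z^2}\nabla Z^\top B R^{-1} B^\top \nabla Z + \tfrac{\lambda}{2Z^2}\nabla Z^\top G W G^\top \nabla Z\),
which cancel exactly when \(\lambda\,B R^{-1} B^\top = G W G^\top\). Under this compatibility condition the HJB equation reduces to the linear eigenvalue problem

\[ \ell(x)\,Z - \lambda f^\top \nabla Z - \tfrac{\lambda}{2}\,\mathrm{tr}\big(G W G^\top \nabla^2 Z\big) = c\,Z, \qquad Z = 0 \ \text{on the constraint boundary}, \]

where the boundary condition \(Z = 0\) replaces the infinite boundary values of \(V\).

A minimal numerical illustration of the resulting eigenvalue problem (a toy example of ours, not taken from the paper): a one-dimensional integrator \(dx = u\,dt + \sigma\,dw\) confined to \((0,1)\) with control cost \(\tfrac{r}{2}u^2\), discretized with finite differences.

import numpy as np

# Toy example (not from the paper): 1-D dynamics dx = u dt + sigma dw on the
# constrained interval (0, 1), running cost (r/2) u^2, no state cost.
# The compatibility condition gives lambda = r * sigma**2, and the transformed
# stationary HJB is the linear eigenvalue problem
#     -(lambda * sigma**2 / 2) Z''(x) = c Z(x),   Z(0) = Z(1) = 0,
# whose smallest eigenvalue is c = (lambda * sigma**2 / 2) * pi**2
# with eigenfunction Z(x) = sin(pi x).

sigma = 0.5                      # noise intensity
r = 2.0                          # control weight
lam = r * sigma**2               # lambda from  lambda * B R^{-1} B^T = G W G^T

n = 200                          # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Second-difference matrix with Z = 0 at the boundaries, encoding the
# infinite boundary values of the original value function V.
D2 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2

L = -(lam * sigma**2 / 2.0) * D2      # linear operator of the eigenvalue problem

eigvals, eigvecs = np.linalg.eigh(L)  # symmetric, so eigh returns sorted values
c = eigvals[0]                        # smallest eigenvalue = optimal average cost
Z = np.abs(eigvecs[:, 0])             # ground-state eigenfunction (sign-definite)

V = -lam * np.log(Z)                  # value function, up to an additive constant
u = -np.gradient(V, x) / r            # optimal feedback u*(x) = -R^{-1} B^T dV/dx

print(f"numerical c  = {c:.4f}")
print(f"analytical c = {lam * sigma**2 / 2.0 * np.pi**2:.4f}")

The recovered feedback pushes the state away from the constraint boundaries and vanishes at the interval midpoint, which matches the intuition that the single tuning parameter (here lam, set by r and sigma) scales the control activity.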

HJB

Optimal control

Authors

Torsten Wik

Chalmers, Signals and Systems, Systems and Control

Per Rutquist

Chalmers, Signals and Systems, Systems and Control

Claes Breitholtz

Chalmers, Signals and Systems, Systems and Control

15th Nordic Process Control Workshop, Lund

Subject categories

Computational Mathematics

Other Engineering and Technologies

More information

Created

2017-10-07