On the infinite-time solution to state-constrained stochastic optimal control
Journal article, 2008

A method is presented for solving the infinite-time Hamilton-Jacobi-Bellman (HJB) equation for certain state-constrained stochastic problems. The HJB equation is reformulated as an eigenvalue problem, such that the principal eigenvalue corresponds to the expected cost per unit time, and the corresponding eigenfunction gives the value function (up to an additive constant) for the optimal control policy. The eigenvalue problem is linear, so fast numerical methods are available for computing the solution.
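
As a rough illustration of the kind of linear eigenvalue formulation the abstract refers to (not the paper's specific derivation), the sketch below discretizes a one-dimensional operator on a constrained interval. Zero boundary values stand in for the state constraint, the largest eigenvalue of the discrete operator gives an estimate of the expected cost per unit time, and the corresponding eigenvector yields a value function up to an additive constant via a logarithmic transform. The dynamics, the state cost q(x) = x^2, the parameters r and sigma, and the matching condition lam = r*sigma^2 are all assumptions chosen for this demonstration.

import numpy as np

# Sketch only: a generic linearized average-cost HJB of Schrodinger type,
#   (sigma^2/2) Z'' - (q/lam) Z = -(rho/lam) Z,   Z = 0 on the constraint boundary,
# obtained from the transform V = -lam*log(Z) under the assumed matching
# condition lam = r*sigma^2. Parameter values are illustrative.
sigma = 1.0              # assumed noise intensity
r = 1.0                  # assumed control-cost weight (running cost q(x) + (r/2)*u^2)
lam = r * sigma**2       # transform parameter (assumed matching condition)
L = 1.0                  # assumed state constraint |x| <= L
n = 400                  # number of interior grid points

x = np.linspace(-L, L, n + 2)[1:-1]   # interior grid; Z = 0 at x = -L and x = +L
h = x[1] - x[0]
q = x**2                              # assumed state cost

# Second-difference matrix with Dirichlet boundary conditions.
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
H = 0.5 * sigma**2 * D2 - np.diag(q / lam)

# Principal eigenpair of the symmetric operator H.
w, v = np.linalg.eigh(H)              # eigenvalues in ascending order
Z = v[:, -1]                          # eigenvector for the largest eigenvalue
Z = Z * np.sign(Z[n // 2])            # the principal eigenfunction is sign-definite
rho = -lam * w[-1]                    # estimate of the expected cost per unit time

V = -lam * np.log(Z)                  # value function, up to an additive constant
print("approximate expected cost per unit time:", rho)

Rescaling Z only shifts V by a constant, consistent with the abstract's statement that the value function is recovered up to an additive constant; the boundary condition Z = 0 makes V grow without bound at the constraint boundary, which is how the state constraint is enforced in this sketch.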

Stochastic optimal control

Dynamic programming

Hamilton-Jacobi-Bellman equation

Authors

Per Rutquist

Chalmers, Signals and Systems, Systems and control

Claes Breitholtz

Chalmers, Signals and Systems, Systems and control

Torsten Wik

Chalmers, Signals and Systems, Systems and control

Automatica

0005-1098 (ISSN)

Vol. 44, Issue 7, pp. 1800-1805

Subject Categories

Computational Mathematics

DOI

10.1016/j.automatica.2007.10.018
