Infinite horizon average cost dynamic programming subject to ambiguity on conditional distribution
Paper in proceedings, 2016

This paper addresses the optimality of stochastic control strategies based on the infinite horizon average cost criterion, subject to total variation distance ambiguity on the conditional distribution of the controlled process. This stochastic optimal control problem is formulated using minimax theory, in which the minimization is over the control strategies and the maximization is over the conditional distributions. Under the assumption that, for every stationary Markov control law the maximizing conditional distribution of the controlled process is irreducible, we derive a new dynamic programming recursion which minimizes the future ambiguity, and we propose a new policy iteration algorithm. The new dynamic programming recursion includes, in addition to the standard terms, the oscillator semi-norm of the cost-to-go. The maximizing conditional distribution is found via a water-filling algorithm. The implications of our results are demonstrated through an example.
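The inner maximization over conditional distributions within a total variation ball admits a water-filling solution: probability mass is shifted from the lowest-cost states to the highest-cost states until the ambiguity budget is spent. A minimal sketch of this idea, assuming a finite state space with distinct stage costs (the function name and interface are illustrative, not taken from the paper):

```python
import numpy as np

def tv_maximizing_distribution(p, cost, radius):
    """Sketch of the water-filling idea for the inner maximization:
    maximize E_Q[cost] over distributions Q whose total variation
    distance sum(|Q - P|) from the nominal P is at most `radius`.

    Illustrative only: transfers radius/2 of probability mass from
    the lowest-cost states to the highest-cost states; assumes the
    costs are distinct so the two transfers never touch the same state.
    """
    p = np.asarray(p, dtype=float)
    cost = np.asarray(cost, dtype=float)
    q = p.copy()

    # Pour mass onto the highest-cost states first, capping each at 1.
    move = radius / 2.0
    for i in np.argsort(cost)[::-1]:
        add = min(move, 1.0 - q[i])
        q[i] += add
        move -= add
        if move <= 1e-12:
            break
    added = radius / 2.0 - move  # mass actually transferred

    # Drain the same amount from the lowest-cost states, flooring at 0.
    remove = added
    for i in np.argsort(cost):
        take = min(remove, q[i])
        q[i] -= take
        remove -= take
        if remove <= 1e-12:
            break
    return q
```

For a uniform nominal distribution over four states with costs 1..4 and radius 0.4, the sketch moves 0.2 of mass from the cheapest to the most expensive state, raising the expected cost from 2.5 to 3.1 while keeping the total variation distance exactly at the budget.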

Dynamic programming

Optimal control

Aerospace electronics

Process control

Markov processes

Heuristic algorithms

Authors

I. Tzortzis

University of Cyprus

C.D. Charalambous

University of Cyprus

Themistoklis Charalambous

Chalmers, Signals and Systems, Communication, Antennas and Optical Networks

Proceedings of the IEEE Conference on Decision and Control

0743-1546 (ISSN) 2576-2370 (eISSN)

Vol. 2016-February, pp. 7171-7176
978-1-4799-7886-1 (ISBN)

Subject categories

Computer and Information Science

DOI

10.1109/CDC.2015.7403350

More information

Last updated

2023-08-08