Monte-Carlo utility estimates for Bayesian reinforcement learning
Paper in proceedings, 2013

This paper introduces a set of algorithms for Monte-Carlo Bayesian reinforcement learning. Firstly, Monte-Carlo estimation of upper bounds on the Bayes-optimal value function is employed to construct an optimistic policy. Secondly, gradient-based algorithms for approximate upper and lower bounds are introduced. Finally, we introduce a new class of gradient algorithms for Bayesian Bellman error minimisation. We show theoretically that the gradient methods are sound. Experimentally, we demonstrate the superiority of the upper bound method in terms of reward obtained. However, we also show that the Bayesian Bellman error method is a close second, despite being significantly simpler computationally.
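The first idea, the Monte-Carlo upper bound, admits a compact illustration. Below is a minimal sketch in Python, not the paper's code: the Dirichlet transition posterior, the function names, and the sample counts are all illustrative assumptions. It samples MDPs from the posterior belief, solves each sample by value iteration, and averages the per-sample optimal values; this sample mean upper-bounds the Bayes-optimal value because the expectation of the per-MDP maxima dominates the maximum over policies of the expected value.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, iters=500):
    """Optimal state values for a tabular MDP.
    P: (S, A, S) transition probabilities, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V      # (S, A) action values under current V
        V = Q.max(axis=1)          # greedy Bellman optimality backup
    return V

def mc_upper_bound(alpha, R, start, n_samples=100, gamma=0.95, seed=0):
    """Monte-Carlo upper bound on the Bayes-optimal value at `start`.
    alpha: (S, A, S) Dirichlet posterior counts over transitions.
    (Illustrative assumption: a Dirichlet belief over a known-reward MDP.)"""
    rng = np.random.default_rng(seed)
    S, A, _ = alpha.shape
    vals = []
    for _ in range(n_samples):
        # Draw one complete MDP from the posterior belief.
        P = np.array([[rng.dirichlet(alpha[s, a]) for a in range(A)]
                      for s in range(S)])
        # Solve the sampled MDP exactly and record its optimal value.
        vals.append(value_iteration(P, R, gamma)[start])
    # Mean of per-sample optima >= value of any single policy in expectation.
    return float(np.mean(vals))
```

An optimistic policy in the spirit of the paper's first method would then act greedily with respect to such bound estimates; the gradient-based bounds and the Bayesian Bellman error minimisation described in the abstract are distinct constructions not shown here.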

Authors

Christos Dimitrakakis

Chalmers, Computer Science and Engineering, Computing Science

Proceedings of the IEEE Conference on Decision and Control

0743-1546 (ISSN) 2576-2370 (eISSN)

pp. 7303-7308, article no. 6761048

Areas of Advance

Information and Communication Technology

Subject Categories

Computational Mathematics

Probability Theory and Statistics

Control Engineering

Computer Science

DOI

10.1109/CDC.2013.6761048

ISBN

978-1-4673-5717-3

More information

Last updated

2024-01-03