Bayesian Reinforcement Learning via Deep, Sparse Sampling
Paper in proceedings, 2020

We address the problem of Bayesian reinforcement learning using efficient model-based online planning. We propose an optimism-free Bayes-adaptive algorithm that induces deeper and sparser exploration, with a theoretical bound on its performance relative to the Bayes-optimal policy and lower computational complexity. The main novelty is the use of a candidate policy generator to generate long-term options in the planning tree (over beliefs), which allows us to create much sparser and deeper trees. Experimental results on different environments show that, in comparison to the state-of-the-art, our algorithm is both more computationally efficient and obtains significantly higher reward over time in discrete environments.
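The abstract describes the planning scheme only at a high level. As a rough, hypothetical illustration of the idea, the Python sketch below shows how a sparse lookahead over sampled beliefs might branch only over a small set of candidate policies rather than over every action, with deep policy rollouts supplying long-horizon value estimates at the leaves. The `Belief` interface (`sample_mdp`, `update`), the `step` method, and all parameters are assumptions made for illustration; this is not the authors' implementation.

```python
# Hypothetical sketch: sparse sampling over a Bayesian belief, branching only
# over candidate policies (not all actions), with deep rollouts at the leaves.
# Belief.sample_mdp(), Belief.update(), and mdp.step() are assumed interfaces.

def sparse_sample(belief, state, depth, n_samples, gamma, candidates):
    """Estimate the Bayes-adaptive value of `state` under `belief`.

    candidates -- list of policies pi(state) -> action used for branching
                  and for long-horizon rollouts at the leaves.
    """
    if depth == 0:
        # Leaf: score each candidate policy by a deep rollout in a sampled
        # MDP, giving depth without further branching.
        return max(rollout(belief, state, pi, horizon=20, gamma=gamma)
                   for pi in candidates)
    best = float('-inf')
    for pi in candidates:                      # branch only over candidates
        action = pi(state)
        total = 0.0
        for _ in range(n_samples):             # sparse sampling of outcomes
            mdp = belief.sample_mdp()          # draw an MDP from the posterior
            next_state, reward = mdp.step(state, action)
            next_belief = belief.update(state, action, reward, next_state)
            total += reward + gamma * sparse_sample(
                next_belief, next_state, depth - 1,
                n_samples, gamma, candidates)
        best = max(best, total / n_samples)
    return best

def rollout(belief, state, pi, horizon, gamma):
    """Monte Carlo return of policy `pi` in one MDP sampled from `belief`."""
    mdp = belief.sample_mdp()
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        state, reward = mdp.step(state, pi(state))
        ret += discount * reward
        discount *= gamma
    return ret
```

In this sketch, restricting the branching factor to a handful of candidate policies is what keeps the tree sparse, while the leaf rollouts provide the depth.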

Authors

Divya Grover

Chalmers, Computer Science and Engineering, Data Science

Debabrota Basu

Chalmers, Computer Science and Engineering, Data Science

Christos Dimitrakakis

University of Oslo

Chalmers, Computer Science and Engineering, Data Science

Proceedings of Machine Learning Research

2640-3498 (eISSN)

Vol. 108, pp. 3036–3045

International Conference on Artificial Intelligence and Statistics (AISTATS)
Online, USA

Subject categories

Other Computer and Information Science

Computational Mathematics

Probability Theory and Statistics

Signal Processing

Computer Science

More information

Last updated

2023-07-04