Bayesian Reinforcement Learning via Deep, Sparse Sampling
Paper in proceedings, 2020

We address the problem of Bayesian reinforcement learning using efficient model-based online planning. We propose an optimism-free Bayes-adaptive algorithm that induces deeper and sparser exploration, with a theoretical bound on its performance relative to the Bayes-optimal policy and with lower computational complexity. The main novelty is the use of a candidate policy generator that produces long-term options in the planning tree (over beliefs), which allows us to build much sparser and deeper trees. Experimental results on several environments show that, in comparison to the state-of-the-art, our algorithm is both computationally more efficient and obtains significantly higher reward over time in discrete environments.
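To make the key idea concrete, here is a minimal, hypothetical sketch of belief-space planning with a candidate policy generator: rather than branching on every action at every tree node, a few candidate policies are generated by solving MDPs sampled from the posterior, and each is followed for a long horizon, which keeps the lookahead sparse and deep. All names (BeliefState, candidate_policies, plan), the Dirichlet prior, and the Thompson-sampling-style generator are illustrative assumptions, not the authors' actual algorithm or API.

```python
# Illustrative sketch only: sparse, deep lookahead over beliefs via a
# candidate policy generator. Hypothetical design, not the paper's code.
import numpy as np

class BeliefState:
    """Dirichlet posterior over the transition model of a discrete MDP."""
    def __init__(self, n_states, n_actions, counts=None):
        self.nS, self.nA = n_states, n_actions
        self.counts = (np.ones((n_states, n_actions, n_states))
                       if counts is None else counts)

    def sample_mdp(self, rng):
        """Draw one transition tensor P[s, a, s'] from the posterior."""
        P = np.empty_like(self.counts)
        for s in range(self.nS):
            for a in range(self.nA):
                P[s, a] = rng.dirichlet(self.counts[s, a])
        return P

    def update(self, s, a, s_next):
        """Posterior update after observing one transition."""
        counts = self.counts.copy()
        counts[s, a, s_next] += 1.0
        return BeliefState(self.nS, self.nA, counts)

def value_iteration(P, R, gamma, iters=200):
    """Greedy policy for one sampled MDP (reward R[s, a])."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        Q = R + gamma * (P @ V)   # Q[s, a]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def candidate_policies(belief, R, gamma, k, rng):
    """Candidate generator: solve k posterior-sampled MDPs
    (a Thompson-sampling-style choice, assumed here for brevity)."""
    return [value_iteration(belief.sample_mdp(rng), R, gamma)
            for _ in range(k)]

def evaluate(belief, policy, R, gamma, s0, horizon, n_rollouts, rng):
    """Monte-Carlo value of following one policy for a long horizon.
    Committing to whole policies instead of branching on every action
    is what keeps the lookahead tree sparse and deep."""
    total = 0.0
    for _ in range(n_rollouts):
        P = belief.sample_mdp(rng)          # one model per rollout
        s, ret, disc = s0, 0.0, 1.0
        for _ in range(horizon):
            a = policy[s]
            ret += disc * R[s, a]
            s = rng.choice(belief.nS, p=P[s, a])
            disc *= gamma
        total += ret
    return total / n_rollouts

def plan(belief, R, gamma, s0, rng, k=4, horizon=30, n_rollouts=8):
    """Return the first action of the best-scoring candidate policy."""
    pis = candidate_policies(belief, R, gamma, k, rng)
    vals = [evaluate(belief, pi, R, gamma, s0, horizon, n_rollouts, rng)
            for pi in pis]
    return pis[int(np.argmax(vals))][s0]
```

For example, on a small chain MDP with R = np.zeros((3, 2)) and R[2, :] = 1.0, calling plan(BeliefState(3, 2), R, 0.95, s0=0, rng=np.random.default_rng(0)) selects the first action of whichever candidate policy scores best under the current belief; after acting, belief.update(s, a, s_next) refines the posterior before the next planning step.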

Authors

Divya Grover

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Debabrota Basu

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Christos Dimitrakakis

University of Oslo

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Proceedings of Machine Learning Research

2640-3498 (eISSN)

Vol. 108, pp. 3036-3045

International Conference on Artificial Intelligence and Statistics
Online, USA

Subject Categories

Other Computer and Information Science

Computational Mathematics

Probability Theory and Statistics

Signal Processing

Computer Science

More information

Latest update

7/4/2023