Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning
Paper in proceedings, 2020

Bayesian Reinforcement Learning (BRL) offers a decision-theoretic solution to the reinforcement learning problem. While model-based BRL algorithms focus on maintaining a posterior distribution over models, model-free BRL methods try to estimate value function distributions directly, but make strong implicit assumptions or approximations. We describe a novel Bayesian framework, inferential induction, for correctly inferring value function distributions from data, which leads to a new family of BRL algorithms. We design an algorithm, Bayesian Backwards Induction (BBI), within this framework. We experimentally demonstrate that BBI is competitive with the state of the art. However, its advantage over existing model-free BRL methods is not as great as we had expected, particularly when the additional computational burden is taken into account.
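The sketch below is only an illustration of the general idea of obtaining a distribution over value functions from a posterior over models; it is not the authors' BBI algorithm. All names and parameters (n_states, n_actions, horizon, transition_counts, mean_rewards) are hypothetical, and the posterior/backwards-induction setup is a standard simplification assumed for the example.

    # Illustrative sketch (not the paper's BBI): sample MDPs from a Dirichlet
    # posterior over transitions and run finite-horizon backwards induction on
    # each sample to obtain an empirical distribution over value functions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, horizon = 5, 2, 10

    # Hypothetical sufficient statistics gathered from data:
    # Dirichlet counts for transitions; rewards assumed known for simplicity.
    transition_counts = np.ones((n_states, n_actions, n_states))  # Dirichlet(1) prior
    mean_rewards = rng.random((n_states, n_actions))

    def sample_value_function():
        """Draw one MDP from the posterior and compute its optimal value
        function by backwards induction over a finite horizon."""
        P = np.stack([
            [rng.dirichlet(transition_counts[s, a]) for a in range(n_actions)]
            for s in range(n_states)
        ])  # shape: (n_states, n_actions, n_states)
        V = np.zeros(n_states)
        for _ in range(horizon):
            Q = mean_rewards + P @ V   # Q[s, a] = r(s, a) + sum_s' P(s'|s, a) V(s')
            V = Q.max(axis=1)          # greedy backup
        return V

    # Empirical distribution over value functions induced by the posterior.
    value_samples = np.array([sample_value_function() for _ in range(1000)])
    print("posterior mean of V(s0):", value_samples[:, 0].mean())
    print("posterior std  of V(s0):", value_samples[:, 0].std())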

Author

Emilio Jorge

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Hannes Eriksson

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Zenseact AB

Christos Dimitrakakis

Chalmers, Computer Science and Engineering (Chalmers), Data Science

University of Oslo

Debabrota Basu

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Institut National de Recherche en Informatique et en Automatique (INRIA)

Divya Grover

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Proceedings of Machine Learning Research

2640-3498 (eISSN)

Vol. 137, pp. 43-52

"I Can't Believe It's Not Better!" at NeurIPS Workshops
Virtual

Infrastructure

C3SE (Chalmers Centre for Computational Science and Engineering)

Subject Categories

Bioinformatics (Computational Biology)

Probability Theory and Statistics

Computer Science

More information

Latest update

9/25/2023