Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer
Paper in proceedings, 2024

In this paper, we study the problem of transferring available Markov Decision Process (MDP) models to learn and plan efficiently in an unknown but similar MDP. We refer to this as the Model Transfer Reinforcement Learning (MTRL) problem. First, we formulate MTRL for discrete MDPs and Linear Quadratic Regulators (LQRs) with continuous states and actions. Then, we propose a generic two-stage algorithm, MLEMTRL, to address the MTRL problem in both discrete and continuous settings. In the first stage, MLEMTRL uses a constrained Maximum Likelihood Estimation (MLE)-based approach to estimate the target MDP model using a set of known MDP models. In the second stage, using the estimated target MDP model, MLEMTRL deploys a model-based planning algorithm appropriate for the MDP class. Theoretically, we prove worst-case regret bounds for MLEMTRL in both the realisable and non-realisable settings. We empirically demonstrate that MLEMTRL enables faster learning in new MDPs than learning from scratch and achieves near-optimal performance, depending on the similarity between the available MDPs and the target MDP.
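The first stage described above can be illustrated with a minimal sketch. The assumption here (not stated in the abstract) is that the constrained MLE estimates the target transition model as a convex combination of the known source models, with the mixture weights fitted by expectation-maximisation on observed target transitions; the paper's actual estimator and constraint set may differ. All function and variable names are hypothetical.

```python
import numpy as np

def estimate_target_model(source_models, transitions, n_iters=100):
    """Sketch of an MLE-style stage 1: fit simplex weights w over known
    source MDP models so the mixture best explains target transitions.

    source_models: list of K arrays P_k[s, a, s'] (rows are distributions)
    transitions:   list of observed (s, a, s_next) tuples from the target MDP
    returns:       (weights w, estimated target model sum_k w_k P_k)
    """
    P = np.stack(source_models)                  # (K, S, A, S)
    idx = np.array(transitions)                  # (N, 3)
    # Likelihood of each observed transition under each source model: (K, N)
    L = P[:, idx[:, 0], idx[:, 1], idx[:, 2]]
    w = np.full(len(source_models), 1.0 / len(source_models))
    for _ in range(n_iters):
        # E-step: responsibility of model k for each transition
        resp = w[:, None] * L
        resp /= np.maximum(resp.sum(axis=0, keepdims=True), 1e-12)
        # M-step: weights stay on the probability simplex by construction
        w = resp.mean(axis=1)
    return w, np.tensordot(w, P, axes=1)
```

Stage 2 would then hand the estimated model to any model-based planner suited to the MDP class (e.g. value iteration for discrete MDPs, a Riccati solver for LQRs).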

Linear Quadratic Regulator

Maximum Likelihood Estimation

Transfer Learning

Reinforcement Learning

Authors

Hannes Eriksson

Chalmers, Computer Science and Engineering, Data Science and AI

Tommy Tram

Chalmers, Electrical Engineering, Systems and Control

Debabrota Basu

Chalmers, Computer Science and Engineering, Data Science

Mina Alibeigi

Zenseact AB

Christos Dimitrakakis

Chalmers, Computer Science and Engineering, Data Science and AI

Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS

1548-8403 (ISSN) 1558-2914 (eISSN)

Vol. 2024, pp. 516-524
979-8-4007-0486-4 (ISBN)

23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024
Auckland, New Zealand

Areas of Advance

Information and Communication Technology

Subject Categories

Probability Theory and Statistics

Control Engineering

Other Electrical Engineering and Electronics

DOI

10.48550/arXiv.2302.09273

More information

Last updated

2024-06-28