Robust Linear Quadratic Reinforcement Learning by Filtering
Paper in proceedings, 2025

This paper investigates the robustness of linear quadratic reinforcement learning in the presence of unmodeled dynamics, and how performance can be improved by applying filtering. We examine both model-free and model-based approaches, and show that the model-free approach suffers greatly from unmodeled dynamics when no filtering is used. With filtering, however, both approaches achieve similar performance. We also conclude that model-based reinforcement learning has other notable advantages over the model-free approach, making it the preferable choice when unmodeled dynamics are present.

Reinforcement learning

robustness

filtering

optimal control

adaptive control

Authors

Ludvig Svedlund

Chalmers, Electrical Engineering, Systems and Control

Bengt Lennartson

Chalmers, Electrical Engineering, Systems and Control

IEEE International Conference on Automation Science and Engineering

2161-8070 (ISSN) 2161-8089 (eISSN)

2586-2593
9798331522469 (ISBN)

21st IEEE International Conference on Automation Science and Engineering, CASE 2025
Los Angeles, USA

Subject categories (SSIF 2025)

Probability theory and statistics

Control engineering

DOI

10.1109/CASE58245.2025.11164101

More information

Last updated

2025-10-17