Robustness During Learning, Interaction and Adaptation for Autonomous Driving
Doctoral thesis, 2023

In a sequential decision-making process, it is imperative to consider the risk of making incorrect decisions throughout the whole process, since not every mistake can be remedied. This is particularly important when the consequences are potentially catastrophic. In this work, we develop robust decision-making processes that perform appropriate risk assessments where needed, so that plans can be made that avoid unacceptable consequences. In contrast to traditional techniques for decision-making under uncertainty, which aim to maximise performance in expectation, we choose to value other aspects of the distribution of outcomes. For instance, in an application such as autonomous driving, the chance of causing an accident might be small, yet an accident can be fatal. A risk-averse decision-maker may therefore modify the risk criterion to only consider, say, the 25% worst-case outcomes, yielding a more robust decision-making process. We propose frameworks for quantifying uncertainty in reinforcement learning and develop robust algorithms and theory that allow for risk-sensitive decision-making under uncertainty. Further, we study the interactions between multiple agents in autonomous systems, as well as ways to deploy decision-making processes in novel scenarios through adaptation.
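
As an illustration of the kind of risk criterion mentioned above (a minimal sketch, not code from the thesis), the example below estimates the conditional value-at-risk of a return distribution: instead of the mean over all sampled outcomes, only the worst 25% are averaged. The policies and numbers are purely hypothetical.

import numpy as np

def cvar(returns, alpha=0.25):
    """Average of the worst alpha-fraction of sampled returns.

    A risk-neutral agent would optimise returns.mean(); a risk-averse
    agent optimising CVaR focuses on the left tail of the distribution.
    """
    returns = np.sort(np.asarray(returns))          # worst outcomes first
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the worst alpha-tail
    return returns[:k].mean()

# Hypothetical example: two policies with the same mean return of 1.0.
rng = np.random.default_rng(0)
safe_policy = rng.normal(loc=1.0, scale=0.5, size=10_000)   # low variance
risky_policy = rng.normal(loc=1.0, scale=5.0, size=10_000)  # large losses possible

print(cvar(safe_policy))   # somewhat below the mean of 1.0
print(cvar(risky_policy))  # strongly negative: the worst 25% of outcomes are large losses

Both policies look identical to an expectation-maximising criterion; the CVaR criterion separates them by how badly things can go.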

Reinforcement Learning

Epistemic Uncertainty

Uncertainty Quantification

Machine Learning

Autonomous Driving

Author

Hannes Eriksson

Chalmers, Computer Science and Engineering (Chalmers), Data Science and AI

Epistemic risk-sensitive reinforcement learning

ESANN 2020 - Proceedings, 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (2020), p. 339-344

Paper in proceeding

Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning

Proceedings of Machine Learning Research, Vol. 137 (2020), p. 43-52

Paper in proceeding

SENTINEL: Taming Uncertainty with Ensemble-based Distributional Reinforcement Learning

Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022, Vol. 180 (2022), p. 631-640

Paper in proceeding

Minimax-Bayes Reinforcement Learning

Proceedings of Machine Learning Research, Vol. 206 (2023), p. 7511-7527

Paper in proceeding

Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer

Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, Vol. 2024 (2024), p. 516-524

Paper in proceeding

Teaching Autonomous Vehicles How to Drive Safely

Many of us have at some point learned how to drive a car, and we can all reflect on what made that process challenging. What we all remember is that it was a process of trial and error. Perhaps we first started out driving in a parking lot, and over time we were able to experience more and more difficult scenarios. At all points in time, our driving instructor was there to make sure we could learn safely. If we were put into a situation we could not handle, the job of the instructor was to intervene. As we became more and more proficient at driving, the instructor could place more trust in us. We wish to create this same feedback loop for autonomous agents. Instead of a person learning how to drive, we have an agent in the same situation, and we take the position of the instructor, or designer, of the agent. How can we replicate this safe learning process for the agent? After all, the agent has no grasp of what it does not know. By designing a more cautious agent, we can limit its risk-taking behaviour when it has the least experience. Only when we know the agent will not take excessive risks during its learning process can we deploy it in the real world. The agent needs to be wary of other road users and objects in the environment and must not cause accidents.
In our work, we provide novel frameworks for the design of robust learning agents.
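
As a rough illustration of this cautious behaviour (an assumption-laden sketch, not the exact algorithms developed in the thesis), an agent can maintain an ensemble of value estimates and act on a pessimistic aggregate, so that actions whose value is still uncertain are treated conservatively. All names and numbers below are hypothetical.

import numpy as np

def cautious_action(q_ensemble, state, quantile=0.25):
    """Pick the action with the best pessimistic (low-quantile) value.

    q_ensemble: a list of value functions, e.g. trained on bootstrapped data;
    disagreement between them serves as a proxy for epistemic uncertainty.
    """
    # Stack per-member action values for this state: shape (members, actions).
    q_values = np.stack([q(state) for q in q_ensemble])
    # Pessimistic estimate per action: a low quantile across ensemble members.
    pessimistic = np.quantile(q_values, quantile, axis=0)
    return int(np.argmax(pessimistic))

# Hypothetical toy ensemble: three members that agree on action 0 but
# disagree strongly on action 1 (the agent has little experience with it).
ensemble = [
    lambda s: np.array([1.0, 3.0]),
    lambda s: np.array([1.1, -4.0]),
    lambda s: np.array([0.9, 0.5]),
]
print(cautious_action(ensemble, state=None))  # -> 0, the well-understood action

A purely optimistic or mean-based agent might pick the poorly understood action; the pessimistic aggregate keeps risk-taking low precisely where experience is scarce.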

Areas of Advance

Information and Communication Technology

Subject Categories

Computer and Information Science

Computer Vision and Robotics (Autonomous Systems)

Infrastructure

C3SE (Chalmers Centre for Computational Science and Engineering)

ISBN

978-91-7905-904-0

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5370

Publisher

Chalmers

HC3, Hörsalsvägen 16 (Online password 354213)

Online

Opponent: Aviv Tamar, Technion – Israel Institute of Technology, Israel

More information

Latest update

8/8/2023