Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation
Paper in proceedings, 2020

Reinforcement learning (RL) can be used to create a tactical decision-making agent for autonomous driving. However, previous approaches only output decisions and do not provide information about the agent's confidence in the recommended actions. This paper investigates how a Bayesian RL technique, based on an ensemble of neural networks with additional randomized prior functions (RPF), can be used to estimate the uncertainty of decisions in autonomous driving. A method for classifying whether or not an action should be considered safe is also introduced. The performance of the ensemble RPF method is evaluated by training an agent on a highway driving scenario. It is shown that the trained agent can estimate the uncertainty of its decisions and indicate an unacceptable level when the agent faces a situation that is far from the training distribution. Furthermore, within the training distribution, the ensemble RPF agent outperforms a standard Deep Q-Network agent. In this study, the estimated uncertainty is used to choose safe actions in unknown situations. However, the uncertainty information could also be used to identify situations that should be added to the training process.
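The core idea of the abstract, an ensemble where each member combines a trainable value function with a fixed randomized prior function, and where disagreement across members serves as an uncertainty estimate for classifying actions as safe, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: linear maps stand in for the neural networks, and the ensemble size `K`, prior scale `BETA`, and variance `threshold` are arbitrary placeholder values.

```python
import numpy as np

STATE_DIM, N_ACTIONS = 4, 3   # toy problem sizes (assumptions)
K, BETA = 10, 3.0             # ensemble size and prior scale (placeholders)

def make_linear(seed):
    """A random linear map standing in for a neural network."""
    return np.random.default_rng(seed).normal(size=(STATE_DIM, N_ACTIONS))

# Each ensemble member k has trainable weights f_k plus a frozen random prior p_k.
trainable = [make_linear(k) for k in range(K)]        # would be updated by DQN training
priors = [make_linear(100 + k) for k in range(K)]     # fixed, never updated

def member_q(state, k):
    # Member k's value estimate: Q_k(s, .) = f_k(s, .) + beta * p_k(s, .)
    return state @ trainable[k] + BETA * (state @ priors[k])

def q_with_uncertainty(state):
    # Ensemble mean gives the action values; variance across members
    # measures epistemic uncertainty (large far from the training data).
    qs = np.stack([member_q(state, k) for k in range(K)])  # shape (K, N_ACTIONS)
    return qs.mean(axis=0), qs.var(axis=0)

def safe_actions(state, threshold=2.0):
    # Classify an action as safe when the ensemble disagreement stays
    # below a chosen variance threshold (placeholder value).
    _, var_q = q_with_uncertainty(state)
    return var_q < threshold  # boolean mask over actions

state = np.random.default_rng(0).normal(size=STATE_DIM)
mean_q, var_q = q_with_uncertainty(state)
```

In use, the agent would pick the highest-value action among those flagged safe, falling back to a conservative default when no action passes the threshold, mirroring how the paper uses the uncertainty estimate to choose safe actions in unknown situations.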

Authors

Carl-Johan E Hoel

Chalmers, Mechanics and Maritime Sciences, Vehicle Engineering and Autonomous Systems

Volvo Group

Krister Wolff

Chalmers, Mechanics and Maritime Sciences, Vehicle Engineering and Autonomous Systems

Leo Laine

Chalmers, Mechanics and Maritime Sciences

Volvo Group

IEEE Intelligent Vehicles Symposium, Proceedings

Pages 1563-1569, Article no. 9304614

31st IEEE Intelligent Vehicles Symposium, IV 2020
Virtual, Las Vegas, USA

Subject categories

Other Computer and Information Science

Probability Theory and Statistics

Computer Science

DOI

10.1109/IV47402.2020.9304614

More information

Last updated

2021-07-05