On Provably Correct Decision-Making for Automated Driving
Licentiate thesis, 2020

The introduction of driving automation in road vehicles has the potential to reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings other benefits, such as providing independent mobility for people who cannot or should not drive. Many different hardware and software components (e.g. sensing, decision-making, actuation, and control) interact to solve the autonomous driving task. Correctness of such automated driving systems is crucial, as incorrect behaviour may have catastrophic consequences.

Autonomous vehicles operate in complex and dynamic environments, which requires decision-making and planning at several levels. The aim of the decision-making components in these systems is to make safe decisions at all times, and the safety verification of these components is crucial for the commercial deployment of fully autonomous vehicles. Testing for safety is expensive and impractical, and can never guarantee the absence of errors. In contrast, formal methods, techniques that use rigorous mathematical models to build hardware and software systems, can provide a mathematical proof of the correctness of the system.
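To make this contrast concrete, the following is a minimal, purely illustrative sketch (not from the thesis): an exhaustive breadth-first exploration of a toy, hypothetical following-distance model. Unlike a test suite, which samples some executions, the exploration either proves a safety invariant over all reachable states or returns a counterexample trace. The model, its parameters, and all names are invented for illustration.

```python
from collections import deque

SPEED_MAX = 3   # hypothetical discretised ego speed bound
GAP_MAX = 10    # hypothetical bound that keeps the state space finite

def successors(state):
    """All possible next states: the ego vehicle brakes or accelerates,
    while the lead vehicle nondeterministically brakes or cruises."""
    gap, v = state
    for ego_brakes in (True, False):
        for lead_brakes in (True, False):
            nv = max(v - 1, 0) if ego_brakes else min(v + 1, SPEED_MAX)
            lead_v = 0 if lead_brakes else 1
            yield (min(max(gap - nv + lead_v, 0), GAP_MAX), nv)

def safe(state):
    return state[0] > 0  # safety invariant: the gap is never fully closed

def check(initial):
    """Exhaustive breadth-first reachability: return a counterexample
    trace to an unsafe state, or None if the invariant holds in every
    reachable state; a guarantee no finite test suite can provide."""
    frontier = deque([(initial, (initial,))])
    seen = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if not safe(state):
            return trace
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + (nxt,)))
    return None

print(check((5, 1)))  # a trace ending in gap == 0, or None if safe
```

Here the unconstrained controller is unsafe, so the check prints a concrete trace leading to a closed gap; this diagnostic counterexample, alongside the exhaustive guarantee in the safe case, is what distinguishes model checking from testing.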


The focus of this thesis is to address some of the challenges in the safety verification of decision-making in automated driving systems. A central question here is how to establish formal verification as an efficient tool for automated driving software development.

A key finding is the need for an integrated formal approach to prove correctness and to provide a complete safety argument. This thesis provides insights into how three formal verification approaches, namely supervisory control theory, model checking, and deductive verification, differ in their application to automated driving, and identifies the challenges associated with each method. It further identifies the need for more rigour in the requirement refinement process and presents a formal model-based safety analysis approach as one possible solution. To address the challenges of the manual modelling process, it proposes automatically learning formal models directly from code.
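As a toy illustration of that last idea, the sketch below is hypothetical and not the thesis tooling (which studies learning models from industrial code): it extracts a finite transition model from an executable controller by driving it with every input from every reachable mode, yielding an automaton that a model checker could then consume.

```python
INPUTS = ("clear", "obstacle")

def controller_step(mode, sensor):
    """Toy, hypothetical decision logic: escalate from cruising to
    braking to emergency braking while an obstacle persists, and
    recover to cruising once the road is clear."""
    if sensor == "obstacle":
        return "emergency" if mode in ("braking", "emergency") else "braking"
    return "cruising"

def learn_model(step, initial, inputs):
    """Build a finite transition model of executable code by applying
    every input in every reachable mode; the result is a formal model
    (a finite automaton) suitable for formal verification."""
    transitions, frontier, seen = {}, [initial], {initial}
    while frontier:
        mode = frontier.pop()
        for inp in inputs:
            nxt = step(mode, inp)
            transitions[(mode, inp)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return transitions

model = learn_model(controller_step, "cruising", INPUTS)
for (mode, inp), nxt in sorted(model.items()):
    print(f"{mode} --{inp}--> {nxt}")
```

The sketch uses exhaustive simulation of a white-box controller with a tiny finite mode space; scaling the idea to real decision-making software is precisely where the challenges addressed in the thesis arise.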

Keywords: deductive verification, formal methods, supervisory control theory, formal verification, automated driving, model checking, hybrid systems, decision-making

Opponent: Cristina Seceleanu, Mälardalen University, Sweden

Author

Yuvaraj Selvaraj

Chalmers, Electrical Engineering, Systems and control

Included publications

Verification of Decision Making Software in an Autonomous Vehicle: An Industrial Case Study

Lecture Notes in Computer Science, Vol. 11687 (2019), pp. 143–159

Paper in proceeding

Supervisory Control Theory in System Safety Analysis

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 12235 (2020), pp. 9–22

Paper in proceeding

Automatically Learning Formal Models: An Industrial Case from Autonomous Driving Development

Proceedings of the ACM/IEEE Joint Conference on Digital Libraries (2020)

Paper in proceeding

Research project

Automatically Assessing Correctness of Autonomous Vehicles (Auto-CAV)

VINNOVA (2017-05519), 2018-03-01 -- 2021-12-31.

Subject Categories

Software Engineering

Embedded Systems

Robotics

Control Engineering

Computer Systems

Publisher

Chalmers

More information

Latest update: 11/26/2021