Safety Proofs for Automated Driving using Formal Methods
Doctoral thesis, 2022

The introduction of driving automation in road vehicles has the potential to reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings other benefits, such as the possibility to provide independent mobility for people who cannot or should not drive. Correctness of such automated driving systems (ADSs) is crucial, as incorrect behaviour may have catastrophic consequences.

Automated vehicles operate in complex and dynamic environments, which requires decision-making and control at different levels. The aim of such decision-making is for the vehicle to be safe at all times. Verifying the safety of these systems is crucial for the commercial deployment of full autonomy in vehicles. Testing for safety is expensive, impractical, and can never guarantee the absence of errors. In contrast, formal methods, techniques that use rigorous mathematical models to specify and analyse hardware and software systems, can provide mathematical proofs of the correctness of those systems.
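
As an illustration of what such a proof can be about, the following is a minimal, textbook-style safety specification written in differential dynamic logic (dL), one of the formalisms that appears in this thesis. The formula is a standard stopping-distance example from the dL literature, not the exact model verified in the appended papers, and the symbols p (position), v (speed), B (braking capability), and m (obstacle position) are chosen here only for illustration. It states that if the vehicle currently has enough distance to stop, then braking never takes it past the obstacle:

    v^2 <= 2*B*(m - p) & B > 0 & v >= 0  ->  [{p' = v, v' = -B & v >= 0}] p <= m

A proof of this formula, for instance in a dL theorem prover such as KeYmaera X, is a machine-checked argument that the property holds for every possible behaviour of the model, not just for the finitely many scenarios a test campaign can cover.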

The focus of this thesis is to address some of the challenges in the safety verification of decision and control systems for automated driving. A central question here is how to establish formal methods as an efficient approach to develop a safe ADS. A key finding is the need for an integrated formal approach to prove the correctness of an ADS. Several formal methods to model, specify, and verify ADSs are evaluated. Insights into how the evaluated methods differ in various aspects, and into the challenges specific to each method, are discussed. To help developers and safety experts design safe ADSs, the thesis presents modelling guidelines and methods to identify and address subtle modelling errors that might inadvertently result in proving a faulty design to be safe. To address challenges in the manual modelling process, a systematic approach to automatically obtain formal models from ADS software is presented and validated by a proof of concept. Finally, a structured approach to using the different formal artifacts to provide evidence for the safety argument of an ADS is shown.
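
To illustrate the kind of subtle modelling error referred to above, consider the following generic dL example (a sketch constructed for this summary, not one of the specific errors analysed in the thesis). In dL, a differential equation can only evolve while its evolution domain constraint holds; if the constraint is already false in the initial state, the differential equation has no runs from that state, and a safety property of the form "in all runs, the vehicle does not pass the obstacle" becomes vacuously provable. The formula below is provable even though the modelled vehicle starts 5 units from the obstacle, travels at speed 10, and never brakes: the constraint v < 0 contradicts the initial speed and silently removes all behaviour from the model.

    p = 0 & v = 10 & m = 5  ->  [{p' = v, v' = 0 & v < 0}] p <= m

A proof of such a formula says nothing about the real system, which is why the thesis stresses modelling guidelines and checks that expose this kind of mistake before a proof is taken as evidence of safety.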

Keywords

safety argument

automata learning

supervisory control theory

automated driving

theorem proving

formal methods

formal verification

model checking

Room HC1, Hörsalsvägen 14
Opponent: Professor André Platzer, Karlsruhe Institute of Technology (KIT) and Carnegie Mellon University (CMU)

Author

Yuvaraj Selvaraj

Chalmers, Electrical Engineering, Systems and Control

Verification of Decision Making Software in an Autonomous Vehicle: An Industrial Case Study

Lecture Notes in Computer Science, Vol. 11687 (2019), pp. 143-159

Paper in proceeding

Automatically Learning Formal Models from Autonomous Driving Software

Electronics (Switzerland), Vol. 11 (2022)

Journal article

Formal Development of Safe Automated Driving Using Differential Dynamic Logic

IEEE Transactions on Intelligent Vehicles, Vol. 8 (2023), pp. 988-1000

Journal article

On How to Not Prove Faulty Controllers Safe in Differential Dynamic Logic

Lecture Notes in Computer Science, Vol. 13478 (2022), pp. 281-297

Paper in proceeding

Jonas Krook, Yuvaraj Selvaraj, Wolfgang Ahrendt, Martin Fabian. "A Formal-Methods Approach to Provide Evidence in Automated-Driving Safety Cases"

How can one establish that a claim is true beyond reasonable doubt? If the claim is that the sum of two even numbers is even, then convincing anyone of its truth is not difficult. A fundamental reason is that the claim can be expressed as a precise mathematical statement for which an argument can be made in the form of a mathematical proof. Any dispute about the validity of the argument can then always be unambiguously resolved.
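
To make the even-number example concrete, the claim can be written down and machine-checked in a proof assistant. The following is a minimal sketch in Lean 4, assuming a recent toolchain where the omega tactic for linear arithmetic is available; Lean is used here only to illustrate what a machine-checked proof looks like and is not a claim about the specific tools used in the thesis. Evenness of a number n is spelled out directly as the existence of some k with n = k + k:

    -- Claim: the sum of two even numbers is even.
    theorem even_add_even (m n : Nat)
        (hm : ∃ k, m = k + k) (hn : ∃ k, n = k + k) :
        ∃ k, m + n = k + k :=
      match hm, hn with
      | ⟨a, ha⟩, ⟨b, hb⟩ =>
        -- Witness a + b; the remaining equation m + n = (a + b) + (a + b)
        -- follows from ha and hb by linear arithmetic.
        ⟨a + b, by omega⟩

Once such a statement is accepted by the proof checker, there is little room left for dispute about its truth; the challenge addressed in this thesis is to do the same for claims about automated vehicles.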

Now, let us turn our attention to a more useful, or rather, more impactful claim: that automated vehicles will never cause a collision. Every attempt to provide a convincing argument for the truth of this claim is difficult, but also necessary. This thesis investigates how such claims about the safety of automated vehicles can be expressed as mathematical statements and proved in order to establish their truth. The investigation provides insights into how mathematical proofs can be used as evidence for the safety of automated vehicles, and also presents some crucial challenges in doing so.

Automatically Assessing Correctness of Autonomous Vehicles (Auto-CAV)

VINNOVA (2017-05519), 2018-03-01 -- 2021-12-31.

Areas of Advance

Transport

Subject Categories

Vehicle Engineering

Robotics

Control Engineering

Computer Systems

ISBN

978-91-7905-738-1

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5204

Publisher

Chalmers

