Designing Trustworthy Autonomous Systems
Doctoral thesis, 2021

The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as: i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that the system's behaviors never violate its requirements; iii) maximizing the reuse of available components and subsystems in order to cope with design complexity; and iv) ensuring correct coordination of the system with its environment.

Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that leverages existing tools and methodologies to practically support the analysis and design of autonomous systems is still missing.

This thesis explores the problem of building trustworthy autonomous systems from different angles. We analyze how current formal verification approaches can provide assurances: 1) to the requirements corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements do not get misinterpreted; 3) to the execution of the system, by run-time monitoring and enforcement of certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviors, by automatically synthesizing a correct policy.
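
The contract-based view in point 1 can be illustrated with a minimal, purely propositional sketch. This is not the tooling developed in the thesis; the Contract class, the refinement check, and the example variables below are hypothetical. Assumptions and guarantees are predicates over Boolean variables, a contract is in conflict when no behavior satisfies it, and refinement weakens assumptions while strengthening guarantees.

from itertools import product

class Contract:
    def __init__(self, variables, assumption, guarantee):
        self.variables = variables        # ordered list of Boolean variable names
        self.assumption = assumption      # predicate: valuation dict -> bool
        self.guarantee = guarantee        # predicate: valuation dict -> bool

    def _valuations(self):
        for bits in product([False, True], repeat=len(self.variables)):
            yield dict(zip(self.variables, bits))

    def is_consistent(self):
        # A contract is in conflict if no valuation satisfies "assumption implies guarantee".
        return any((not self.assumption(v)) or self.guarantee(v) for v in self._valuations())

    def refines(self, other):
        # Simplified propositional refinement: weaker assumptions, stronger guarantees.
        return all(
            ((not other.assumption(v)) or self.assumption(v)) and
            ((not self.guarantee(v)) or other.guarantee(v))
            for v in self._valuations()
        )

# Hypothetical mission-level contract: whenever an obstacle is detected, the robot stops.
mission = Contract(
    ["obstacle", "stopped"],
    assumption=lambda v: True,
    guarantee=lambda v: (not v["obstacle"]) or v["stopped"],
)

# Candidate implementation contract: the robot always stops (a stronger guarantee).
cautious = Contract(
    ["obstacle", "stopped"],
    assumption=lambda v: True,
    guarantee=lambda v: v["stopped"],
)

# Conflicting contract: an unsatisfiable guarantee under a trivially true assumption.
conflict = Contract(["obstacle"], assumption=lambda v: True, guarantee=lambda v: False)

print(cautious.refines(mission))   # True
print(conflict.is_consistent())    # False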

System Trustworthiness

Reactive Synthesis

Monitoring and Enforcement

Assume-Guarantee Contracts

Runtime Verification

Autonomous Systems

Formal Verification

Reinforcement Learning

Author

Piergiuseppe Mallozzi

Chalmers, Computer Science and Engineering, Software Engineering

Formal verification of the on-the-fly vehicle platooning protocol

Lecture Notes in Computer Science, Vol. 9823 (2016), pp. 62-75

Paper in proceeding

MoVEMo - A structured approach for engineering reward functions

Proceedings of the 2nd IEEE International Conference on Robotic Computing, IRC 2018 (2018), pp. 250-257

Paper in proceeding

A runtime monitoring framework to enforce invariants on reinforcement learning agents exploring complex environments

Proceedings of the 2019 IEEE/ACM 2nd International Workshop on Robotics Software Engineering, RoSE 2019 (2019), pp. 5-12

Paper in proceeding

CROME: Contract-Based Robotic Mission Specification

2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design, MEMOCODE 2020 (2020)

Paper in proceeding

Incremental Refinement of Goal Models with Contracts

Lecture Notes in Computer Science, Vol. 12818 (2021), pp. 35-50

Paper in proceeding

A Framework for Specifying and Realizing Correct-by-Construction Contextual Robotic Missions using Contracts


Autonomous vehicles, unmanned aerial vehicles (UAVs), smart factories, and service robots (i.e., robots that perform useful tasks for humans) can all be considered autonomous systems. They are increasingly used in various domains, such as healthcare, logistics, telepresence, infrastructure maintenance, education, domestic tasks, and entertainment.
Autonomous systems are becoming ubiquitous in our society and, in the near future, we will witness systems exhibiting ever higher levels of autonomy, which will place new demands on their design and engineering.

Trustworthiness aims at establishing a degree of trust that the system is doing what it is supposed to do.
Designing trustworthy autonomous systems is challenging for several reasons, such as capturing and modeling system requirements, formulating a system specification, dealing with unknown environment assumptions, and ultimately producing a system that always satisfies its specification.

In this thesis, we explore the different challenges that can jeopardize the trustworthiness of the system and provide theoretical and practical solutions that use formal methods effectively across the different aspects of system design.
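
As a rough illustration of the runtime monitoring and enforcement idea mentioned above, the following Python sketch wraps an agent's chosen action with an invariant check and overrides unsafe actions with a fallback. The SafetyMonitor class, the invariant, and the action names are hypothetical and do not reflect the actual framework developed in the thesis.

from typing import Any, Callable

class SafetyMonitor:
    def __init__(self, invariant: Callable[[Any, Any], bool], fallback: Any):
        self.invariant = invariant   # (state, action) -> True if the action is safe
        self.fallback = fallback     # action used when the invariant would be violated
        self.violations = 0          # counts how often enforcement kicked in

    def enforce(self, state, action):
        # Return the proposed action if it preserves the invariant, else the fallback.
        if self.invariant(state, action):
            return action
        self.violations += 1
        return self.fallback

# Hypothetical invariant: never drive forward when an obstacle is closer than 1 m.
monitor = SafetyMonitor(
    invariant=lambda s, a: not (a == "forward" and s["distance_to_obstacle"] < 1.0),
    fallback="stop",
)

state = {"distance_to_obstacle": 0.4}
proposed_action = "forward"                     # e.g. the action chosen by an RL agent
print(monitor.enforce(state, proposed_action))  # stop
print(monitor.violations)                       # 1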

WASP SAS: Structuring data for continuous processing and ML systems

Wallenberg AI, Autonomous Systems and Software Program, 2018-01-01 -- 2023-01-01.

Areas of Advance

Information and Communication Technology

Subject Categories

Computer Science

ISBN

978-91-7905-506-6

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 4973

Publisher

Chalmers

Room 473, Jupiter Building

Online

Opponent: Cristina Seceleanu, Mälardalen University, Sweden
