Engineering Trustworthy Self-Adaptive Autonomous Systems
Licentiate thesis, 2018

Autonomous Systems (AS) are becoming ubiquitous in our society. Examples include autonomous vehicles, unmanned aerial vehicles (UAV), autonomous trading systems, self-managing telecom networks, and smart factories. Autonomous systems rely on a continuous interaction with the environment in which they are deployed, and more often than not this environment is dynamic and partially unknown. AS must be able to make decisions autonomously at run-time, even in the presence of uncertainty. Software is the main enabler of AS: it allows them to self-adapt in response to changes in the environment and to evolve through the deployment of new features.

Traditionally, software development techniques are based on a complete description, at design time, of how the system must behave under different environmental conditions. This is no longer effective, since the system has to explore and learn from the environment in which it operates even after deployment. Reinforcement learning (RL) algorithms discover policies that can lead AS to achieve their goals in a dynamic and unknown environment. The developer no longer specifies how the system should act in every possible situation; instead, the RL algorithm learns an optimal behaviour by trial and error. Once trained, the AS is capable of making decisions and performing actions autonomously while still learning from the environment. These systems are becoming increasingly powerful, yet this flexibility comes at a cost: the learned policy does not necessarily guarantee safety or the achievement of the goals.
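
To make the trial-and-error idea concrete, the sketch below shows a minimal tabular Q-learning loop in Python. It is an illustration only, not the method of the thesis: the environment interface (reset/step/actions), the hyperparameter values, and all names are assumptions made for this example.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: learn a policy by trial and error.

    `env` is assumed to expose reset() -> state,
    step(action) -> (next_state, reward, done), and a list `env.actions`.
    """
    q = defaultdict(float)  # Q-values indexed by (state, action)

    def greedy(state):
        # Best known action for a state under the current Q-values.
        return max(env.actions, key=lambda a: q[(state, a)])

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration: mostly exploit, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = greedy(state)

            next_state, reward, done = env.step(action)

            # Move the Q-value toward the reward plus the discounted
            # best value of the next state (zero if the episode ended).
            future = 0.0 if done else gamma * max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + future - q[(state, action)])
            state = next_state

    # The learned policy: act greedily with respect to the Q-values.
    return greedy
```

Note that nothing in this loop constrains which actions may be tried during exploration, which is exactly why additional assurance mechanisms are needed.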

This thesis explores the problem of building trustworthy autonomous systems from different angles. First, we identify the state of the art and the challenges of building autonomous systems, with a particular focus on autonomous vehicles. Then, we analyse how current formal verification approaches can provide assurances in a System of Systems scenario. Finally, we propose methods that combine formal verification with reinforcement learning agents to address two major challenges: how to trust that an autonomous system will achieve its goals, and how to ensure that its behaviour is safe.
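
As an illustration of how a run-time monitor can be combined with a learning agent, the sketch below wraps an agent's action selection with a safety check that overrides unsafe proposals. The `is_safe` predicate, the `fallback` action, and the class name are hypothetical placeholders standing in for whatever formally specified property and safe default are available; this is a sketch of the general shielding idea, not the thesis implementation.

```python
class ShieldedAgent:
    """Wraps an agent so that every proposed action is checked by a
    run-time monitor before being executed.

    `is_safe(state, action)` stands in for a check derived from a
    formal safety specification; `fallback(state)` returns a
    known-safe action. Both are assumptions made for this sketch.
    """

    def __init__(self, agent, is_safe, fallback):
        self.agent = agent
        self.is_safe = is_safe
        self.fallback = fallback

    def act(self, state):
        action = self.agent.act(state)   # action proposed by the learned policy
        if self.is_safe(state, action):
            return action
        return self.fallback(state)      # override unsafe proposals at run-time
```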

Machine Learning

Monitoring and enforcement

Automotive

System Trustworthiness

Autonomous Systems

Runtime verification

Formal Verification

Room 520, Jupiter Building
Opponent: Hans Hansson, Mälardalen University, Sweden

Author

Piergiuseppe Mallozzi

Chalmers, Computer Science and Engineering, Software Engineering

P. Mallozzi, P. Pelliccione, A. Knauss, C. Berger, and N. Mohammadiha, “Autonomous vehicles: state of the art, future trends, and challenges”, book chapter in Automotive Software Engineering: State of the Art and Future Trends, Springer, 2017

MoVEMo - A structured approach for engineering reward functions

Proceedings of the 2nd IEEE International Conference on Robotic Computing (IRC 2018), Vol. 2018 (2018), p. 250-257

Paper in proceedings

Formal verification of the on-the-fly vehicle platooning protocol

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 9823 (2016), p. 62-75

Paper in proceedings

Keeping intelligence under control

Proceedings of the 1st International Workshop on Software Engineering for Cognitive Services (2018), p. 37-40

Paper in proceedings

P. Mallozzi, E. Castellano, P. Pelliccione, and G. Schneider, “Using run-time monitoring to address safe exploration for reinforcement learning agents”

Subject categories

Computer Science

Computer Systems

Publisher

Chalmers

More information

Last updated

2018-11-14