Engineering Trustworthy Self-Adaptive Autonomous Systems
Licentiate thesis, 2018
Autonomous Systems (AS) are becoming ubiquitous in our society. Examples include autonomous vehicles, unmanned aerial vehicles (UAVs), autonomous trading systems, self-managing telecom networks, and smart factories. AS continuously interact with the environment in which they are deployed, and more often than not this environment is dynamic and partially unknown. AS must therefore be able to make decisions autonomously at run-time, even in the presence of uncertainty. Software is the main enabler of AS: it allows them to self-adapt in response to changes in the environment and to evolve through the deployment of new features.
Traditionally, software development techniques rely on a complete design-time description of how the system must behave under different environmental conditions. This is no longer effective, since the system has to be able to explore and learn from the environment in which it operates even after deployment. Reinforcement learning (RL) algorithms discover policies that can lead an AS to achieve its goals in a dynamic and unknown environment. The developer no longer specifies how the system should act in every possible situation; instead, the RL algorithm converges towards an optimal behaviour by trial and error. Once trained, the AS is capable of making decisions and performing actions autonomously while still learning from the environment. These systems are becoming increasingly powerful, yet this flexibility comes at a cost: the learned policy does not necessarily guarantee safety or the achievement of the goals.
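To make the trial-and-error idea concrete, the following is a minimal tabular Q-learning sketch in Python. The grid-world, reward values, and hyper-parameters are illustrative assumptions, not taken from the thesis: the agent learns to reach a goal cell without the developer ever specifying the route.

```python
# Minimal tabular Q-learning sketch (illustrative; the grid-world, rewards,
# and hyper-parameters below are assumptions, not taken from the thesis).
import random

N = 5                                          # 5x5 grid; goal is the bottom-right cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration rate

# Q-table: maps (state, action index) to an estimated return.
Q = {((r, c), a): 0.0 for r in range(N) for c in range(N)
     for a in range(len(ACTIONS))}

def step(state, a):
    """Apply action a; bumping into a wall leaves the state unchanged."""
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    done = nxt == (N - 1, N - 1)
    return nxt, (1.0 if done else -0.01), done  # small step cost, goal reward

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: explore at random, otherwise exploit current estimates.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
        nxt, reward, done = step(state, a)
        # Q-learning update: move the estimate towards reward + discounted best next value.
        best_next = max(Q[(nxt, x)] for x in range(len(ACTIONS)))
        Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
        state = nxt

# The greedy policy extracted from Q now encodes behaviour no developer wrote explicitly.
```

Note that nothing in this loop constrains which states the agent visits while learning, which is exactly why the resulting policy carries no built-in safety guarantee.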
This thesis explores the problem of building trustworthy autonomous systems from different angles. Firstly, we have identified the state of the art and the challenges of building autonomous systems, with a particular focus on autonomous vehicles. Then, we have analysed how current formal verification approaches can provide assurances in a System of Systems scenario. Finally, we have proposed methods that combine formal verification with reinforcement learning agents to address two major challenges: how to trust that an autonomous system will achieve its goals, and how to ensure that its behaviour is safe.
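As an illustration of how formal verification can constrain a learning agent at run-time, the sketch below wraps a learned policy in a safety monitor (a "shield") that blocks unsafe actions. The safety predicate, state abstraction, action names, and fallback action are hypothetical examples of the general shielding idea, not the specific method proposed in the thesis.

```python
# Illustrative run-time shield: a monitor derived from a formal safety
# property filters the actions proposed by a learned policy. All names
# below are hypothetical examples, not the thesis's actual method.
from typing import Callable, List

State = int      # assumed abstraction of the system state
Action = str

def safe(state: State, action: Action) -> bool:
    """Stand-in for a formally verified predicate, e.g. one derived from
    an LTL safety property such as G(not collision)."""
    return not (state >= 8 and action == "accelerate")  # assumed hazard zone

def shield(policy: Callable[[State], List[Action]]) -> Callable[[State], Action]:
    """Wrap a learned policy: take its preferred action if safe,
    otherwise fall back to the best safe alternative."""
    def shielded(state: State) -> Action:
        for action in policy(state):      # actions ranked by the agent
            if safe(state, action):
                return action
        return "brake"                    # assumed always-safe default
    return shielded

def learned_policy(state: State) -> List[Action]:
    return ["accelerate", "keep_speed", "brake"]  # hypothetical ranking

act = shield(learned_policy)
print(act(3))   # -> "accelerate" (safe region, agent's choice passes through)
print(act(9))   # -> "keep_speed" ("accelerate" blocked by the monitor)
```

The design point this illustrates is the separation of concerns: the policy remains free to optimise for the goal, while the monitor, whose correctness can be established by formal verification, enforces safety regardless of what the agent has learned.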
Monitoring and enforcement.