Keeping intelligence under control
Paper in proceedings, 2018
Machine learning techniques make it possible to create systems that learn how to execute a set of actions to achieve a desired goal. When a change occurs, these techniques allow the system to autonomously learn new policies and strategies for action execution. This flexibility comes at a cost: the developer no longer has full control over the system's behaviour. Thus, there is no way to guarantee that the system will not violate important properties, such as safety-critical ones.
To overcome this issue, we believe that machine learning techniques should be combined with suitable reasoning mechanisms that ensure the decisions taken by the machine learning algorithm do not violate safety-critical requirements. This paper proposes an approach that combines machine learning with run-time monitoring to detect violations of system invariants in the action-execution policies.
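The abstract itself contains no code; the following is a minimal, illustrative Python sketch of the general idea of guarding a learned policy with a run-time monitor that checks system invariants before an action is executed. All names here (SafetyMonitor, monitored_step, the speed_limit invariant, the "brake" fallback) are hypothetical assumptions for illustration, not the paper's actual interface.

from typing import Callable, Dict, List

State = Dict[str, float]
Action = str

class SafetyMonitor:
    """Checks candidate actions against system invariants at run time."""

    def __init__(self, invariants: List[Callable[[State, Action], bool]]):
        self.invariants = invariants

    def is_safe(self, state: State, action: Action) -> bool:
        # An action is allowed only if every invariant holds in this state.
        return all(inv(state, action) for inv in self.invariants)

def monitored_step(state: State,
                   policy: Callable[[State], Action],
                   monitor: SafetyMonitor,
                   fallback: Action) -> Action:
    """Execute the learned policy's chosen action only if the monitor accepts it."""
    action = policy(state)
    if monitor.is_safe(state, action):
        return action
    # A violation was detected; overriding with a known-safe fallback is one
    # possible response once the monitor flags the action.
    return fallback

# Hypothetical invariant: never accelerate once the speed limit is reached.
def speed_limit(state: State, action: Action) -> bool:
    return not (action == "accelerate" and state["speed"] >= 30.0)

monitor = SafetyMonitor([speed_limit])
chosen = monitored_step({"speed": 31.0}, lambda s: "accelerate", monitor, "brake")
print(chosen)  # prints "brake": the unsafe learned action was intercepted

In this sketch the monitor sits between the learned policy and the actuators, so invariant checks happen on every step regardless of how the policy was learned or later retrained.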
Machine learning
Safety-critical
Autonomous systems
Runtime verification
Reinforcement learning
Authors
Piergiuseppe Mallozzi
Chalmers, Computer Science and Engineering (Chalmers), Software Engineering (Chalmers)
Patrizio Pelliccione
University of Gothenburg
Claudio Menghi
University of Gothenburg
Proceedings - International Conference on Software Engineering
0270-5257 (ISSN)
37-40
978-1-4503-5740-1 (ISBN)
Göteborg, Sweden
Areas of Advance
Information and Communication Technology
Subject Categories
Embedded Systems
Computer Science
Computer Systems
DOI
10.1145/3195555.3195558