Keeping intelligence under control
Paper in proceedings, 2018
Machine learning techniques allow the creation of systems that learn how to execute a set of actions to achieve a desired goal. When a change occurs, machine learning techniques allow the system to autonomously learn new policies and strategies for action execution. This flexibility comes at a cost: the developer no longer has full control over the system's behaviour. Thus, there is no way to guarantee that the system will not violate important properties, such as safety-critical properties.
To overcome this issue, we believe that machine learning techniques should be combined with suitable reasoning mechanisms that ensure the decisions taken by the machine learning algorithm do not violate safety-critical requirements. This paper proposes an approach that combines machine learning with run-time monitoring to detect violations of system invariants in the action execution policies.
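The general idea can be pictured as a monitor interposed between the learned policy and the actuators: every proposed action is checked against the system invariants before execution, and unsafe actions are overridden. The sketch below is only illustrative, not the paper's implementation; the speed-limit invariant and all names (SafetyMonitor, is_safe, learned_policy, fallback_action) are invented for the example.

```python
import random

# Hypothetical invariant for illustration: the controlled system must
# never exceed a fixed speed limit. Not taken from the paper.
MAX_SPEED = 10.0

def is_safe(state, action):
    """Invariant check: reject accelerations that would exceed the limit."""
    return state["speed"] + action <= MAX_SPEED

class SafetyMonitor:
    """Runtime monitor that vets a learned policy's actions against invariants."""
    def __init__(self, policy, fallback_action=0.0):
        self.policy = policy
        self.fallback_action = fallback_action  # known-safe action to fall back on
        self.violations = 0                     # attempted violations observed so far

    def act(self, state):
        action = self.policy(state)   # action proposed by the learned policy
        if is_safe(state, action):
            return action
        self.violations += 1          # record the detected violation
        return self.fallback_action   # override with the safe fallback

# Stand-in "learned" policy that sometimes proposes unsafe accelerations.
def learned_policy(state):
    return random.uniform(-2.0, 5.0)

monitor = SafetyMonitor(learned_policy)
state = {"speed": 8.0}
print(monitor.act(state))  # always an action that respects the invariant
```

One plausible way to close the loop, in the spirit of the abstract, is to feed the monitor's violation count back into learning, for example as a penalty signal, so the policy is steered away from invariant-violating behaviour over time.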
Machine learning
Safety-critical
Autonomous systems
Runtime verification
Reinforcement learning
Authors
Piergiuseppe Mallozzi
Chalmers, Computer Science and Engineering, Software Engineering
Patrizio Pelliccione
University of Gothenburg
Claudio Menghi
University of Gothenburg
Proceedings - International Conference on Software Engineering
0270-5257 (ISSN)
37-40
978-1-4503-5740-1 (ISBN)
Gothenburg, Sweden
Areas of Advance
Information and Communication Technology
Subject Categories
Embedded Systems
Computer Science
Computer Systems
DOI
10.1145/3195555.3195558
ISBN
978-1-4503-5740-1