Combining machine-learning with invariants assurance techniques for autonomous systems
Paper in proceedings, 2017
Autonomous systems are systems situated in some environment that are able to make decisions autonomously. The environment is not precisely known at design time and may be full of unforeseeable events that the autonomous system has to deal with at run-time. This raises two main problems. First, the uncertainty of the environment makes it difficult to model at design time all the behaviours that the autonomous system might exhibit. Second, especially for safety-critical systems, the safety requirements must be maintained despite the system's adaptations. We address these problems by shifting some of the assurance tasks to run-time. We propose a method for delegating part of the decision making to agent-based algorithms that use machine learning techniques. We then monitor at run-time that the decisions do not violate the autonomous system's safety-critical requirements, and in doing so we also send feedback to the decision-making process so that it can learn. We plan to implement this approach using reinforcement learning for decision making and predictive monitoring for checking at run-time whether invariant properties of the system are preserved or violated. We also plan to validate it using ROS as the software middleware and both miniaturized and real vehicles as hardware.
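To make the described feedback loop concrete, the following is a minimal sketch under assumed placeholders: a tabular Q-learning agent, a toy vehicle-distance environment, and a hypothetical `check_invariants` monitor that converts run-time invariant violations into negative reward. None of these names, states, or parameters come from the paper, which targets ROS-based vehicles and predictive monitoring rather than this toy setting.

```python
import random

# Sketch of the loop described above: an RL agent proposes actions,
# a run-time monitor checks a safety invariant, and violations are
# fed back to the learner as negative reward.

ACTIONS = ["keep_lane", "change_lane", "brake"]


def check_invariants(state):
    """Run-time monitor: True if the (hypothetical) minimum-distance
    safety invariant holds."""
    return state["distance_to_obstacle"] > 5.0


def environment_step(state, action):
    """Toy dynamics: the vehicle closes in on an obstacle unless it brakes."""
    delta = 2.0 if action == "brake" else -1.5
    return {"distance_to_obstacle": max(0.0, state["distance_to_obstacle"] + delta)}


def discretize(state):
    """Map the continuous state to a coarse bucket usable as a Q-table key."""
    return int(state["distance_to_obstacle"] // 2)


q_table = {}  # (state_bucket, action) -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(200):
    state = {"distance_to_obstacle": 20.0}
    for _ in range(50):
        bucket = discretize(state)
        # Epsilon-greedy action selection
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table.get((bucket, a), 0.0))

        next_state = environment_step(state, action)

        # Monitor feedback: penalize invariant violations, otherwise
        # give a small positive reward for safe operation.
        reward = -100.0 if not check_invariants(next_state) else 1.0

        # Standard Q-learning update
        next_bucket = discretize(next_state)
        best_next = max(q_table.get((next_bucket, a), 0.0) for a in ACTIONS)
        old = q_table.get((bucket, action), 0.0)
        q_table[(bucket, action)] = old + alpha * (reward + gamma * best_next - old)

        state = next_state
        if not check_invariants(state):
            break  # end the episode once the invariant is violated
```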
Machine-learning
Autonomous systems
Invariants enforcement