Automated testing of quality boundaries for AI/ML models (AQUAS)
Research project, 2021–2024

Software systems are increasingly deployed with Machine Learning (ML) models that autonomously make critical decisions in areas such as medical diagnosis, self-driving cars, and fraud detection, to name a few. This global trend has caught the attention of researchers within software (SW) quality and reliability, who have revealed robustness problems as well as harmful vulnerabilities in ML models and the systems that use them.
An inherent difficulty with ML models is delimiting their scope, i.e. identifying where and to what degree they can be trusted. While requirements engineering as well as testing efforts for conventional, programmed SW focus on describing which inputs are valid and invalid and then go on to describe how to act on valid inputs, the training of ML models focuses primarily on the latter. While the boundaries between valid and invalid inputs are often clear-cut for conventional SW, i.e. sharp, they are typically unknown, or at best understood only in a fuzzy manner, for ML models. While ML researchers have proposed some techniques that can quantify model uncertainty, these techniques are not general-purpose and constrain the form of the models. Here, we will instead address the problem of scope delimitation of ML models in general by leveraging and extending methods from automated testing of conventional SW. In particular, we will extend our techniques for automated boundary value analysis, exploration, and testing for the general-purpose boundary sharpening of ML models.
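The core idea of boundary exploration can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the project's actual method: a toy one-dimensional stand-in `model` replaces a trained ML classifier, and bisection between one input on each side of the decision boundary narrows down where the model's output flips.

```python
def model(x: float) -> int:
    """Hypothetical stand-in for a trained binary ML classifier.
    Accepts (returns 1 for) inputs below 0.7, rejects otherwise."""
    return 1 if x < 0.7 else 0

def find_boundary(lo: float, hi: float, tol: float = 1e-6) -> float:
    """Bisect between two inputs on which the model disagrees,
    homing in on the decision boundary to within `tol`."""
    assert model(lo) != model(hi), "endpoints must be classified differently"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if model(mid) == model(lo):
            lo = mid  # boundary lies above mid
        else:
            hi = mid  # boundary lies below mid
    return (lo + hi) / 2

print(round(find_boundary(0.0, 1.0), 3))  # close to 0.7
```

For a real ML model the boundary is a surface in a high-dimensional input space rather than a single point, and repeating such probes along many directions is one way to map out how sharp or fuzzy the boundary region is.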

Participants

Robert Feldt (contact)

Chalmers, Computer Science and Engineering, Software Engineering

Felix Dobslaw

Chalmers, Computer Science and Engineering, Software Engineering

Funding

Swedish Research Council (Vetenskapsrådet, VR)

Project ID: 2020-05272
Funds Chalmers' participation during 2021–2024

Publications

More information

Last updated

2021-07-22