Automated boundary testing for QUality of Ai/ml modelS (AQUAS)
Research Project, 2021–2024

Software systems are increasingly deployed with Machine Learning (ML) models that autonomously make critical decisions in areas such as medical diagnosis, self-driving cars, and fraud detection. This global trend has caught the attention of researchers in software (SW) quality and reliability, who have revealed robustness problems as well as harmful vulnerabilities in ML models and the systems that use them.
An inherent difficulty with ML models is delimiting their scope, i.e. identifying where and to what degree they can be trusted. Requirements engineering and testing for conventional, programmed SW first describe which inputs are valid and which are invalid, and then describe how to act on the valid ones; the training of ML models focuses primarily on the latter. Whereas the boundaries between valid and invalid inputs are often clear-cut, i.e. sharp, for conventional SW, they are typically unknown, or at best understood only in a fuzzy manner, for ML models. ML researchers have proposed some techniques that can quantify model uncertainty, but these are not general-purpose and constrain the form of the models. Here, we will instead address the problem of scope delimitation for ML models in general by leveraging and extending methods from automated testing of conventional SW. In particular, we will extend our techniques for automated boundary value analysis, exploration, and testing toward general-purpose boundary sharpening of ML models.
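
To illustrate the core idea, below is a minimal sketch of boundary value exploration against an ML model: starting from two inputs the model classifies differently, a bisection search homes in on a pair of points that straddle its decision boundary. The toy dataset, model, and the bisect_boundary helper are hypothetical illustrations of the general idea, not the project's actual techniques or tools.

from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

# Train a toy model whose decision boundary we want to locate.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def bisect_boundary(model, a, b, steps=30):
    """Bisect between two inputs with different predicted classes to
    find a pair of points that straddle the model's decision boundary."""
    ca, cb = model.predict([a])[0], model.predict([b])[0]
    assert ca != cb, "endpoints must lie on opposite sides of the boundary"
    for _ in range(steps):
        mid = (a + b) / 2.0
        if model.predict([mid])[0] == ca:
            a = mid  # mid is on a's side; move a toward the boundary
        else:
            b = mid  # mid is on b's side; move b toward the boundary
    return a, b  # a boundary-straddling pair, arbitrarily close together

# Pick one example from each predicted class and home in on the boundary.
a = X[model.predict(X) == 0][0]
b = X[model.predict(X) == 1][0]
lo, hi = bisect_boundary(model, a, b)
print("boundary candidate near:", (lo + hi) / 2.0)

Repeating such searches from many start pairs yields a set of boundary candidates that can be inspected to judge how sharp, and how trustworthy, the model's boundary is in different input regions.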

Participants

Robert Feldt (contact)

Chalmers, Computer Science and Engineering, Software Engineering

Felix Dobslaw

Chalmers, Computer Science and Engineering, Software Engineering

Funding

Swedish Research Council (VR)

Project ID: 2020-05272
Funds Chalmers' participation during 2021–2024

Latest update

2021-07-22