Verification of Machine Learning Algorithms (Vermillion)
Research project, 2019–2021

It has long been known that our ability to develop and deploy machine learning (ML) algorithms outpaces our ability to give clear guarantees about their behaviour. In 2013, Szegedy et al. revealed that neural networks are susceptible to adversarial examples [SZS13]. Later, Goodfellow (an author of that paper) coauthored a series of blog posts, beginning with “Breaking things is easy” [PG16], pointing out that we know much more about attacking machine learning models than about defending them. In “The challenge of verification and testing of machine learning” [PG17], the authors conclude that “The verification of machine learning models is still in its infancy, because methods make assumptions that prevent them from providing absolute guarantees of the absence of adversarial examples.”
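
Adversarial examples are strikingly cheap to construct once a model's gradients are available. As a concrete illustration, the sketch below applies the fast gradient sign method (FGSM), a simple attack popularized by Goodfellow and coauthors, to a toy logistic-regression model in plain NumPy. The model, its weights, and all names here are illustrative stand-ins, not artefacts of the cited work:

    import numpy as np

    # Toy "network": logistic regression, y = sigmoid(w . x + b).
    # The weights are random stand-ins; in practice they come from training.
    rng = np.random.default_rng(0)
    w = rng.normal(size=784)
    b = 0.0

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict(x):
        return sigmoid(w @ x + b)

    def fgsm(x, y_true, eps):
        # Fast gradient sign method: step by eps in the sign of the
        # input gradient of the loss.  For the logistic loss, that
        # gradient is (p - y) * w, so no autodiff machinery is needed.
        p = predict(x)
        grad_x = (p - y_true) * w
        return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

    x = rng.uniform(size=784)             # stand-in for a flattened image
    x_adv = fgsm(x, y_true=1.0, eps=0.1)  # small, visually minor perturbation
    print(predict(x), predict(x_adv))     # confidence can change sharply

Even a perturbation this small can move the model's prediction substantially, which is exactly the fragility that [PG16] argues defenders do not yet know how to rule out.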


This situation is unacceptable, as ML algorithms will increasingly be deployed in safety-critical systems, for example in autonomous vehicles. In addition, machine learning will increasingly be used to develop the software that forms part of the infrastructure on which we rely (for communication, shopping, banking, etc.).


This project aims to develop new methods for testing and verifying machine learning algorithms, and to kick-start the application of our group’s expertise in testing and formal verification to AI/ML.


There is promising work on which we can build. For example, Bastani et al. showed how to formulate, efficiently estimate, and improve the robustness of neural networks, using an encoding of the robustness property as a constraint system [BIL16]. (One of the coauthors of that paper, Dimitrios Vytiniotis, now at DeepMind, has agreed to act as an external advisor to the project.) Katz et al. have taken promising first steps towards formally verifying properties of neural networks, at scale, without having to make simplifying assumptions [KBD17]; the sketch below illustrates the constraint-based idea that both lines of work share. However, much remains to be done to enable both testing and verification of the safety of machine learning algorithms. A recent survey by RISE Viktoria of the state of the art in testing of self-driving vehicles confirms that this project is timely and targets the early stages of a vital research field.
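
Katz et al.’s tool is a dedicated decision procedure, but the underlying idea, encoding the network together with the negation of the desired property as a constraint system and asking a solver for a counterexample, can be sketched with an off-the-shelf SMT solver. The following sketch uses Z3’s Python bindings on a made-up two-neuron ReLU network; the weights, the input point, and the property y > 0 are all illustrative assumptions, not taken from the cited papers:

    from z3 import And, If, Real, Solver, sat

    # Made-up network: 2 inputs -> 2 ReLU units -> 1 linear output.
    W1 = [[1.0, -1.0], [0.5, 1.0]]
    b1 = [0.0, -0.5]
    W2 = [1.0, -1.0]
    b2 = 0.0

    def relu(z):
        return If(z >= 0, z, 0)

    x = [Real('x0'), Real('x1')]
    h = [relu(sum(W1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(2)]
    y = sum(W2[i] * h[i] for i in range(2)) + b2

    # Local robustness query: within an eps-box around x*, can the
    # output sign flip?  The property "y > 0 everywhere in the box"
    # holds iff its negation below is unsatisfiable.
    x_star, eps = [1.0, 0.0], 0.1
    s = Solver()
    for i in range(2):
        s.add(And(x[i] >= x_star[i] - eps, x[i] <= x_star[i] + eps))
    s.add(y <= 0)  # negated property

    if s.check() == sat:
        print('counterexample:', s.model())
    else:
        print('robust: y > 0 throughout the eps-box')

Each ReLU introduces a disjunction (the If above), so naive SMT encodings blow up exponentially in the number of neurons; handling ReLUs lazily to avoid this explosion is precisely the contribution of [KBD17].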

The long-term goal is to develop methods for designing neural networks and machine-learning-based software with guaranteed properties.

Participants

Mary Sheeran (contact)

Chalmers, Computer Science and Engineering, Functional Programming

Yinan Yu

Chalmers, Computer Science and Engineering, Functional Programming

Funding

Chalmers AI Research Centre (CHAIR)

Funds Chalmers’ participation during 2019–2021

Last updated

2020-02-04