INNER: information theory of deep neural networks
Research project, 2019–2021

Over the last decade, deep-learning algorithms have dramatically improved the state of the art in many machine-learning problems, including computer vision, speech recognition, natural language processing, and audio recognition. Despite this success, there is no satisfactory mathematical theory that explains how such algorithms work. Indeed, a common critique is that deep-learning algorithms are often used as black boxes, which is unsatisfactory in applications where performance guarantees are critical (e.g., traffic-safety applications).

The purpose of this project is to increase our theoretical understanding of deep neural networks. This will be done by relying on tools of information theory and focusing on specific tasks that are relevant to computer vision.

Participants

Giuseppe Durisi (contact)

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

Fredrik Kahl

Digital Image Systems and Image Analysis

Collaboration partners

Chalmers AI Research Centre (CHAIR)

Gothenburg, Sweden

Funding

Chalmers AI Research Centre (CHAIR)

Funds Chalmers' participation during 2019–2021

Related Areas of Advance and infrastructure

Information and Communication Technology

Areas of Advance


Last updated

2019-09-23