INNER: information theory of deep neural networks
Research Project, 2019–2021

Over the last decade, deep-learning algorithms have dramatically improved the state of the art in many machine-learning problems, including computer vision, speech recognition, natural language processing, and audio recognition. Despite their success, however, there is no satisfactory mathematical theory that explains the functioning of such algorithms. Indeed, a common critique is that deep-learning algorithms are often used as black boxes, which is unsatisfactory in all applications for which performance guarantees are critical (e.g., traffic-safety applications).

The purpose of this project is to increase our theoretical understanding of deep neural networks. This will be done by relying on tools of information theory and focusing on specific tasks that are relevant to computer vision.
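As an illustration of the kind of information-theoretic tools referred to above, the sketch below estimates the mutual information I(T; Y) between a discretized hidden-layer representation T and the class labels Y using a simple plug-in (binning) estimator, a quantity commonly tracked in information-theoretic analyses of deep networks. This is a minimal illustrative sketch, not the project's method; the synthetic data, function names, and binning choices are assumptions made for the example.

```python
import numpy as np


def discretize(activations, num_bins=10):
    """Bin continuous activations and collapse each activation vector into one discrete symbol."""
    edges = np.linspace(activations.min(), activations.max(), num_bins + 1)
    binned = np.digitize(activations, edges[1:-1])  # bin index per unit, shape (n_samples, n_units)
    _, symbols = np.unique(binned, axis=0, return_inverse=True)
    return symbols.ravel()


def mutual_information_bits(t_symbols, labels):
    """Plug-in estimate of I(T; Y) in bits from paired discrete samples."""
    n = len(labels)
    p_t = np.bincount(t_symbols) / n
    p_y = np.bincount(labels) / n
    joint_counts = {}
    for t, y in zip(t_symbols, labels):
        joint_counts[(t, y)] = joint_counts.get((t, y), 0) + 1
    mi = 0.0
    for (t, y), count in joint_counts.items():
        p_ty = count / n
        mi += p_ty * np.log2(p_ty / (p_t[t] * p_y[y]))
    return mi


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    # Synthetic stand-in for a trained network: binary labels and class-dependent
    # hidden-layer activations (in practice T would be read off the actual model).
    labels = rng.integers(0, 2, size=2000)
    activations = rng.normal(loc=labels[:, None].astype(float), scale=1.0, size=(2000, 4))
    symbols = discretize(activations)
    print(f"Estimated I(T; Y) ~ {mutual_information_bits(symbols, labels):.3f} bits")
```

The plug-in estimator is chosen here only because it is the simplest to state; in practice, the binning resolution strongly affects the estimate, which is one of the methodological questions such analyses must address.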

Participants

Giuseppe Durisi (contact)

Chalmers, Electrical Engineering, Communication, Antennas and Optical Networks

Fredrik Kahl

Chalmers, Electrical Engineering, Imaging and Image Analysis

Collaborations

Chalmers AI Research Centre (CHAIR), Gothenburg, Sweden

Funding

Chalmers AI Research Centre (CHAIR), funding the Chalmers participation during 2019–2021

Related Areas of Advance and Infrastructure

Information and Communication Technology (Areas of Advance)


Latest update: September 23, 2019