A general framework for ensemble distribution distillation
Paper in proceedings, 2020

Ensembles of neural networks have been shown to give better predictive performance and more reliable uncertainty estimates than individual networks. Additionally, ensembles allow the uncertainty to be decomposed into aleatoric (data) and epistemic (model) components, giving a more complete picture of the predictive uncertainty. Ensemble distillation is the process of compressing an ensemble into a single model, often resulting in a leaner model that still outperforms the individual ensemble members. Unfortunately, standard distillation erases the natural uncertainty decomposition of the ensemble. We present a general framework for distilling both regression and classification ensembles in a way that preserves the decomposition. We demonstrate the desired behaviour of our framework and show that its predictive performance is on par with standard distillation.
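For classification ensembles, the aleatoric/epistemic decomposition mentioned in the abstract is commonly computed as an entropy split: the entropy of the ensemble-averaged prediction (total uncertainty) equals the average entropy of the member predictions (aleatoric) plus the mutual information between the prediction and the model choice (epistemic). The Python sketch below illustrates that standard split for a single input; it is not code from the paper, and the helper name decompose_uncertainty is hypothetical.

import numpy as np

def decompose_uncertainty(member_probs):
    """Entropy-based uncertainty decomposition for one input.

    member_probs: (M, K) array of softmax outputs from M ensemble
    members over K classes. Returns (total, aleatoric, epistemic)
    in nats. A minimal sketch of the standard decomposition, not
    the paper's implementation.
    """
    mean_probs = member_probs.mean(axis=0)
    # Total uncertainty: entropy of the ensemble-averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    # Aleatoric (data) uncertainty: mean entropy of the individual members.
    aleatoric = -np.mean(
        np.sum(member_probs * np.log(member_probs + 1e-12), axis=1)
    )
    # Epistemic (model) uncertainty: the remaining gap, i.e. the
    # mutual information between the prediction and the model.
    return total, aleatoric, total - aleatoric

# Example: three members agree on the label but not on the confidence,
# so some epistemic uncertainty remains.
probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.8, 0.2]])
total, ale, epi = decompose_uncertainty(probs)

Standard distillation fits a student only to the averaged prediction (mean_probs above), which is why this split cannot be recovered from it; the paper's framework instead distils the distribution over member predictions so that both components survive.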

Ensemble

Distillation

Uncertainty

Authors

Jakob Lindqvist

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Amanda Olmin

Linköping University

Fredrik Lindsten

Linköping University

Lennart Svensson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

IEEE International Workshop on Machine Learning for Signal Processing, MLSP

2161-0363 (ISSN), 2161-0371 (eISSN)

Vol. 2020-September, Art. no. 9231703

30th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2020
Virtual, Espoo, Finland

Probabilistic models and deep learning - bridging the gap

Wallenberg AI, Autonomous Systems and Software Program.

Subject categories

Bioinformatics (computational biology)

Probability theory and statistics

Computer systems

DOI

10.1109/MLSP49062.2020.9231703

More information

Last updated

2021-08-19