Bayesian leave-one-out cross-validation for large data
Paper in proceedings, 2019

Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO-CV) is a general approach for assessing the generalizability of a model, but unfortunately, LOO-CV does not scale well to large datasets. We propose a combination of approximate inference techniques and probability-proportional-to-size (PPS) sampling for fast LOO-CV model evaluation on large datasets. We provide both theoretical and empirical results showing good properties for large data.
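The general idea can be illustrated with a minimal sketch, not the authors' implementation: cheap per-observation LOO approximations (e.g. from an approximate posterior) define the "sizes" for PPS subsampling, a small subsample of observations is then evaluated more accurately, and a Hansen-Hurwitz-type estimator combines the subsample into an estimate of the total elpd_loo. The names `approx_loo_i` and `exact_loo_fn` below are hypothetical placeholders, assumed to be supplied by the user.

```python
import numpy as np


def pps_loo_estimate(approx_loo_i, exact_loo_fn, m, rng=None):
    """Sketch: estimate the total elpd_loo by PPS subsampling.

    approx_loo_i : array of cheap per-observation LOO approximations,
                   used only to form the sampling probabilities.
    exact_loo_fn : callable i -> a more accurate elpd_loo contribution
                   for observation i.
    m            : number of observations to subsample.
    """
    rng = np.random.default_rng() if rng is None else rng
    approx_loo_i = np.asarray(approx_loo_i, dtype=float)
    n = len(approx_loo_i)

    # Sampling probabilities proportional to the magnitude of the cheap
    # approximations (the "size" in probability-proportional-to-size sampling).
    size = np.abs(approx_loo_i) + 1e-12
    p = size / size.sum()

    # Draw m observation indices with replacement, proportional to p.
    idx = rng.choice(n, size=m, replace=True, p=p)

    # Hansen-Hurwitz estimator of the total elpd_loo and its variance.
    z = np.array([exact_loo_fn(i) for i in idx]) / p[idx]
    elpd_hat = z.mean()            # unbiased estimate of sum_i elpd_loo_i
    var_hat = z.var(ddof=1) / m    # estimated sampling variance of elpd_hat
    return elpd_hat, var_hat
```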

Authors

Måns Magnusson

Aalto University

Michael Riis Andersen

Technical University of Denmark (DTU)

Aalto University

Johan Jonasson

Chalmers, Mathematical Sciences, Analysis and Probability Theory

University of Gothenburg

Aki Vehtari

Aalto University

36th International Conference on Machine Learning, ICML 2019

Vol. 2019-June, pp. 7505-7525

Long Beach, USA

Subject categories

Applied Mechanics

Bioinformatics (Computational Biology)

Probability Theory and Statistics

More information

Last updated

2020-02-12