Bayesian leave-one-out cross-validation for large data
Paper in proceedings, 2019

Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO-CV) is a general approach for assessing the generalizability of a model, but unfortunately, LOO-CV does not scale well to large datasets. We propose a combination of approximate inference techniques and probability-proportional-to-size sampling (PPS) for fast LOO-CV model evaluation on large data. We provide both theoretical and empirical results showing that the approach has good properties for large data.
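To make the subsampling idea concrete, the following is a minimal illustrative sketch in Python, not the paper's reference implementation: it estimates the total LOO-CV criterion over n observations by evaluating the expensive per-observation quantity only on a probability-proportional-to-size subsample, reweighted Hansen-Hurwitz style. The helper names `loo_i_exact_fn` and `proxy_lpd` are hypothetical, and the proxy values are assumed to come from a cheap approximate posterior.

```python
import numpy as np

def pps_loo_estimate(loo_i_exact_fn, proxy_lpd, m, rng=None):
    """Illustrative PPS (Hansen-Hurwitz) estimate of the total LOO criterion.

    loo_i_exact_fn : callable(i) -> elpd_i, the expensive per-observation
                     LOO log predictive density (hypothetical helper).
    proxy_lpd      : array of cheap per-observation proxies, assumed to come
                     from an approximate posterior; used only to set the
                     sampling probabilities.
    m              : subsample size (m << n).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(proxy_lpd)

    # Sampling probabilities proportional to the magnitude of the proxy.
    size = np.abs(proxy_lpd) + 1e-12
    p = size / size.sum()

    # Draw a with-replacement PPS subsample of indices.
    idx = rng.choice(n, size=m, replace=True, p=p)

    # Evaluate the expensive LOO quantity only on the subsample and reweight
    # each draw by 1 / (m * p_i) so the sum estimates the full-data total.
    contrib = np.array([loo_i_exact_fn(i) / (m * p[i]) for i in idx])
    return contrib.sum()
```

In practice one would typically combine such a subsample estimate with the full-data sum of the cheap proxies (a difference-estimator style correction); the plain reweighted form is kept here for brevity.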

Authors

Måns Magnusson

Aalto University

Michael Riis Andersen

Technical University of Denmark (DTU)

Aalto University

Johan Jonasson

Chalmers, Mathematical Sciences, Analysis and Probability Theory

University of Gothenburg

Aki Vehtari

Aalto University

36th International Conference on Machine Learning, ICML 2019
Long Beach, USA

Vol. 2019-June, pp. 7505-7525

Subject Categories

Applied Mechanics

Bioinformatics (Computational Biology)

Probability Theory and Statistics

More information

Latest update

2/12/2020