Optimal sampling in unbiased active learning
Paper in proceedings, 2020

A common belief in unbiased active learning is that, in order to capture the most informative instances, the sampling probabilities should be proportional to the uncertainty of the class labels. We argue that this produces suboptimal predictions, and we present sampling schemes for unbiased pool-based active learning that instead minimise the actual prediction error. These schemes achieve better predictive performance than competing methods on a number of benchmark datasets. In contrast, both probabilistic and deterministic uncertainty sampling performed worse than simple random sampling on some of the datasets.
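The contrast the abstract draws can be illustrated with probabilistic uncertainty sampling combined with inverse-probability (Horvitz-Thompson) weighting, which is what makes pool-based active learning unbiased. The sketch below is illustrative only and is not the paper's method: the pool, the "current model", and all variable names are assumptions for demonstration. Sampling probabilities are taken proportional to the class-label uncertainty, and each sampled instance is reweighted by the inverse of its inclusion probability so that the weighted empirical risk is unbiased for the full-pool risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unlabelled pool for a binary classification problem
# (synthetic data; purely illustrative).
n_pool = 1000
X = rng.normal(size=(n_pool, 2))
true_beta = np.array([1.5, -2.0])
y = (rng.random(n_pool) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(int)

# Predicted class probabilities from some crude "current model"
# (an assumed interim fit, not the paper's estimator).
p_hat = 1.0 / (1.0 + np.exp(-X @ np.array([1.0, -1.0])))

# Probabilistic uncertainty sampling: inclusion probability proportional
# to the class-label uncertainty p_hat * (1 - p_hat), scaled so the
# expected sample size is n_draw, and capped at 1.
uncertainty = p_hat * (1.0 - p_hat)
n_draw = 100
pi = np.minimum(n_draw * uncertainty / uncertainty.sum(), 1.0)

# Poisson sampling: include instance i independently with probability pi[i].
selected = rng.random(n_pool) < pi

# Horvitz-Thompson weights: instance i contributes 1/pi[i] to the weighted
# loss, making the weighted loss an unbiased estimate of the pool loss.
w = 1.0 / pi[selected]

# Weighted log-loss on the sample vs. the full-pool log-loss it estimates.
eps = 1e-12
loss_i = -(y * np.log(p_hat + eps) + (1 - y) * np.log(1 - p_hat + eps))
pool_loss = loss_i.mean()
ht_loss = (w * loss_i[selected]).sum() / n_pool
print(f"pool loss {pool_loss:.3f}, weighted-sample estimate {ht_loss:.3f}")
```

The point of the paper's argument is that while such uncertainty-proportional probabilities are a common default, the probabilities can instead be chosen to directly minimise the prediction error of the resulting weighted estimator.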

Optimal design

Weighted loss

Sampling weights

Generalised linear models

Unequal probability sampling

Active learning

Authors

Henrik Imberg

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Johan Jonasson

Chalmers, Mathematical Sciences, Analysis and Probability Theory

Marina Axelson-Fisk

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

Proceedings of Machine Learning Research

2640-3498 (eISSN)

Vol. 108, pp. 559-569

23rd International Conference on Artificial Intelligence and Statistics (AISTATS), Online

Statistical sampling in machine learning

Stiftelsen Wilhelm och Martina Lundgrens Vetenskapsfond (2020-3446), 2020-05-01 -- 2020-12-31.

Stiftelsen Wilhelm och Martina Lundgrens Vetenskapsfond (2019-3132), 2019-05-01 -- 2019-12-31.

Subject categories

Probability theory and statistics

More information

Last updated

2023-07-06