Active Learning with Weak Supervision for Gaussian Processes
Paper in proceedings, 2023

Annotating data for supervised learning can be costly. When the annotation budget is limited, active learning can be used to select and annotate those observations that are likely to give the most gain in model performance. We propose an active learning algorithm that, in addition to selecting which observation to annotate, selects the precision of the annotation that is acquired. Assuming that annotations with low precision are cheaper to obtain, this allows the model to explore a larger part of the input space with the same annotation budget. We build our acquisition function on the previously proposed BALD objective for Gaussian Processes, and empirically demonstrate the gains of being able to adjust the annotation precision in the active learning loop.
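The acquisition idea in the abstract can be illustrated with a minimal sketch, not the paper's actual implementation: a GP regression model with heteroscedastic annotation noise, where a BALD-style mutual information score per unit annotation cost decides both which observation to query and how precise the acquired annotation should be. All function names, the kernel choice, and the precision/cost menu below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)

def latent_posterior_var(X_train, noise_vars, X_cand):
    """GP posterior variance of the latent f at candidate inputs, with
    heteroscedastic annotation noise (one noise variance per training point)."""
    K = rbf_kernel(X_train, X_train) + np.diag(noise_vars)
    Ks = rbf_kernel(X_train, X_cand)
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(X_train)))
    v = np.linalg.solve(L, Ks)
    return rbf_kernel(X_cand, X_cand).diagonal() - (v ** 2).sum(axis=0)

def select_query(X_train, noise_vars, X_cand, noise_menu, cost_menu):
    """Pick (candidate index, precision-level index) maximising mutual
    information per unit cost. For Gaussian noise with variance sigma_n^2,
    I(y; f(x)) = 0.5 * log(1 + var_f(x) / sigma_n^2)."""
    var_f = latent_posterior_var(X_train, noise_vars, X_cand)
    best, best_score = None, -np.inf
    for i, vf in enumerate(var_f):
        for j, (sn2, cost) in enumerate(zip(noise_menu, cost_menu)):
            score = 0.5 * np.log(1.0 + vf / sn2) / cost
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

# Tiny demo: three annotated points, candidates on a grid, and a menu of
# two annotation options (precise-but-expensive vs cheap-but-noisy).
X_train = np.array([-1.0, 0.0, 1.0])
noise_vars = np.array([0.1, 0.1, 0.1])
X_cand = np.linspace(-3.0, 3.0, 13)
(idx, level), score = select_query(
    X_train, noise_vars, X_cand,
    noise_menu=[0.01, 0.5], cost_menu=[1.0, 0.2])
```

Under this scoring, a cheap low-precision annotation far from the training data can beat an expensive high-precision one, which captures the abstract's point that lowering precision lets the same budget cover more of the input space.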

Weak supervision

Machine learning

Active learning

Authors

Amanda Olmin

Linköping University

Jakob Lindqvist

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Lennart Svensson

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Fredrik Lindsten

Linköping University

Communications in Computer and Information Science

1865-0929 (ISSN) 1865-0937 (eISSN)

Vol. 1792, pp. 195-204
978-981-99-1641-2 (ISBN)

29th International Conference on Neural Information Processing, ICONIP 2022
Virtual, Online

Subject Categories

Probability Theory and Statistics

Computer Science

DOI

10.1007/978-981-99-1642-9_17

More information

Latest update

7/14/2023