Active Learning with Weak Supervision for Gaussian Processes
Paper in proceedings, 2023

Annotating data for supervised learning can be costly. When the annotation budget is limited, active learning can be used to select and annotate those observations that are likely to give the most gain in model performance. We propose an active learning algorithm that, in addition to selecting which observation to annotate, selects the precision of the annotation that is acquired. Assuming that annotations with low precision are cheaper to obtain, this allows the model to explore a larger part of the input space with the same annotation budget. We build our acquisition function on the previously proposed BALD objective for Gaussian Processes, and empirically demonstrate the gains of being able to adjust the annotation precision in the active learning loop.
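The abstract's idea can be sketched in code. For a Gaussian Process regression model with Gaussian annotation noise, the BALD objective (the mutual information between an observation and the latent function) has the closed form 0.5 * log(1 + var_f / noise_var). Below is a minimal, hypothetical sketch of how one might score every (input, precision) pair by information gain per unit cost; the function names, the cost model, and the per-cost normalization are illustrative assumptions, not the paper's exact acquisition rule.

```python
import numpy as np

def bald_regression(var_f, noise_var):
    # Mutual information I(y; f) for a Gaussian likelihood:
    # 0.5 * log(1 + var_f / noise_var). Higher posterior variance
    # or higher annotation precision (lower noise) gives more information.
    return 0.5 * np.log1p(var_f / noise_var)

def select_query(var_f, precisions, costs):
    # var_f: GP posterior predictive variances of candidate inputs, shape (n,)
    # precisions: candidate annotation precisions (inverse noise variances), shape (m,)
    # costs: assumed annotation cost per precision level, shape (m,)
    # Score every (input, precision) pair by information gain per unit cost
    # (a hypothetical cost-aware variant; the paper may combine these differently).
    noise_vars = 1.0 / precisions
    scores = bald_regression(var_f[:, None], noise_vars[None, :]) / costs[None, :]
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return i, j  # index of input to annotate and precision level to request
```

With cheap low-precision annotations (low cost, high noise), the per-cost score can favor many coarse labels over a few precise ones, which is what lets the learner cover more of the input space under a fixed budget.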

Weak supervision

Machine learning

Active learning

Authors

Amanda Olmin

Linköpings universitet

Jakob Lindqvist

Chalmers, Electrical Engineering, Signal Processing and Medical Engineering

Lennart Svensson

Chalmers, Electrical Engineering, Signal Processing and Medical Engineering

F. Lindsten

Linköpings universitet

Communications in Computer and Information Science

1865-0929 (ISSN), 1865-0937 (eISSN)

Vol. 1792 CCIS, pp. 195-204
9789819916412 (ISBN)

29th International Conference on Neural Information Processing, ICONIP 2022
Virtual, Online

Subject categories

Probability theory and statistics

Computer science

DOI

10.1007/978-981-99-1642-9_17

More information

Last updated

2023-07-14