A unified active learning framework for annotating graph data for regression task
Journal article, 2024

In many domains, effectively applying machine learning models requires large amounts of annotated and labelled data, which might not be available in advance. Acquiring annotations often requires significant time, effort, and computational resources, making it challenging. Active learning strategies are pivotal in addressing these challenges, particularly for diverse data types such as graphs. Although active learning has been extensively explored for node-level classification, its application to graph-level learning, especially for regression tasks, remains under-explored. We develop a unified active learning framework specializing in graph annotation and graph-level learning for regression tasks on both standard and expanded graphs, the latter being more detailed representations. We begin with graph collection and construction. Then, we construct various graph embeddings (unsupervised and supervised) into a latent space. Given such an embedding, the framework becomes task-agnostic, and active learning can be performed using any regression method together with any query strategy suited for regression. Within this framework, we investigate the impact of using different levels of information for active and passive learning, e.g., partially available labels and unlabelled test data. While our framework is domain-agnostic, we validate it on a real-world application of software performance prediction, where the execution time of source code is predicted; the graphs are thus constructed as intermediate representations of the source code. We support our methodology with a real-world dataset to underscore the applicability of our approach. Our real-world experiments reveal that satisfactory performance can be achieved by querying labels for only a small subset of all the data. A key finding is that Graph2Vec (an unsupervised embedding approach for graph data) performs the best, but only when all train and test features are used. However, Graph Neural Networks (GNNs) are the most flexible embedding technique across different levels of information, with and without label access. In addition, we find that the benefit of active learning increases for larger datasets (more graphs) and for more complex graphs, which is arguably when active learning is most important.
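The abstract describes an embed-then-query pipeline: graphs are first mapped into a latent space (e.g., via Graph2Vec or a GNN), after which any regression model and query strategy can drive the active learning loop. The sketch below illustrates such a loop under stated assumptions; the random forest regressor, the variance-based uncertainty query, and the `oracle` callback are illustrative choices, not the specific methods used in the paper, and the graph embeddings are assumed to be precomputed as a NumPy feature matrix `X` (one row per graph).

```python
# Minimal sketch of an embed-then-query active learning loop (illustrative only).
# Assumes graph embeddings are precomputed as a feature matrix X and that
# oracle(i) returns the regression label (e.g., execution time) of graph i.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def query_most_uncertain(model, X, pool_idx, batch_size=5):
    """Pick the pooled graphs whose predictions vary most across the forest's
    trees (a simple uncertainty-based query strategy for regression)."""
    per_tree = np.stack([tree.predict(X[pool_idx]) for tree in model.estimators_])
    uncertainty = per_tree.std(axis=0)
    return pool_idx[np.argsort(uncertainty)[-batch_size:]]

def active_learning(X, oracle, n_init=10, n_rounds=20, batch_size=5):
    n = len(X)
    labelled = list(rng.choice(n, size=n_init, replace=False))
    y = {i: oracle(i) for i in labelled}          # query labels for the initial seed set
    model = None
    for _ in range(n_rounds):
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[labelled], [y[i] for i in labelled])
        pool_idx = np.array([i for i in range(n) if i not in y])
        if len(pool_idx) == 0:
            break
        for i in query_most_uncertain(model, X, pool_idx, batch_size):
            y[i] = oracle(i)                      # annotate only the queried graphs
            labelled.append(i)
    return model, labelled
```

Because the loop operates purely on the latent-space features, swapping in a different embedding, regressor, or query strategy does not change its structure, which is the task-agnostic property the abstract highlights.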

Graph neural networks (GNNs)

Active learning

Graph-level regression

Authors

Hazem Samoaa

Software Engineering 2

Linus Aronsson

Chalmers, Computer Science and Engineering (Chalmers), Data Science and AI

Antonio Longa

University of Trento

Philipp Leitner

Software Engineering 2

Morteza Haghir Chehreghani

Chalmers, Computer Science and Engineering (Chalmers), Data Science and AI

Engineering Applications of Artificial Intelligence

0952-1976 (ISSN)

Vol. 138, article 109383

Subject Categories

Computer Science

DOI

10.1016/j.engappai.2024.109383
