Investigating Execution Trace Embedding for Test Case Prioritization
Paper in proceedings, 2023
Most automated software testing tasks, such as test case generation, selection, and prioritization, can benefit from an abstract representation of test cases. Although test case representation is rarely discussed explicitly in the software testing literature, test cases are traditionally represented by what they cover in the source code (e.g., which statements or branches), which is itself an abstract representation. In this paper, we hypothesize that execution traces of test cases, as representations of their behaviour, can encode test cases for automated testing tasks better than code-based coverage information. To validate this hypothesis, we propose an embedding approach, Test2Vec, based on a state-of-the-art neural program embedding (CodeBERT), in which the encoder maps test execution traces, i.e., sequences of method calls with their inputs and return values, to fixed-length numerical vectors. We evaluate this representation on the automated test case prioritization (TP) task. Our TP method is a classifier trained on the vectors of passing and failing historical test cases in regression testing. We compare our embedding with multiple baselines and related work, including CodeBERT itself. The empirical study is based on 250 real faults and 703,353 seeded faults (mutants) over 250 revisions of 10 open-source Java projects from Defects4J, with a total of over 1,407,206 execution traces. The results show that our approach significantly outperforms all alternatives with respect to the studied metrics. We also show that both the inputs and the outputs of a method are important elements of the execution-based embedding.
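To make the pipeline described above concrete, the sketch below shows one plausible way to embed textual execution traces with a pretrained CodeBERT checkpoint and then rank tests with a failure classifier. It is a minimal illustration, not the paper's implementation: the Hugging Face `microsoft/codebert-base` model, the simplified `method(inputs) -> return` trace format, and helpers such as `embed_trace` are assumptions introduced here.

```python
# Minimal sketch (not the paper's pipeline): embed execution traces with a
# pretrained CodeBERT checkpoint, train a classifier on historical pass/fail
# labels, and prioritize new tests by predicted failure probability.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed_trace(trace: str) -> np.ndarray:
    """Map one execution trace (method calls with inputs/returns) to a fixed-length vector."""
    inputs = tokenizer(trace, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Use the [CLS] token embedding as the fixed-length representation.
    return outputs.last_hidden_state[:, 0, :].squeeze(0).numpy()

# Illustrative historical traces labelled by verdict (1 = failing, 0 = passing).
history = [
    ("push(5) -> void; pop() -> 5; size() -> 0", 0),
    ("push(5) -> void; pop() -> 5; pop() -> NoSuchElementException", 1),
    ("peek() -> NoSuchElementException", 1),
    ("push(1) -> void; peek() -> 1; size() -> 1", 0),
]
X = np.stack([embed_trace(trace) for trace, _ in history])
y = np.array([label for _, label in history])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Rank new tests: higher predicted failure probability runs first.
new_tests = {
    "t1": "push(2) -> void; pop() -> 2; pop() -> NoSuchElementException",
    "t2": "push(3) -> void; size() -> 1",
}
scores = {name: clf.predict_proba(embed_trace(trace).reshape(1, -1))[0, 1]
          for name, trace in new_tests.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: predicted failure probability {score:.2f}")
```

In this sketch, tests are executed in descending order of predicted failure probability, which mirrors the prioritization objective of surfacing likely-failing tests first; any vector-based classifier could be substituted for the logistic regression used here.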
Keywords: Embedding, Transformers, Software Testing, Test Case Prioritization