Journeys in vector space: Using deep neural network representations to aid automotive software engineering
Doctoral thesis, 2023
Scope - This work focuses on two automotive software engineering tasks: (1) assessing whether embedded software complies with specified design guidelines, and (2) generating realistic stimuli to test embedded software in virtual rigs.
Contributions - First, as the main tool for the design-compliance task, we train tasnet, a language model of automotive software. We then introduce DECO, a rule-based algorithm that assesses the compliance of query programs with the Controller-Handler automotive software design pattern. Exploiting the semantic regularity of language-model representations, DECO performs this assessment by comparing the geometric alignment of query and benchmark programs in tasnet's representation space.

Second, turning to stimulus generation, we train logan, a deep generative model of in-vehicle behavior. We then introduce MLERP, a rule-based algorithm that takes user-specified test conditions and samples logan to generate realistic test stimuli that adhere to those conditions. Using interpolation in representation space for semantic combination, MLERP generates novel stimuli within the boundaries of the specification.

Third, staying with the testing use case, we improve upon logan and train silgan, which simplifies the specification of test conditions. Noting that repeatedly sampling a generative model is inefficient, we introduce GRADES, a rule-based algorithm that uses a specially constructed objective to search for stimuli. GRADES exploits the fact that the neural networks in silgan are differentiable: given an appropriate objective, a gradient descent-based search in the model's representation space efficiently yields suitable stimuli.

Fourth, we note that our recipe for solving automotive software engineering tasks consistently pairs a self-supervised foundation model with a rule-based algorithm operating in the model's representation space. This paradigm for building predictive models, which we call 'pre-train and calculate', not only extracts nuanced predictions without any supervision, but is also relatively transparent.
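The gradient-based latent search underlying GRADES can be sketched with a toy differentiable generator. Here a fixed random linear map stands in for silgan's trained networks; all names, shapes, and values are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Toy "generator": a fixed linear map from a 3-d latent code to an
# 8-step stimulus, standing in for a trained differentiable network.
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 3))

def generate(z):
    return G @ z

def objective(z, target):
    # Squared error between the generated stimulus and the test condition.
    diff = generate(z) - target
    return float(diff @ diff)

def gradient(z, target):
    # Analytic gradient of the objective with respect to the latent code.
    return 2.0 * G.T @ (generate(z) - target)

# A reachable test condition, then gradient descent from an arbitrary start.
target = generate(np.array([0.5, -1.0, 2.0]))
lr = 0.4 / np.linalg.norm(G, 2) ** 2   # conservative step size
z = np.zeros(3)
for _ in range(1000):
    z -= lr * gradient(z, target)      # search in latent space

print(objective(z, target))            # mismatch driven toward zero
```

Because the generator is differentiable end to end, the search converges directly on a latent code whose decoded stimulus satisfies the condition, rather than sampling and filtering.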
Fifth, since our predictive approach relies heavily on properties of abstract representation spaces, we develop techniques that explain and characterize selected high-dimensional vector spaces. Overall, by taking a data-driven deep learning approach, the techniques we introduce reduce the manual effort of undertaking two crucial engineering tasks. This directly improves the cadence of automotive software engineering without compromising the quality of delivery.
automotive software design and testing
generative adversarial networks
latent space arithmetic
generative AI
explainable AI
large language models
Author
Dhasarathy Parthasarathy
Chalmers, Computer Science and Engineering, Functional Programming
Measuring design compliance using neural language models: An automotive case study
PROMISE 2022 - Proceedings of the 18th International Conference on Predictive Models and Data Analytics in Software Engineering, co-located with ESEC/FSE 2022 (2022), pp. 12–21
Paper in proceedings
SilGAN: Generating driving maneuvers for scenario-based software-in-the-loop testing
Proceedings - 3rd IEEE International Conference on Artificial Intelligence Testing, AITest 2021 (2021), pp. 65–72
Paper in proceedings
Controlled time series generation for automotive software-in-the-loop testing using GANs
Proceedings - 2020 IEEE International Conference on Artificial Intelligence Testing, AITest 2020 (2020), pp. 39–46
Paper in proceedings
Does the dataset meet your expectations? Explaining sample representation in image data
Belgian/Netherlands Artificial Intelligence Conference (2020), pp. 194–208
Paper in proceedings
Subject categories
Other mechanical engineering
Software engineering
Embedded systems
Computer systems
ISBN
978-91-7905-945-3
Doktorsavhandlingar vid Chalmers tekniska högskola (Doctoral theses at Chalmers University of Technology). Ny serie: 5411
Publisher
Chalmers
Analysen, EDIT, Rännvägen 6B, Gothenburg, Sweden. For Zoom, use password 003863
Opponent: Professor Earl T. Barr, University College London, London, United Kingdom