Towards ML-Integration and Training Patterns for AI-Enabled Systems
Paper in proceedings, 2025
Machine learning (ML) has improved dramatically over the last decade. ML models have become a fundamental part of intelligent software systems, many of which are safety-critical. Since ML models have complex lifecycles, they require dedicated methods and tools, such as pipeline automation or experiment management. Unfortunately, the current state of the art is model-centric, disregarding the challenges of engineering systems with multiple ML models that must interact to realize complex functionality. Consider, for instance, robotics or autonomous driving systems, whose perception architectures can easily incorporate more than 30 ML models. Developing such multi-ML-model systems requires architectures that can integrate and chain ML components. Maintaining and evolving them requires tackling the combinatorial explosion that arises when re-training ML components, often exploring different (hyper-)parameters, features, training algorithms, or other ML artifacts. Addressing these problems requires systems-centric methods and tools. In this work, we discuss characteristics of multi-ML-model systems and the challenges of engineering them. Inspired by such systems in the autonomous driving domain, we focus on experiment-management tooling, which supports tracking and reasoning about the training process of ML models. Our analysis reveals the concepts underlying these tools, but also their limitations when engineering multi-ML-model systems, especially due to their model-centric focus. We discuss possible patterns for ML integration and training that facilitate the effective and efficient development, maintenance, and evolution of multi-ML-model systems. Furthermore, we describe real-world multi-ML-model systems, providing early results from identifying and analyzing open-source systems on GitHub.
Keywords: Evolution, ML Asset Management, Maintenance, ML-Enabled Systems, ML Training