Test Maintenance for Machine Learning Systems: A Case Study in the Automotive Industry
Paper in proceedings, 2023

Machine Learning (ML) systems have seen widespread use for automated decision making. Testing is essential to ensure the quality of these systems, especially safety-critical autonomous systems in the automotive domain. ML systems introduce new challenges with the potential to affect test maintenance, the process of updating test cases to match the evolving system. We conducted an exploratory case study in the automotive domain to identify factors that affect test maintenance for ML systems, as well as to make recommendations for improving the maintenance process. Based on interviews and artifact analysis, we identified 14 factors affecting maintenance, including five especially relevant for ML systems—with the most important relating to non-determinism and large input spaces. We also proposed ten recommendations for improving test maintenance, including four targeting ML systems—in particular, emphasizing the use of test oracles tolerant to acceptable non-determinism. The study’s findings expand our knowledge of test maintenance for an emerging class of systems, benefiting the practitioners testing these systems.

Software Testing

Automotive Software

Test Evolution

Test Maintenance

Machine Learning

Authors

Lukas Berglund

University of Gothenburg

Tim Grube

University of Gothenburg

Gregory Gay

University of Gothenburg

Francisco Gomes

University of Gothenburg

Dimitrios Platis

Zenseact AB

2023 IEEE Conference on Software Testing, Verification and Validation (ICST)

2159-4848 (ISSN)

410-421

978-1-6654-5666-1 (ISBN)

Dublin, Ireland

Subject Categories

Production Engineering, Human Work Science and Ergonomics

Software Engineering

Computer Systems

DOI

10.1109/ICST57152.2023.00045

Latest update

7/27/2023