Similarities of Testing Programmed and Learnt Software
Paper in proceedings, 2023

This study examines to what extent the testing of traditional software components and machine learning (ML) models fundamentally differs. While some researchers argue that ML software requires new concepts and perspectives for testing, our analysis highlights that, at a fundamental level, the specification and testing of a software component do not depend on the development process used or on implementation details. Although the software engineering/computer science (SE/CS) and data science/ML (DS/ML) communities have developed different expectations, unique perspectives, and varying testing methods, they share clear commonalities that can be leveraged. We argue that both areas can learn from each other, and that a non-dual perspective could provide novel insights not only for testing ML but also for testing traditional software. We therefore call upon researchers from both communities to collaborate more closely. While acknowledging the differences between the two types of software has merit, we see great potential in developing unified testing methods and tools that can address both.

Software Testing

Machine Learning

Software Boundaries

Non-Duality

Software Engineering

Authors

Felix Dobslaw

Mittuniversitetet

Robert Feldt

Chalmers, Computer Science and Engineering, Software Engineering

Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2023

78-81
9798350333350 (ISBN)

16th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2023
Dublin, Ireland

Subject categories

Software Engineering

Computer Science

Computer Systems

DOI

10.1109/ICSTW58534.2023.00025

More information

Last updated

2023-07-11