Similarities of Testing Programmed and Learnt Software
Paper in proceedings, 2023

This study examines to what extent the testing of traditional software components and machine learning (ML) models fundamentally differs. While some researchers argue that ML software requires new concepts and perspectives for testing, our analysis highlights that, at a fundamental level, the specification and testing of a software component do not depend on the development process used or on implementation details. Although the software engineering/computer science (SE/CS) and data science/ML (DS/ML) communities have developed different expectations, unique perspectives, and varying testing methods, they share clear commonalities that can be leveraged. We argue that both areas can learn from each other and that a non-dual perspective could provide novel insights not only for testing ML but also for testing traditional software. While acknowledging their differences has merits, we see great potential in unified approaches, and we therefore call upon researchers from both communities to collaborate more closely on testing methods and tools that can address both traditional and ML software components.

Keywords

Software Testing, Machine Learning, Software Boundaries, Non-Duality, Software Engineering

Authors

Felix Dobslaw

Mid Sweden University

Robert Feldt

Chalmers University of Technology, Computer Science and Engineering, Software Engineering

Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2023)

pp. 78-81
ISBN: 9798350333350

16th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2023)
Dublin, Ireland

Subject Categories

Software Engineering, Computer Science, Computer Systems

DOI

10.1109/ICSTW58534.2023.00025

Latest update

7/11/2023