Towards Human-Like Automated Test Generation: Perspectives from Cognition and Problem Solving
Paper in proceeding, 2021

Automated testing tools typically create test cases that differ from those human testers create. This often makes the tools less effective and the generated tests harder to understand, and it thus results in tools providing less support to human testers. Here, we propose a framework, based on cognitive science and in particular on an analysis of approaches to problem solving, for identifying the cognitive processes of testers. The framework helps map the test design steps and criteria used in human test activities and thus to better understand how effective human testers perform their tasks. Ultimately, our goal is to mimic how humans create test cases and thus to design more human-like automated test generation systems. We posit that such systems can better augment and support testers in ways that are meaningful to them.

Keywords

cognitive psychology, automated test generation, mental model, cognition, test design, behavioural aspects, software testing, psychology, test automation, cognitive science, problem solving

Authors

Eduard Enoiu

Mälardalens högskola

Robert Feldt

Chalmers, Computer Science and Engineering (Chalmers), Software Engineering (Chalmers), Software Engineering for Testing, Requirements, Innovation and Psychology

Proceedings - 2021 IEEE/ACM 13th International Workshop on Cooperative and Human Aspects of Software Engineering, CHASE 2021

pp. 123-124, Article no. 9463255

13th IEEE/ACM International Workshop on Cooperative and Human Aspects of Software Engineering, CHASE 2021
Virtual, Online

Subject Categories

Interaction Technologies

Information Science

Human Computer Interaction

DOI

10.1109/CHASE52884.2021.00026

Latest update: 9/16/2021