Unveiling Assumptions: Exploring the Decisions of AI Chatbots and Human Testers
Paper in proceedings, 2024

The integration of Large Language Models (LLMs) and chatbots introduces new challenges and opportunities for decision-making in software testing. Decision-making relies on a variety of information, including code, requirements specifications, and other software artifacts that are often unclear or exist solely in the developer's mind. To fill in the gaps left by unclear information, we often rely on assumptions, intuition, or previous experiences to make decisions. This paper explores the potential of LLM-based chatbots such as Bard, Copilot, and ChatGPT to support software testers in test decisions such as prioritizing test cases effectively. We investigate whether LLM-based chatbots and human testers share similar "assumptions" or intuition in prohibitive testing scenarios where exhaustive execution of test cases is often impractical. Preliminary results from a survey of 127 testers indicate a preference for diverse test scenarios, with a significant majority (96%) favoring dissimilar test sets. Interestingly, two out of four chatbots mirrored this preference, aligning with human intuition, while the others opted for similar test scenarios, chosen by only 3.9% of testers. Our initial insights suggest a promising avenue for enhancing the collaborative dynamics between testers and chatbots.

Test Prioritization

Chatbots

Software Testing

Author

Francisco Gomes

Software Engineering 1

AIware 2024 - Proceedings of the 1st ACM International Conference on AI-Powered Software, Co-located with: ESEC/FSE 2024

45-49
9798400706851 (ISBN)

1st ACM International Conference on AI-Powered Software, AIware 2024, co-located with the ACM International Conference on the Foundations of Software Engineering, FSE 2024
Porto de Galinhas, Brazil

Subject categories

Software Engineering

Human-Computer Interaction (interaction design)

Computer Science

DOI

10.1145/3664646.3664762

More information

Last updated

2024-09-17