Models with verbally enunciated explanations: Towards safe, accountable, and trustworthy artificial intelligence
Paper in proceedings, 2024
We propose models with verbally enunciated explanations: AI systems that are capable of generating a verbally enunciated explanation of their actions, such that the explanation is correct by construction. The possibility of obtaining a human-understandable, verbal
explanation of any action or decision taken by an AI system is highly desirable, and is becoming increasingly
important at a time when many AI systems operate as inscrutable black boxes. We describe the desirable
properties of the proposed systems, contrasting them with existing AI approaches, and we discuss their limitations
and possible applications. While the discussion is mostly held in general terms, we also provide a specific
example of a completed system, as well as a few examples of ongoing and future work.
interpretability
artificial intelligence
accountability and safety
Author
Mattias Wahde
Chalmers, Mechanics and Maritime Sciences (M2), Vehicle Engineering and Autonomous Systems
International Conference on Agents and Artificial Intelligence
2184-3589 (ISSN), 2184-433X (eISSN)
Vol. 3, p. 101-108, 978-989-758-680-4 (ISBN)
Rome, Italy
Subject Categories
Other Computer and Information Science
Language Technology (Computational Linguistics)
Philosophy
Computer Science
Areas of Advance
Transport
Roots
Basic sciences
DOI
10.5220/0012307100003636