Explainable AI: A Diverse Stakeholder Perspective
Paper in proceedings, 2024

Artificial Intelligence (AI) is increasingly integral to classification and prediction tasks across various fields, including healthcare, legal systems, autonomous vehicles, and financial services [1]. As such, stakeholders such as system developers, system operators, and end-users require varying levels of explanation for the decisions proposed by these AI systems in order to trust them, rely on them, and use them in practice. The growing reliance on AI as a decision-support tool in these critical areas underscores the need for AI systems whose development process, architecture, and decisions are explainable and comprehensible to their users, ensuring that their use is safe, responsible, and compliant with legal standards.

Authors

Umm-E-Habiba

University of Stuttgart

Khan Mohammad Habibullah

Software Engineering 1

University of Gothenburg

Proceedings of the IEEE International Conference on Requirements Engineering

1090-705X (ISSN), 2332-6441 (eISSN)

pp. 494-495
979-8-3503-9511-2 (ISBN)

32nd IEEE International Requirements Engineering Conference (RE 2024), Reykjavik, Iceland

Subject Categories

Computer and Information Science

Environmental Engineering

DOI

10.1109/RE59067.2024.00060
