Requirements and Attitudes towards Explainable AI in Law Enforcement
Paper in proceeding, 2024

Decision-making aided by Artificial Intelligence in high-stakes domains such as law enforcement must be informed and accountable. Thus, designing explainable artificial intelligence (XAI) for such settings is a key social concern. Yet, explanations are often misunderstood by end-users due to being overly technical or abstract. To address this, our study engaged with police employees in the Netherlands, who are users of a text classifier. We found that for them, usability and usefulness are of great importance in explanation design, whereas interpretability and understandability are less valued. Further, our work reports on how design elements included in machine learning model explanations are interpreted. Drawing from these insights, we contribute recommendations that guide XAI system designers to cater to the specific needs of specialized users in high-stakes domains and suggest design considerations for machine learning model explanations aimed at domain experts.

domain experts

overviews of data interpretation

Explainable artificial intelligence

law enforcement

interviews

Authors

Elize Herrewijnen

Universiteit Utrecht

Meagan Loerakker

Chalmers, Computer Science and Engineering, Interaction Design and Software Engineering

Marloes Vredenborg

Universiteit Utrecht

Paweł W. Woźniak

Chalmers, Computer Science and Engineering, Interaction Design and Software Engineering

Technische Universität Wien

Proceedings of the 2024 ACM Designing Interactive Systems Conference, DIS 2024

995-1009
9798400705830 (ISBN)

2024 ACM Designing Interactive Systems Conference, DIS 2024
Copenhagen, Denmark

PAPACUI: Skill-based adaptation in interactive systems for physical activity

Swedish Research Council (VR) (2022-03196), 2023-01-01 -- 2026-12-31.

Subject categories

Human-computer interaction (interaction design)

Computer science

DOI

10.1145/3643834.3661629

More information

Last updated

2024-08-13