Requirements and Attitudes towards Explainable AI in Law Enforcement
Paper in proceedings, 2024

Decision-making aided by Artificial Intelligence in high-stakes domains such as law enforcement must be informed and accountable. Thus, designing explainable artificial intelligence (XAI) for such settings is a key social concern. Yet, explanations are often misunderstood by end-users because they are overly technical or abstract. To address this, our study engaged with police employees in the Netherlands who are users of a text classifier. We found that, for them, usability and usefulness are of great importance in explanation design, whereas interpretability and understandability are valued less. Further, our work reports on how the design elements included in machine learning model explanations are interpreted. Drawing on these insights, we contribute recommendations that guide XAI system designers in catering to the specific needs of specialized users in high-stakes domains, and we suggest design considerations for machine learning model explanations aimed at domain experts.

Keywords

domain experts, overviews of data interpretation, explainable artificial intelligence, law enforcement, interviews

Authors

Elize Herrewijnen

Utrecht University

Meagan Loerakker

Chalmers University of Technology, Computer Science and Engineering, Interaction Design and Software Engineering

Marloes Vredenborg

Utrecht University

Paweł W. Woźniak

Chalmers University of Technology, Computer Science and Engineering, Interaction Design and Software Engineering

Vienna University of Technology

Published in

Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS 2024)
pp. 995-1009
ISBN: 9798400705830

Conference

2024 ACM Designing Interactive Systems Conference (DIS 2024)
Copenhagen, Denmark

Project

PAPACUI: Proficiency Awareness in Physical ACtivity User Interfaces
Swedish Research Council (VR) (2022-03196), 2023-01-01 to 2026-12-31.

Subject Categories

Human Computer Interaction

Computer Science

DOI

10.1145/3643834.3661629


Latest update: 8/13/2024