Design for integrating explainable AI for dynamic risk prediction in prehospital IT systems
Paper in proceedings, 2023

Demographic changes in the West, with an increasingly elderly population, put stress on current healthcare systems. New technologies are necessary to secure patient safety. AI development shows great promise in improving care, but how necessary it is to explain AI results, and how to do so, remains to be evaluated in future research. This study designed a prototype of eXplainable AI (XAI) in a prehospital IT system, based on an AI model for risk prediction of severe trauma, to be used by Emergency Medical Services (EMS) clinicians. The design was then evaluated with seven EMS clinicians to gather information about usability and AI interaction.

Through ethnography, expert interviews and a literature review, knowledge was gathered for the design. Several ideas were then developed through stages of prototyping and verified by experts in prehospital healthcare. Finally, a high-fidelity prototype was evaluated by the EMS clinicians. The primary design was based around a tablet, the most common hardware in ambulances. Two input pages were included, with the AI interface working as both an indicator at the top of the interface and a more detailed overlay. The overlay could be accessed at any time while interacting with the system. It included the current risk prediction, based on the colour codes of the South African Triage Scale (SATS), as well as a recommendation based on guidelines. This was followed by two rows of predictors, for or against a serious condition, ordered from left to right by importance. Beneath this, the most important missing variables were accessible, allowing for quick input.

The EMS clinicians thought that XAI was necessary for them to trust the prediction. They make the final decision, and if they cannot base it on specific parameters, they feel they cannot make a proper judgement.
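The overlay's ordering logic described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the predictor names, contribution values, and SATS cut-offs below are invented, and the signed contributions stand in for whatever attribution the underlying risk model would supply.

```python
# Hypothetical sketch of the overlay logic: signed predictor contributions
# are split into a "for" and an "against" row, each ordered left-to-right
# by decreasing importance, and the overall risk maps to a SATS-style
# colour. All names, values, and thresholds are illustrative.

SATS_COLOURS = ["green", "yellow", "orange", "red"]  # increasing acuity


def sats_colour(risk: float) -> str:
    """Map a risk score in [0, 1] to a SATS-style colour code."""
    thresholds = [0.25, 0.5, 0.75]  # illustrative cut-offs, not from SATS
    for colour, t in zip(SATS_COLOURS, thresholds):
        if risk < t:
            return colour
    return SATS_COLOURS[-1]


def split_predictors(contributions: dict) -> tuple:
    """Separate predictors into rows for/against a serious condition,
    each sorted by decreasing absolute contribution."""
    row_for = sorted(
        ((name, v) for name, v in contributions.items() if v > 0),
        key=lambda kv: -abs(kv[1]),
    )
    row_against = sorted(
        ((name, v) for name, v in contributions.items() if v <= 0),
        key=lambda kv: -abs(kv[1]),
    )
    return row_for, row_against


# Invented example inputs for one patient assessment.
contribs = {
    "low blood pressure": 0.30,
    "high energy impact": 0.15,
    "normal GCS": -0.20,
    "age < 40": -0.05,
}
row_for, row_against = split_predictors(contribs)
```

Splitting by sign before sorting mirrors the described layout: the clinician sees the strongest evidence for a serious condition at the left of one row and the strongest evidence against at the left of the other.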
In addition, both the rows of predictors and the missing variables served as reminders of what they might have missed in the patient assessment, which the EMS clinicians stated is a common issue. Receiving an AI prediction that differed from their own could prompt them to reconsider their decision, moving it away from the normally relatively automatic process and likely reducing the risk of bias.

While focused on trauma, the overall design was created to accommodate other AI models as well. Current models for risk prediction in ambulances have so far shown little benefit from artificial neural networks (ANN) compared to more transparent models. This study can help guide the future development of AI for prehospital healthcare and give insights into the potential benefits and implications of its implementation.

usability

IT system

risk prediction

ambulance

healthcare

AI

explainable AI

decision making

interaction design

XAI

UX

tablet

Author

David Wallstén

Chalmers, Computer Science and Engineering (Chalmers), Interaction Design and Software Engineering

Gregory Axton

Chalmers, Computer Science and Engineering (Chalmers), Interaction Design and Software Engineering

Eunji Lee

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Anna Bakidou

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Bengt-Arne Sjöqvist

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Stefan Candefjord

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Artificial Intelligence, Social Computing and Wearable Technologies

Vol. 113, pp. 268-278
978-1-958651-89-6 (ISBN)

Human Factors in Design, Engineering, and Computing for All. (AHFE 2023 Hawaii Edition)
Honolulu, USA

Areas of Advance

Information and Communication Technology

Subject Categories

Medical Engineering

Electrical Engineering, Electronic Engineering, Information Engineering

Computer Science

DOI

10.54941/ahfe1004199

More information

Latest update

9/5/2024