Explainable and Interpretable Methods for Handling Robot Task Failures
Doctoral thesis, 2025

Robots are increasingly deployed in dynamic human environments. To avoid failures during task execution, such as setting a table, they must adapt to unexpected changes. This is challenging because robots must proactively predict failures and identify controllable factors to prevent them. If failures cannot be autonomously prevented, robots should explain failure causes, which is challenging because explanations should cater to non-expert users. The first goal of this thesis is therefore to enhance the reliability and explainability of robots by predicting, explaining, and preventing task execution failures using symbolic causal models. We introduce a novel framework for learning causal models from simulated data. To improve the transferability of the causal models between tasks, we propose three parameter transfer methods that leverage the semantic similarities between models. To enhance failure prediction, we propose a novel approach that combines the learned causal models with a breadth-first search procedure for proactive failure prediction and contrastive failure explanation. We validate this approach on object manipulation tasks, such as stacking cubes, achieving a 95% failure prevention rate. We then extend the method to predict human perceptions of a navigation robot's competence and improve its behavior, resulting in a 72% increase in perceived competence.
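As a minimal illustration of how a learned causal model can drive proactive failure prevention via breadth-first search, the sketch below pairs a toy conditional probability table for a cube-stacking action with a BFS over single-variable interventions. The variable names, probabilities, and acceptance threshold are all hypothetical, not taken from the thesis.

```python
from collections import deque

# Toy conditional probability table P(failure | offset, drop_height) for a
# cube-stacking action; the values are illustrative only.
CPT = {
    ("small", "low"): 0.05,
    ("small", "high"): 0.40,
    ("large", "low"): 0.55,
    ("large", "high"): 0.90,
}
DOMAINS = {"offset": ["small", "large"], "drop_height": ["low", "high"]}
THRESHOLD = 0.2  # accept a plan only if predicted P(failure) is below this

def p_failure(state):
    """Query the (toy) causal model for the failure probability."""
    return CPT[(state["offset"], state["drop_height"])]

def find_prevention(state):
    """Breadth-first search over variable interventions: assignments
    requiring the fewest changes to the current state are tried first,
    and the first one with an acceptable failure probability is returned."""
    queue = deque([state])
    seen = {tuple(sorted(state.items()))}
    while queue:
        s = queue.popleft()
        if p_failure(s) < THRESHOLD:
            return s
        for var, values in DOMAINS.items():
            for v in values:
                if v != s[var]:
                    child = dict(s, **{var: v})
                    key = tuple(sorted(child.items()))
                    if key not in seen:
                        seen.add(key)
                        queue.append(child)
    return None  # no intervention brings the risk below the threshold

plan = find_prevention({"offset": "large", "drop_height": "high"})
# → {"offset": "small", "drop_height": "low"}
```

Because BFS expands states in order of distance from the current situation, the returned intervention is also a natural basis for a contrastive explanation ("the stack would succeed if the offset were small and the drop height low").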

Another common failure is missing capabilities that hinder a robot from achieving its task goal. The second thesis goal is therefore to enable non-experts to assist by teaching robots the missing actions intuitively, without coding experience. We propose a novel demonstration system that lets users teach tasks in Virtual Reality. Our system automatically segments and classifies the demonstrations, generating symbolic, robot-agnostic actions that integrate into the robot's existing capabilities. Our approach achieves a 92% success rate in learning task abstractions from a single demonstration in single- and multi-agent tasks. Additionally, our approach enables robots to detect missing actions automatically, allowing users to demonstrate only the missing parts instead of the entire task, reducing demonstration time by 61%.
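A minimal sketch of the segmentation idea, assuming (hypothetically) that a VR demonstration arrives as a log of gripper events: cutting the log at gripper-state changes yields a sequence of symbolic, robot-agnostic actions. The event format and the pick/place vocabulary below are illustrative, not the system's actual representation.

```python
def segment(events):
    """Segment a demonstration log into symbolic actions.

    events: list of (timestamp, gripper_closed: bool, nearest_object) tuples.
    A closed->open or open->closed transition marks an action boundary.
    """
    actions = []
    prev_closed = False
    for t, closed, obj in events:
        if closed and not prev_closed:      # gripper just closed: a grasp
            actions.append(("pick", obj))
        elif not closed and prev_closed:    # gripper just opened: a release
            actions.append(("place", obj))
        prev_closed = closed
    return actions

demo = [(0.0, False, None), (1.2, True, "cube_red"),
        (3.5, False, "cube_blue"), (4.0, True, "cube_green"),
        (6.1, False, "cube_red")]
# segment(demo) → [("pick", "cube_red"), ("place", "cube_blue"),
#                  ("pick", "cube_green"), ("place", "cube_red")]
```

Because the output refers only to objects and abstract actions, not to joint trajectories, the same segmentation can be mapped onto different robots' existing capabilities.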

The presented contributions enable robots to handle dynamic environments more reliably and explainably while continuously expanding their capabilities to adapt to new challenges.

Keywords: Failure Explanations, Robot Task Planning, Causality

EB lecture hall, EDIT building, Hörsalsvägen 11, Chalmers University of Technology, Gothenburg, Sweden
Opponent: Prof. Lars Kunze, University of the West of England, Bristol, UK

Author

Maximilian Diehl

Chalmers, Electrical Engineering, Systems and Control

Automated Generation of Robotic Planning Domains from Observations

IEEE International Conference on Intelligent Robots and Systems (2021), pp. 6732-6738

Paper in proceeding

Why Did I Fail? A Causal-Based Method to Find Explanations for Robot Failures

IEEE Robotics and Automation Letters, Vol. 7 (2022), pp. 8925-8932

Journal article

A causal-based approach to explain, predict and prevent failures in robotic tasks

Robotics and Autonomous Systems, Vol. 162 (2023)

Journal article

Generating and Transferring Priors for Causal Bayesian Network Parameter Estimation in Robotic Tasks

IEEE Robotics and Automation Letters, Vol. 9 (2024), pp. 1011-1018

Journal article

Learning Robot Skills From Demonstration for Multi-Agent Planning

IEEE International Conference on Automation Science and Engineering (2024), pp. 2348-2355

Paper in proceeding

Enabling Robots to Identify Missing Steps in Robot Tasks for Guided Learning from Demonstration

Proceedings of the 2025 IEEE/SICE International Symposium on System Integration (SII), pp. 43-48

Paper in proceeding

Diehl Maximilian, Tsoi Nathan, Chavez Gustavo, Ramirez-Amaro Karinne, Vázquez Marynel, “A Causal Approach to Predicting and Improving Human Perceptions of Social Navigation Robots”

As robots become more integrated into daily life, assisting with household chores, supporting healthcare, and more, they must be able to operate in dynamic human environments. Unlike controlled factory settings, these environments are constantly changing, which can lead to failures that prevent robots from completing their tasks.

To function reliably, robots should anticipate and prevent failures before they occur by adjusting their actions proactively. When failures are unavoidable, they must be able to explain what went wrong in a way that is understandable to non-experts. The first goal of this thesis is, therefore, to enhance the reliability and explainability of robots by enabling them to predict, prevent, and explain failures. It introduces causal reasoning techniques to improve failure prediction and prevention while generating contrastive explanations for non-experts.

Another common issue is that robots may lack the necessary capabilities to achieve their goals. To address this, the second goal of this thesis is to enable non-experts to teach robots missing actions through demonstrations. To this end, we introduce a Virtual Reality-based teaching system, allowing users to intuitively demonstrate tasks without requiring programming expertise.

By improving failure handling and expanding robot capabilities, this work contributes to making robots more adaptable, reliable, and effective assistants in everyday environments.

Subject Categories (SSIF 2025)

Robotics and automation

ISBN

978-91-8103-177-5

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie: 5635

Publisher

Chalmers




More information

Latest update

2/19/2025