Explainable and Interpretable Decision-Making for Robotic Tasks
Licentiate thesis, 2022

Future generations of robots, such as service robots that support humans with household tasks, will be a pervasive part of our daily lives. The human's ability to understand the decision-making process of robots is crucial for establishing trust-based and efficient interactions between humans and robots. In this thesis, we present several interpretable and explainable decision-making methods that aim to improve the human's understanding of a robot's actions, with a particular focus on explaining why robot failures occur.
In this thesis, we consider different types of failures, such as task recognition errors and task execution failures. Our first goal is an interpretable approach to learning from human demonstrations (LfD), which is essential for robots to learn new tasks without a time-consuming trial-and-error learning process. Our proposed method addresses the challenge of transferring human demonstrations to robots through the automated generation of symbolic planning operators based on interpretable decision trees. Our second goal is the prediction, explanation, and prevention of robot task execution failures based on causal models of the environment. Our contribution towards this goal is a causal-based method that finds contrastive explanations for robot execution failures, which enables robots to predict, explain, and prevent even temporally shifted action failures (e.g., the current action is successful but will negatively affect the success of future actions). Since learning causal models is data-intensive, our final goal is to improve data efficiency by utilizing prior experience. This investigation aims to help robots learn causal models faster, enabling them to provide failure explanations with fewer action execution experiments.
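To illustrate the idea of contrastive failure explanations from a causal model, the following is a minimal, hypothetical sketch. The task variables (`drop_offset`, `drop_height`), their discretized domains, and the success probabilities are illustrative assumptions, not the thesis's actual model; the real method learns such a causal model from execution experiments.

```python
# Hypothetical sketch of a contrastive failure explanation over a toy
# causal model of a cube-stacking action. All variables, domains, and
# probabilities are illustrative assumptions, not the thesis's model.
from itertools import product

# Discretized state variables of the action.
DOMAINS = {
    "drop_offset": ["small", "medium", "large"],  # horizontal release offset
    "drop_height": ["low", "high"],               # release height above stack
}

def p_success(state):
    """Illustrative conditional probability of action success given the state."""
    p = 0.95
    p -= {"small": 0.0, "medium": 0.3, "large": 0.7}[state["drop_offset"]]
    p -= {"low": 0.0, "high": 0.25}[state["drop_height"]]
    return max(p, 0.0)

def contrastive_explanation(observed, threshold=0.8):
    """Find the closest alternative state predicted to succeed.

    The returned dict reads as a contrastive explanation, e.g.:
    'the action failed because drop_offset was large;
    had it been small, success would have been likely'."""
    candidates = []
    for values in product(*DOMAINS.values()):
        alt = dict(zip(DOMAINS, values))
        if p_success(alt) >= threshold:
            diff = {k: (observed[k], alt[k]) for k in alt if alt[k] != observed[k]}
            candidates.append(diff)
    # Prefer the explanation that changes the fewest variables.
    return min(candidates, key=len) if candidates else None

failed_state = {"drop_offset": "large", "drop_height": "low"}
print(contrastive_explanation(failed_state))
# → {'drop_offset': ('large', 'small')}
```

In this toy example, only the drop offset needs to change for the model to predict success, so it alone forms the contrastive explanation; the same search over a learned causal model also supports prediction and prevention, by checking candidate actions against the success threshold before executing them.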
In the future, we will work on scaling up the presented methods to generalize to more complex, human-centered applications.

Failure explanation

Causality

Explainability

Interpretability

EC
Opponent: Daniel Leidner, DLR (German Aerospace Center), Germany

Author

Maximilian Diehl

Chalmers, Electrical Engineering, Systems and control

Why Did I Fail? A Causal-Based Method to Find Explanations for Robot Failures

IEEE Robotics and Automation Letters, Vol. In Press (2022)

Journal article

Automated Generation of Robotic Planning Domains from Observations

IEEE International Conference on Intelligent Robots and Systems (2021), p. 6732-6738

Paper in proceeding

Diehl Maximilian, Ramirez-Amaro Karinne, “A Causal-based Approach to Explain, Predict and Prevent Failures in Robotic Tasks”. Conditionally accepted with minor revisions to Robotics and Autonomous Systems (RAS), Elsevier, 2022.

Ramirez-Amaro Karinne, “Transferable Priors for Bayesian Network Parameter Estimation in Robotic Tasks”. Submitted to IEEE International Conference on Robotics and Automation (ICRA), 2023.

Learning & Understanding Human-Centered Robotic Manipulation Strategies

Chalmers AI Research Centre (CHAIR), 2020-01-13 -- 2025-01-14.

Areas of Advance

Information and Communication Technology

Subject Categories

Information Science

Robotics

Computer Science

Publisher

Chalmers


More information

Latest update

10/26/2023