Explainable and Interpretable Decision-Making for Robotic Tasks
Licentiate thesis, 2022
In this thesis, we consider different types of failures, such as task recognition errors and task execution failures. Our first goal is an interpretable approach to learning from demonstration (LfD), which allows robots to learn new tasks from humans without a time-consuming trial-and-error process. Our proposed method addresses the challenge of transferring human demonstrations to robots through the automated generation of symbolic planning operators based on interpretable decision trees. Our second goal is the prediction, explanation, and prevention of robot task execution failures based on causal models of the environment. Our contribution towards this goal is a causal-based method that finds contrastive explanations for robot execution failures, enabling robots to predict, explain, and prevent even temporally shifted action failures (e.g., the current action succeeds but negatively affects the success of future actions). Since learning causal models is data-intensive, our final goal is to improve data efficiency by utilizing prior experience. This investigation aims to help robots learn causal models faster, enabling them to provide failure explanations after fewer action execution experiments.
In the future, we will work on scaling up the presented methods to generalize to more complex, human-centered applications.
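To illustrate the contrastive-explanation idea described in the abstract, the following is a minimal sketch. The variables, values, and probabilities are hypothetical stand-ins for a learned causal model of a cube-stacking action; they are not taken from the thesis. A contrastive explanation is read here as the smallest set of variable changes that would have made the action likely to succeed:

```python
from itertools import combinations, product

def contrastive_explanation(p_success, observed, domains, threshold=0.8):
    """Return the smallest set of variable changes to `observed` that
    raises the success probability to at least `threshold`."""
    variables = list(domains)
    for size in range(1, len(variables) + 1):  # prefer minimal interventions
        for subset in combinations(variables, size):
            for values in product(*(domains[v] for v in subset)):
                changes = {v: val for v, val in zip(subset, values)
                           if observed[v] != val}
                if len(changes) != size:
                    continue  # skip assignments that change nothing
                counterfactual = {**observed, **changes}
                if p_success(counterfactual) >= threshold:
                    return changes
    return None  # no intervention makes success likely

# Hypothetical success model for a cube-stacking action (a stand-in for a
# learned causal Bayesian network; all numbers are made up).
def p_success(state):
    table = {('small', 'firm'): 0.95, ('small', 'loose'): 0.60,
             ('large', 'firm'): 0.20, ('large', 'loose'): 0.05}
    return table[(state['offset'], state['grasp'])]

observed = {'offset': 'large', 'grasp': 'loose'}  # a failed execution
domains = {'offset': ['small', 'large'], 'grasp': ['firm', 'loose']}
explanation = contrastive_explanation(p_success, observed, domains)
print(explanation)  # here both variables must change before success is likely
```

In the thesis's setting, the success probabilities would come from a causal model learned from execution experience rather than a hand-written table; the search above mirrors the idea of finding the closest variable parametrization under which the action would have succeeded.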
Failure explanation
Causality
Explainability
Interpretability
Author
Maximilian Diehl
Chalmers, Electrical Engineering, Systems and control
Why Did I Fail? A Causal-Based Method to Find Explanations for Robot Failures
IEEE Robotics and Automation Letters, Vol. In Press (2022)
Journal article
Automated Generation of Robotic Planning Domains from Observations
IEEE International Conference on Intelligent Robots and Systems (2021), p. 6732-6738
Paper in proceeding
Diehl Maximilian, Ramirez-Amaro Karinne, “A Causal-based Approach to Explain, Predict and Prevent Failures in Robotic Tasks”. Conditionally accepted with minor revisions to Robotics and Autonomous Systems (RAS), Elsevier, 2022.
Diehl Maximilian, Ramirez-Amaro Karinne, “Transferable Priors for Bayesian Network Parameter Estimation in Robotic Tasks”. Submitted to IEEE International Conference on Robotics and Automation (ICRA), 2023.
Learning & Understanding Human-Centered Robotic Manipulation Strategies
Chalmers AI Research Centre (CHAIR), 2020-01-13 -- 2025-01-14.
Areas of Advance
Information and Communication Technology
Subject Categories
Information Science
Robotics
Computer Science
Publisher
Chalmers
EC
Opponent: Daniel Leidner, DLR (German Aerospace Center), Germany