Why Did I Fail? A Causal-Based Method to Find Explanations for Robot Failures
Journal article, 2022

Robot failures in human-centered environments are inevitable. The ability of robots to explain such failures is therefore paramount for increasing trust and transparency in human-robot interaction. To achieve this, the paper addresses two main challenges: (I) acquiring enough data to learn a cause-effect model of the environment and (II) generating causal explanations based on the obtained model. We address (I) by learning a causal Bayesian network from simulation data. Concerning (II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation contrasts the failure state with the closest state that would have allowed for successful execution; this state is found through breadth-first search, guided by success predictions from the learned causal model. We assessed our method in two scenarios: (I) stacking cubes and (II) dropping spheres into a container. The obtained causal models reach a sim2real accuracy of 70% and 72%, respectively. Finally, we show that our method scales over multiple tasks and allows real robots to give failure explanations such as “the upper cube was stacked too high and too far to the right of the lower cube.”
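The abstract's explanation mechanism lends itself to a compact sketch: search outward from the failure state until the learned model predicts success, then phrase the difference as a contrast. The Python example below is a minimal illustration under stated assumptions; the variable names, bins, and the `predict_success` stand-in (which in the paper would be a success query against the learned causal Bayesian network) are hypothetical, not the authors' actual encoding.

```python
from collections import deque

# Hypothetical discretized state space: each variable takes one of a few
# ordinal bins. These names and bins are illustrative assumptions.
VARIABLES = {
    "x_offset": [-2, -1, 0, 1, 2],   # bins left .. right of the lower cube
    "y_offset": [-2, -1, 0, 1, 2],   # bins front .. back
    "drop_height": [0, 1, 2, 3],     # bins low .. high
}

def predict_success(state):
    """Stand-in for querying the learned causal Bayesian network,
    e.g. P(success | state) > 0.5. Purely illustrative thresholds."""
    return (abs(state["x_offset"]) <= 1 and abs(state["y_offset"]) <= 1
            and state["drop_height"] <= 1)

def neighbours(state):
    """All states reachable by moving one variable one bin up or down."""
    for var, bins in VARIABLES.items():
        idx = bins.index(state[var])
        for step in (-1, 1):
            if 0 <= idx + step < len(bins):
                nxt = dict(state)
                nxt[var] = bins[idx + step]
                yield nxt

def closest_successful_state(failure_state):
    """Breadth-first search from the failure state: the first state the
    model predicts as successful is, by construction of BFS, one of the
    states with the fewest single-bin changes -- the contrast case."""
    seen = {tuple(sorted(failure_state.items()))}
    queue = deque([failure_state])
    while queue:
        state = queue.popleft()
        if predict_success(state):
            return state
        for nxt in neighbours(state):
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                queue.append(nxt)
    return None

def explain(failure_state):
    """Contrast the failure with the closest success, variable by variable."""
    success = closest_successful_state(failure_state)
    if success is None:
        return "no successful contrast state found"
    diffs = [f"{v} was {failure_state[v]} but should have been {success[v]}"
             for v in VARIABLES if failure_state[v] != success[v]]
    return "; ".join(diffs)

if __name__ == "__main__":
    # e.g. the upper cube was stacked too far right and dropped too high
    print(explain({"x_offset": 2, "y_offset": 0, "drop_height": 3}))
```

Run as-is, this sketch prints an explanation like “x_offset was 2 but should have been 1; drop_height was 3 but should have been 1”, mirroring the structure of the paper's example sentence about the upper cube.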

Keywords

Task analysis

Acceptability and trust

Stacking

Robot sensing systems

Probabilistic inference

Planning

Learning from experience

Robots

Data models

Bayes methods

Authors

Maximilian Diehl

Chalmers, Electrical Engineering, Systems and Control

Karinne Ramirez-Amaro

Chalmers, Electrical Engineering, Systems and Control

IEEE Robotics and Automation Letters

2377-3766 (eISSN)

Vol. 7, no. 4, pp. 8925-8932

Subject categories

Human-computer interaction (interaction design)

Robotics and automation

DOI

10.1109/LRA.2022.3188889

More information

Last updated

2024-03-07