Multidimensional Interpretable Learning for Experience-Driven Collaborative Robots
Research Project, 2026 – 2029

Collaborative Robots (Cobots) are expected to work in dynamic environments, make informed decisions, and explain their reasoning—particularly when facing novel situations. To achieve this level of autonomy, several open challenges must be addressed: (1) extracting high-level human intentions, (2) automatically learning and storing experiences, and (3) selecting appropriate collaborative actions while adapting in real time. This project introduces a novel, interpretable framework that unifies symbolic reasoning, memory-based learning, and adaptive control.

We propose a multidimensional learning architecture in which Cobots can decompose collaborative tasks, reason over their structure, and ground symbolic plans in executable skills. Our model integrates spatio-temporal planning operators and a dynamic memory system to help robots anticipate failures and recover based on prior experiences. By linking high-level intentions with low-level control, Cobots will be able to adapt to unexpected situations and support human teammates more effectively.

This general-purpose approach will be validated in household and industrial use cases, offering a new level of explainable, experience-driven autonomy with broad societal impact.

Participants

Karinne Ramirez-Amaro (contact)

Chalmers, Electrical Engineering, Systems and control

Funding

Swedish Research Council (VR)

Project ID: 2025-06377
Funding Chalmers participation during 2026–2029

Latest update

12/29/2025