What Impact Do My Preferences Have?
Paper in proceedings, 2024
[Context and motivation]
Successful human-robot collaboration requires that humans can express their requirements and that they comprehend the decisions that robots make. Requirements in this context are often related to potentially conflicting quality objectives, such as performance, security, or safety. Humans tend to have preferences regarding how important different objectives are at different points in time.
[Question/problem]
Currently, preferences are often expressed based on assumptions about what importance level should be assigned to a quality objective at runtime. To assign meaningful preferences to quality objectives, humans need to understand the impact of these preferences on the behavior of a robot. To the best of our knowledge, no framework yet supports the explanation-based elicitation of quality preferences.
[Principal ideas/results]
To address these needs, we have developed OBJUST, a framework that helps with the interactive elicitation of preferences for robot mission planning.
[Contribution]
The framework relies on the specification of human preferences and contrastive explanations. We evaluated our framework in a study with 7 participants. Our results indicate that the visual and textual explanations of the generated robotic mission plans help humans better understand the impact of their preferences, which can facilitate the elicitation process.
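For concreteness, below is a minimal sketch of the general idea, assuming a weighted-sum preference model and a simple plan representation. It is not OBJUST's actual preference language or explanation mechanism; all names and structures are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's API): preferences over quality
# objectives as weights, and a textual contrastive explanation of why
# one mission plan was chosen over an alternative.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    # Estimated outcome per quality objective (higher is better).
    outcomes: dict[str, float]

def utility(plan: Plan, weights: dict[str, float]) -> float:
    """Weighted sum of a plan's outcomes under the human's preferences."""
    return sum(weights[obj] * value for obj, value in plan.outcomes.items())

def contrast(chosen: Plan, alternative: Plan, weights: dict[str, float]) -> str:
    """Contrastive explanation: why 'chosen' rather than 'alternative'."""
    lines = [f"Why {chosen.name} instead of {alternative.name}?"]
    for obj in weights:
        delta = chosen.outcomes[obj] - alternative.outcomes[obj]
        sign = "+" if delta >= 0 else ""
        lines.append(f"  {obj}: {sign}{delta:.2f} (weight {weights[obj]:.2f})")
    lines.append(
        f"  overall utility: {utility(chosen, weights):.2f} "
        f"vs {utility(alternative, weights):.2f}"
    )
    return "\n".join(lines)

# Preferences: safety matters most at this point in the mission.
weights = {"performance": 0.2, "safety": 0.6, "energy": 0.2}
fast = Plan("fast-route", {"performance": 0.9, "safety": 0.4, "energy": 0.5})
safe = Plan("safe-route", {"performance": 0.6, "safety": 0.9, "energy": 0.6})

best = max((fast, safe), key=lambda p: utility(p, weights))
other = fast if best is safe else safe
print(contrast(best, other, weights))
```

In this toy model, adjusting the weights and regenerating the contrastive output mimics the feedback loop the paper describes: the human sees how a preference change alters which plan is selected and why.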
contrastive explanation
elicitation
quality attributes
robot mission planning
Authors
Rebekka Wohlrab
Software Engineering 1
Michael Vierhauser
University of Innsbruck
Erik Nilsson
University of Gothenburg
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
0302-9743 (ISSN), 1611-3349 (eISSN)
Vol. 14588 LNCS, pp. 111-128
978-3-031-57326-2 (ISBN)
Winterthur, Switzerland
Subject categories
Economics
Software Engineering
Robotics and Automation
Computer Science
DOI
10.1007/978-3-031-57327-9_7