To detect and explain robot failures, we developed an architecture composed of robot and object profiles, behavior trees, assumption checkers, and large language models. Through human-subject evaluations, we found that Proactive explanation systems, which predictively detect and explain failures before they occur, are perceived as more intelligent and trustworthy, and are more understandable, timely, and preferred than Reactive explanation systems, which detect and explain failures only after they have occurred. These explanations were generated from explanation templates, which are often grammatically incorrect. Using LLMs, we can generate grammatically correct explanations and answer follow-up questions. To our surprise, the grammatically incorrect Templated explanations were perceived as similarly or more intelligent and trustworthy, more understandable, and preferred compared to the Generative explanations. We believe this is because the Generative explanations sometimes used technical terminology that novices did not fully understand, highlighting the need to adapt explanations to the user.
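As an illustrative sketch (not the published implementation), the proactive pattern can be pictured as an assumption checker that consults robot and object profiles before a behavior-tree action runs, filling an explanation template when a precondition fails. All class and field names below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectProfile:
    """Hypothetical object profile: properties the checker reasons over."""
    name: str
    weight_kg: float

@dataclass
class RobotProfile:
    """Hypothetical robot profile: capability limits of the platform."""
    name: str
    max_payload_kg: float

def check_pick_assumptions(robot: RobotProfile,
                           obj: ObjectProfile) -> Optional[str]:
    """Proactive check run before a 'pick' action node executes.

    Returns a templated explanation string if an assumption is violated
    (predicted failure), or None if all preconditions hold.
    """
    if obj.weight_kg > robot.max_payload_kg:
        # Explanation template: slots are filled from the two profiles.
        return (f"I cannot pick up the {obj.name} because it weighs "
                f"{obj.weight_kg} kg, which exceeds my payload limit of "
                f"{robot.max_payload_kg} kg.")
    return None  # all assumptions hold; safe to execute the action

# Example usage: the checker fires before the robot attempts the action.
robot = RobotProfile(name="arm", max_payload_kg=2.0)
heavy = ObjectProfile(name="toolbox", weight_kg=5.0)
explanation = check_pick_assumptions(robot, heavy)
```

A templated string like this could then be passed to an LLM for rephrasing into more natural language, which is the Templated-vs-Generative comparison the studies evaluate.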
Gregory LeMasurier, Christian Tagliamonte, Jacob Breen, Daniel Maccaline, and Holly A. Yanco. Templated vs. Generative: Explaining Robot Failures. In Proceedings of the 2024 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Pasadena, CA, August 2024. [Videos of conditions]
Gregory LeMasurier, Alvika Gautam, Zhao Han, Jacob W. Crandall, and Holly A. Yanco. Reactive or Proactive? How Robots Should Explain Failures. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’24), Boulder, CO, March 2024. (24.9% acceptance rate) [Videos of conditions]
Christian Tagliamonte*, Daniel Maccaline*, Gregory LeMasurier, and Holly A. Yanco. A Generalizable Architecture for Explaining Robot Failures Using Behavior Trees and Large Language Models. Late Breaking Report, In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’24), Boulder, CO, March 2024.
Gregory LeMasurier, Alvika Gautam, Zhao Han, Jacob W. Crandall, and Holly A. Yanco. “Why Didn’t I Do It?” A Study Design to Evaluate Robot Explanations. In the ACM/IEEE HRI 2022 Workshop “Workshop YOUR study design! Participatory critique and refinement of participants’ studies,” March 2022.
Zhao Han, Jordan Allspaw, Gregory LeMasurier, Jenna Parrillo, Daniel Giger, S. Reza Ahmadzadeh, and Holly A. Yanco. Towards Mobile Multi-Task Manipulation in a Confined and Integrated Environment with Irregular Objects. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2020, June 2020.