Marie Bauer, Julia Gachot, Matthias Kerzel, Cornelius Weber, Stefan Wermter
This paper proposes integrating Theory of Mind (ToM) into Explainable AI (XAI) frameworks to improve human-robot interaction by making robots more user-friendly and their actions more interpretable.
The paper explores how robots can better understand and interact with humans through Theory of Mind (ToM): the capacity to attribute mental states such as beliefs, intentions, and feelings to others. A robot equipped with ToM can infer what a user might be thinking or expecting and respond accordingly. The authors suggest that ToM can also make robots more explainable, helping people understand why a robot acts the way it does. They argue that current XAI methods pay too little attention to the user's perspective and propose that integrating ToM into explainable AI could close this gap, making interactions with robots more intuitive and user-friendly.
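To make the idea concrete, the sketch below is a purely illustrative toy example, not code or a design from the paper; every name, data structure, and scenario in it is an assumption. It shows one simple way a robot could keep a first-order model of what a user has already observed and use that model to decide which of its reasons are worth explaining, so the explanation fills the user's knowledge gap instead of restating what they already know.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Toy first-order Theory of Mind: facts we assume the user already knows."""
    observed_facts: set = field(default_factory=set)

@dataclass
class RobotDecision:
    """A robot action together with the facts that motivated it."""
    action: str
    reasons: list  # facts the robot used, ordered by importance

def tom_informed_explanation(decision: RobotDecision, user: UserModel) -> str:
    """Explain the action using only the reasons the user is assumed not to know."""
    unknown = [r for r in decision.reasons if r not in user.observed_facts]
    if not unknown:
        return f"I chose to {decision.action} for reasons you already observed."
    return f"I chose to {decision.action} because " + " and ".join(unknown) + "."

# Hypothetical scenario: the user saw the spill but not the blocked corridor.
user = UserModel(observed_facts={"there is a spill in aisle 3"})
decision = RobotDecision(
    action="take the longer route",
    reasons=["the main corridor is blocked", "there is a spill in aisle 3"],
)
print(tom_informed_explanation(decision, user))
# -> "I chose to take the longer route because the main corridor is blocked."
```

The design point this toy captures is the one the summary makes: the explanation is selected from the user's perspective (what they have not yet seen) rather than from the robot's internal decision trace alone.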