Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models
Published in Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023
Recommended citation: Lee, Y. K., Jung, Y., Kang, K., & Hahn, S. (2023). Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models. Late Breaking Report at the 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).
A social robot must establish trust so that humans feel comfortable sharing their feelings and thoughts. This requires the robot to convey a comprehensive understanding of the user through in-depth communication, which in turn demands active listening and the use of various non-verbal social cues (e.g., gestures). This is especially challenging for robots, which must combine spoken communication with non-verbal social cues, such as body language and speech tone, to gain user trust.
Although a significant body of human-robot interaction (HRI) research has explored how non-verbal cues improve communication with humans, how to incorporate these cues into robots’ cognitive systems is less examined, especially within natural language processing (NLP) and artificial intelligence (AI). This gap leads to our central question: can we enhance empathy and active listening in such scenarios by developing a unified cognitive system that incorporates a large language model and proposes suitable non-verbal cues for a given situation?
We propose augmenting the empathetic capacities of social robots by integrating non-verbal cues. Our primary contribution is the design and labeling of four types of empathetic non-verbal cues in a social robot, abbreviated as SAFE: Speech, Action (gesture), Facial expression, and Emotion.
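To make this concrete, the sketch below illustrates one way such a unified cognitive system could prompt a large language model to propose SAFE cues for a user utterance. It is a minimal sketch under our own assumptions: the class SafeCues, the function propose_safe_cues, the prompt wording, and the example cue labels are all hypothetical and do not come from the paper.

```python
# Hypothetical sketch: prompting an LLM for SAFE-labeled non-verbal cues.
# SafeCues, propose_safe_cues, query_llm, and the cue vocabulary are
# illustrative assumptions, not the authors' published interface.
import json
from dataclasses import dataclass


@dataclass
class SafeCues:
    """One SAFE annotation: Speech, Action (gesture), Facial expression, Emotion."""
    speech: str   # what the robot says (tone-appropriate wording)
    action: str   # gesture label, e.g. "nod", "lean_forward"
    facial: str   # facial-expression label, e.g. "concerned"
    emotion: str  # underlying emotion label, e.g. "empathy"


PROMPT = (
    "You are the cognitive system of an empathetic social robot.\n"
    "Given the user's utterance, reply with a JSON object whose keys are\n"
    '"speech", "action", "facial", and "emotion" (the SAFE cues).\n'
    "User: {utterance}\n"
)


def propose_safe_cues(utterance: str, query_llm) -> SafeCues:
    """Ask the language model for SAFE cues and parse its JSON reply.

    `query_llm` is a hypothetical callable (prompt: str) -> str standing in
    for whichever LLM API the robot uses.
    """
    reply = query_llm(PROMPT.format(utterance=utterance))
    fields = json.loads(reply)
    return SafeCues(**fields)


if __name__ == "__main__":
    # Stubbed model so the sketch runs without any API key.
    def fake_llm(prompt: str) -> str:
        return json.dumps({
            "speech": "That sounds really hard. I'm here to listen.",
            "action": "lean_forward",
            "facial": "concerned",
            "emotion": "empathy",
        })

    print(propose_safe_cues("I failed my exam today.", fake_llm))
```

Parsing the model's reply into a fixed SAFE structure keeps the downstream speech, gesture, and facial-expression controllers decoupled from the language model itself.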