Nicholas Georgiou successfully defends thesis

February 23, 2026

Lab member Nicholas Georgiou defended his dissertation “Investigating Human Perceptions and Responses to Robot Failures Across Functional, Social, and Moral Contexts in Human-Robot Interactions” today.

Abstract:

Robots are increasingly becoming a part of human environments, where they have the potential to assist people in their daily lives in an abundance of ways. However, when robots interact within these settings, they are bound to fail at some point. A robot may fail for a wide variety of reasons: an uncertainty in its sensor readings, an inaccuracy in its world model, or the unpredictability of human behavior in its workspace. To design robots that can appropriately interact within environments filled with people, it is critical to understand how people perceive and respond to robots when they fail.

This dissertation investigates people’s perceptions and responses to robot failures in human-robot interaction contexts along three different dimensions: functional, social, and moral. To examine failures within these dimensions, we conduct a range of controlled, human-subjects experiments that begin with simple, task-based failures and that progressively move towards failures that have more social and moral implications.

In our first human-subjects experiment, we investigate how people provide feedback to a variety of task-based robot failures in a card-selection task. We find significant variation in how people evaluate the robot’s performance and show that this variance can influence how effectively the robot performs the task when trained with different feedback strategies.

Building on these findings, we conduct a pair of studies with children aged four to seven years and with adults to investigate how user characteristics influence variability in people’s responses to robot failures. We find that users’ age, as well as a robot’s social responses following its own failures (i.e., providing incorrect advice), affects how people trust the failing robot.

Finally, we extend our investigation to a much more severe robot failure with inherent moral implications, in which a robot intentionally commits physical harm (i.e., pushing down a human). In this final experiment, we showcase the importance of prior expectations when people evaluate harmful behavior, as the ways in which the robot’s capabilities were framed before witnessing the failure significantly influenced people’s moral judgments of the robot after the failure.

Overall, this dissertation contributes to our knowledge of how people respond to robot failures and can help inform the design of robots that interact with people in everyday settings.

Advisor: Brian Scassellati

Other committee members:

Marynel Vázquez

Tesca Fitzgerald

Dražen Brščić (Kyoto University)