Debasmita Ghose successfully defends thesis

February 20, 2026

Lab member Debasmita Ghose defended her dissertation “Robots Adapting to People Without Becoming a Nuisance” today. 

Abstract:

Human-robot collaboration requires robots to maintain shared understanding with people. The robot’s internal belief about a human partner’s goal must remain aligned with what the human is actually trying to accomplish, despite ambiguity, limited explicit communication, and changing objectives. In human-human collaboration, people rarely provide step-by-step instructions to their collaborators to establish a shared understanding of the task; instead, intent is conveyed implicitly through behavior and context, as well as brief, high-level language. Inspired by this idea, this dissertation develops methods that enable a robot to infer and adapt to human goals in human-robot collaboration using signals that arise naturally during interaction, while minimizing the need for frequent supervision or repeated explicit instruction.

The technical contributions are organized by increasing levels of human intervention and increasingly challenging sources of uncertainty. First, for adaptation from passive observation, we present an online contrastive representation-learning method that tailors visual object representations to the distinctions a human implicitly defines through their selections, enabling sample-efficient learning of task-relevant concepts in a shared sorting task. Second, for settings where observation alone is insufficient due to goal ambiguity, we introduce active approaches that enable a robot to take actions that influence the human to reveal their goals through behavioral cues. We define Critical Decision Points as states where policies for different possible goals diverge maximally, and use receding-horizon planning to actively guide interaction toward such informative states while maintaining task progress. Third, to handle non-stationary goals, we develop a goal-change detection method and a strategy by which the robot takes actions that support the task while actively influencing the human to reveal their updated goals after their objective shifts mid-execution. Finally, we introduce BALI (Bidirectional Action–Language Inference), which treats a person’s language and observed behavior as coupled signals about their intended goal. BALI uses high-level directives to narrow the set of plausible goal interpretations in the current context, grounds and updates those interpretations against the sequence of actions and task history as the task unfolds, and asks targeted clarification questions only when the remaining uncertainty is too high to select a supportive action.

This dissertation contributes a set of mechanisms for maintaining shared understanding during human-robot collaboration through observation, active influence, and high-level natural language directives, enabling robots to provide context-appropriate assistance without requiring humans to manage them.
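To give a flavor of the Critical Decision Point idea mentioned in the abstract, here is a minimal illustrative sketch, not the dissertation's implementation: all names, the toy data, and the choice of total-variation distance as the divergence measure are assumptions made for illustration. It scores each state by how much the action distributions of goal-conditioned policies disagree there, and picks the state of maximal disagreement.

```python
# Illustrative sketch (not from the dissertation): score states as
# Critical Decision Points, i.e. states where policies conditioned on
# different candidate goals disagree most about which action to take.
from itertools import combinations


def tv_distance(p, q):
    """Total-variation distance between two action distributions."""
    return 0.5 * sum(abs(pa - qa) for pa, qa in zip(p, q))


def cdp_score(per_goal_policies):
    """Max pairwise divergence among the per-goal action distributions
    at a single state."""
    return max(tv_distance(p, q) for p, q in combinations(per_goal_policies, 2))


def critical_decision_point(policies):
    """policies: {state: [action distribution for each candidate goal]}.
    Returns the state where the goal-conditioned policies diverge most,
    i.e. the most informative state to steer the interaction toward."""
    return max(policies, key=lambda s: cdp_score(policies[s]))


# Toy example: two candidate goals, three states, two actions each.
policies = {
    "s0": [[0.5, 0.5], [0.5, 0.5]],  # goals agree -> uninformative
    "s1": [[0.9, 0.1], [0.2, 0.8]],  # goals disagree strongly
    "s2": [[0.6, 0.4], [0.5, 0.5]],  # mild disagreement
}
```

Here `critical_decision_point(policies)` selects `"s1"`, since observing the human's action there best disambiguates which goal they are pursuing; in the dissertation this selection is combined with receding-horizon planning so the robot reaches such states without sacrificing task progress.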

Advisor: Brian Scassellati

Other committee members:

Marynel Vázquez

Tesca Fitzgerald

Tom Silver (Princeton University)