Elena Corina Grigore successfully defends thesis

August 28, 2018

Lab member Elena Corina Grigore defended her dissertation “Learning Supportive Behaviors for Adaptive Robots in Human-Robot Collaboration” today. Corina moves on to a research position at the self-driving car company nuTonomy.


Robotic systems deployed in industry today work in isolation from humans, performing precise, repetitive tasks based on well-defined plans and known task structures. The field of human-robot collaboration (HRC) focuses on endowing robots with capabilities to work alongside humans and help them achieve various tasks in different settings. Whether in manufacturing settings such as factories, in public spaces such as restaurants, or in our homes, robots should adapt to the task at hand and learn from the user how best to assist. Although robotics has seen recent successes, such as quickly learning certain manipulation tasks, robots are still far from autonomously carrying out high-dexterity tasks. Even beyond manipulation issues, the level of task knowledge necessary for a robot to autonomously build a piece of furniture, for example, is extremely high and not straightforward to acquire.

We are interested in leveraging machine learning techniques to create adaptive robots capable of learning useful behaviors in dynamic and collaborative environments. This requires the robot to learn from both the people it interacts with and its environment. Learning models of the task, understanding a person’s actions throughout the progression of the task, and learning and predicting user-based preferences for tailored assistance form the basis of adaptive robots in HRC. Applying such techniques to complex state and action spaces—raw sensor data at the low level, and abstract states at the high level—involves considerable challenges, especially in human-in-the-loop scenarios. To create a useful system, we turn our attention towards semi-autonomous robots that aim to learn how to provide supportive behaviors to a person during a task, rather than complete the task autonomously themselves. Such assistance is meant to help the person complete the task more efficiently, while allowing the human and the robot to each perform the actions for which they are best suited.

In this thesis, we present novel models and paradigms for HRC that allow a robot to learn how to provide assistance to a human worker throughout the execution of a task. We first focus on learning about action primitives (building blocks of motion) that a human worker performs during a physical task. We present a framework that discovers whether a coarser- or a finer-grained level is better suited for a primitive given the task at hand, coining this concept the granularity level of a primitive.

We then present models for high-level learning, where the robot learns what supportive behaviors to offer throughout the execution of a task in which both the human and the robot are involved. To do so, we introduce personalized models of user supportive behavior preferences, which we build atop a single, cross-user model of the task. The personalized models leverage this task representation and require as few as five user-labeled demonstrations of the task to train.

We further compare this model-based technique with a model-free variant inspired by multi-agent reinforcement learning. We present two novel multi-agent-based HRC paradigms, in which we consider both the robot and the human as agents operating in the environment; the human is indeed an agent in the system, but one whose actions we do not control. We introduce the Multi-Agent Based Reinforcement Learning (MAB-RL) and the Hierarchical Multi-Agent Based Reinforcement Learning (HMAB-RL) paradigms, and present a total of four algorithms based on these paradigms. We show that we can learn a supportive behavior preference set for the task on par with human-level performance from 40 episodes, with varying episode lengths dictated by whether we employ macro-actions.
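To make the multi-agent framing concrete, the sketch below shows a generic tabular Q-learning loop in which the robot selects a supportive action while the human's action is merely observed as part of the state, never controlled. This is an illustrative toy, not the MAB-RL or HMAB-RL algorithms from the thesis: the task, action sets, reward, and the NEEDED preference mapping are all invented for the example.

```python
import random
from collections import defaultdict

# Toy sketch (not the thesis algorithms): the robot learns which
# supportive behavior to offer given the task step and the human's
# observed action. All names and rewards here are hypothetical.
ROBOT_ACTIONS = ["wait", "fetch_part", "hold_part"]
HUMAN_ACTIONS = ["assemble", "inspect"]
# Hypothetical preference: which support each human action calls for.
NEEDED = {"assemble": "hold_part", "inspect": "fetch_part"}

def train(episodes=200, steps=10, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[((task_step, human_action), robot_action)]
    for _ in range(episodes):
        step, h = 0, rng.choice(HUMAN_ACTIONS)
        for _ in range(steps):
            obs = (step, h)
            if rng.random() < epsilon:            # explore
                a = rng.choice(ROBOT_ACTIONS)
            else:                                  # exploit current estimate
                a = max(ROBOT_ACTIONS, key=lambda x: Q[(obs, x)])
            r = 1.0 if a == NEEDED[h] else 0.0     # support matched the need
            # The human's next action is sampled, not chosen by the learner.
            step2, h2 = (step + 1) % 4, rng.choice(HUMAN_ACTIONS)
            best_next = max(Q[((step2, h2), x)] for x in ROBOT_ACTIONS)
            Q[(obs, a)] += alpha * (r + gamma * best_next - Q[(obs, a)])
            step, h = step2, h2
    return Q

Q = train()
# Greedy policy: for each observation, the learned supportive action.
policy = {(s, h): max(ROBOT_ACTIONS, key=lambda x: Q[((s, h), x)])
          for s in range(4) for h in HUMAN_ACTIONS}
```

After training, the greedy policy recovers the hypothetical preference mapping for every task step, illustrating how a supportive behavior preference set can be learned model-free from interaction alone.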

Finally, we present work on the social aspect of interactions within HRC. Beyond understanding how to act usefully to help accomplish the task as a team, social collaboration is another important facet of HRC. An adaptive robot needs to facilitate task progression by keeping the human engaged and motivated throughout. Thus, we present a series of studies that explore different tools for a robot to maintain high engagement during collaborative tasks and engender positive user perceptions and reactions.

As we move towards an era where we envision robots becoming widely used in a variety of settings, developing robust techniques for such robots to learn how to usefully interact and collaborate with people becomes critical. The algorithms and paradigms presented in this thesis contribute to the aim of developing more intelligent, useful, and adaptive robots, capable of offering assistance tailored to humans’ needs and preferences.

Advisor: Brian Scassellati

Other committee members:

Dana Angluin

Marynel Vázquez

Drew McDermott

Maya Cakmak (University of Washington)