Supportive Behaviors for Human-Robot Teaming: Brad Hayes

To construct flexible systems that can adapt to the changing needs of daily operations, researchers have proposed building robots that learn basic skills that can be reused across multiple tasks and under varying conditions. In the machine learning community, hierarchical learning (HL) is designed to support skill abstraction, lending acquired knowledge a degree of portability that improves performance on future tasks. These systems attempt to abstract the critical aspects of a task from multiple demonstrations so that the learned skill can be applied under varying environmental conditions and across different human co-workers.
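To illustrate the reuse that skill abstraction enables, the following is a minimal sketch (a hypothetical example, not the author's system): primitive skills are composed into a higher-level `fetch` skill that can be invoked for any item and any pair of locations, rather than being re-learned for each task instance.

```python
# Minimal illustration of hierarchical skill reuse on a grid world.
# All names (move_to, grasp, place, fetch) are illustrative, not from the source.

def move_to(state, target):
    """Primitive skill: step the agent one cell toward a target position."""
    x, y = state["pos"]
    tx, ty = target
    step = lambda a, b: a + (1 if b > a else -1 if b < a else 0)
    state["pos"] = (step(x, tx), step(y, ty))
    return state

def grasp(state, item):
    """Primitive skill: pick up an item at the current position."""
    state["holding"] = item
    return state

def place(state):
    """Primitive skill: put down the held item at the current position."""
    item, state["holding"] = state["holding"], None
    state.setdefault("placed", []).append((item, state["pos"]))
    return state

def fetch(state, item, pickup, dropoff):
    """Composite skill: reuses the same primitives for any item/locations."""
    while state["pos"] != pickup:
        state = move_to(state, pickup)
    state = grasp(state, item)
    while state["pos"] != dropoff:
        state = move_to(state, dropoff)
    return place(state)

state = {"pos": (0, 0), "holding": None}
state = fetch(state, "bolt", pickup=(2, 1), dropoff=(0, 3))
```

Because `fetch` is parameterized over the item and the two locations, the same learned structure transfers to new environmental conditions without modification, which is the portability property described above.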

While traditional hierarchical learning has been successfully applied to a number of real-world robotic tasks in which the robot acts autonomously and does not interact with humans, the method suffers from a critical weakness in collaborative scenarios: division of responsibilities, role identification, and joint actions are left unaddressed, leaving many state-of-the-art hierarchical approaches inapplicable in collaborative domains. This work aims to address these weaknesses in the context of enabling robotic agents to support a human lead worker. Where traditional hierarchical learning and robot planning assume an isolated, autonomous robot that learns and performs on its own (usually in a static environment), this work develops collaborative robots that learn portable skills from human guidance and can engage in tasks with human co-workers to improve their efficiency, safety, and effectiveness.

The primary contributions of this effort focus on collaborator intention modeling, planning under uncertainty, hierarchical task network construction, learning from demonstration, and multi-agent planning.
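As a brief illustration of one of these contributions, hierarchical task network construction, the following sketch (an assumed toy example, not the author's method) shows the core HTN idea: abstract tasks are recursively decomposed via methods into subtasks until only primitive actions remain, yielding an ordered plan.

```python
# Toy HTN decomposition. Task names and methods are illustrative only.

# Each abstract task maps to a list of candidate methods; each method is an
# ordered list of subtasks.
METHODS = {
    "assemble_part": [["fetch_tool", "attach", "return_tool"]],
    "fetch_tool":    [["locate", "pick_up"]],
}
PRIMITIVES = {"locate", "pick_up", "attach", "return_tool"}

def decompose(task):
    """Recursively expand a task into an ordered list of primitive actions."""
    if task in PRIMITIVES:
        return [task]
    plan = []
    for subtask in METHODS[task][0]:  # take the first applicable method
        plan.extend(decompose(subtask))
    return plan

plan = decompose("assemble_part")
```

In a collaborative setting, the appeal of this representation is that subtasks in the network can be assigned to either the human or the robot, which is precisely where the division-of-responsibility questions raised above arise.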