Lab member Jake Brawer defended his dissertation “Fusing Symbolic and Subsymbolic Approaches for Natural and Effective Human-Robot Collaboration” today.
Human-robot collaboration (HRC) is a field that studies how to combine the strengths of humans and robots to perform joint tasks. A crucial element of any successful collaboration is the ability of collaborators to flexibly adapt to each other's needs. In the context of HRC, this implies not only that robots can modify their behavior in the moment in response to directives issued by human users, but also that they can rapidly acquire the skills necessary for the task at hand. In turn, the knowledge acquired by a collaborative robot should be transparent and easily accessible, allowing users to adapt their behavior based on the robot's capabilities and limitations. Ideally, we could design these systems using the remarkably powerful, data-driven tools developed by the machine learning community, such as deep learning. However, the black-box, subsymbolic nature of many of these techniques means that they lack the requisite adaptability and transparency for HRC. Traditional symbolic reasoning systems, in contrast, tend to produce easily interpretable and adaptable systems; however, they often lack the scalability and flexibility offered by machine learning methods. We believe that the ideal HRC framework exists at the intersection of these symbolic and subsymbolic traditions.
In this thesis, we describe methods for leveraging symbolic and subsymbolic approaches to improve the naturalness, fluency, and flexibility of HRC. Given the importance of manufacturing as an application domain for HRC, we first focus on improving a robot's ability to use and reason about tools. In particular, we present a method by which a robot can learn the symbolically instantiated cause-and-effect relations underlying tool use via self-supervised experimentation. The robot then uses these relations to construct tool affordance models, enabling it to effectively complete goal-directed tool-use tasks. Subsequently, we demonstrate how such models can guide and improve the subsymbolic skill-learning process. We show not only that our approach can quickly learn a wide variety of skills, but also that it can readily transfer them to novel tools and manipulated objects without additional training.
We also demonstrate how symbolic abstractions can improve the social components of HRC. We show how symbolic models of context can augment the ability of language models to interpret naturalistic, situated user commands. We also demonstrate how predicate logic rules can act as a powerful interface between a user's communicated intentions and a collaborative robot's behavior. We show this first in the context of ownership-norm learning, demonstrating a method whereby logically encoded ownership norms both constrain the robot's behavior and guide the inference of ownership relations over objects in a shared workspace. Finally, we develop a generalized framework for grounding user directives to mutable, composable logical rules. These rules then act as constraints on the robot's reinforcement learning policy, enabling a user to immediately modify the robot's behavior to their own ends. The work presented in this thesis contributes to the goal of creating intelligent, responsive, and capable robot collaborators.
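To give a flavor of the rule-constrained policy idea, here is a minimal illustrative sketch (not the thesis implementation; all names, rules, and values are hypothetical) in which user-stated logical rules mask the actions a learned policy may select:

```python
# Hypothetical sketch: logical rules as constraints on action selection.
# A "rule" is a predicate over (state, action); an action is allowed
# only if every active rule holds for it.

def forbid(predicate):
    """Build a rule that forbids any action for which `predicate` holds."""
    return lambda state, action: not predicate(state, action)

# Example rules a user might state mid-task (illustrative only).
rules = [
    forbid(lambda s, a: a == "grasp" and s["object_owner"] != "robot"),
    forbid(lambda s, a: a == "handover" and not s["human_ready"]),
]

def constrained_action(state, q_values, rules):
    """Pick the highest-value action that satisfies every active rule."""
    allowed = {a: q for a, q in q_values.items()
               if all(rule(state, a) for rule in rules)}
    if not allowed:
        return "wait"  # fall back to a safe no-op
    return max(allowed, key=allowed.get)

state = {"object_owner": "human", "human_ready": False}
q_values = {"grasp": 0.9, "handover": 0.7, "wait": 0.1}
print(constrained_action(state, q_values, rules))  # -> wait
```

Because the rules are ordinary data, a user can add, remove, or compose them at runtime and immediately change the robot's behavior without retraining the underlying policy.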
Advisor: Brian Scassellati