Emotional Understanding through Interactive Role-Play using Robots: Iolanda Leite
The purpose of this project is to develop a deeper understanding of how robots can engage small groups of children through interactive storytelling, with the ultimate goal of helping children improve their social and emotional skills. Children interact with a pair of robots that play out stories based on the Feeling Words curriculum, the second phase of RULER, an emotional skills paradigm that emphasizes the ability to describe and understand the full range of human emotions. By practicing hypothetical decision-making in interactive scenarios built around these feeling words, students can see the effects of their decisions played out before them without first having to make those decisions in the real world.
We started by investigating whether the advantages of one-to-one tutoring also apply to one-to-many instruction, and what costs might be incurred by this shift. There are numerous reasons why having multiple students, rather than one, interact with a robot at a time is favorable, including constraints on cost, time, and space. I conducted a repeated-interaction study in which a single child or a group of three children interacted with these robots. Results show that although the individual interactions improved participants' story recall compared to the group condition, the emotional interpretation of the story depended more on the difficulty level than on the study condition. These findings suggest that, regardless of the type of interaction, interactive narratives with multiple robots are a promising approach to foster children's social and emotional skills (Leite et al., 2015a).
The number of people around a robot affects not only how the robot should behave, but also how it should perceive the environment. For this reason, most data-driven perceptual systems for social robots rely on data collected in the same context where future interactions are likely to occur. However, even within the same context, conditions may change (e.g., the number of users around the robot). So far, little is known about how data-driven models perform when tested on a group size different from the one they were trained on, yet group size is critical for some perception problems: the way a robot should interpret a glance to the side differs depending on whether the user is alone or in a group. I provided a first investigation into the effects of changing group size on data-driven perception models by analyzing how a machine learning model trained on data from participants interacting alone with robots performs on test data from group interactions, and vice versa. These experiments were carried out in the context of predicting disengagement behaviors in children interacting with social robots. My results showed that a model trained on group data generalizes better to individual participants than the other way around. A mixed model combining data from individual and group interactions is a good compromise, but it does not reach the performance of models trained for a specific type of interaction (Leite et al., 2015b).
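To make the cross-condition evaluation concrete, the sketch below shows one way such an experiment could be set up in Python with scikit-learn. The feature arrays, file names, classifier choice, and random train/test split are illustrative assumptions rather than the actual setup of the study; in practice the split would be made per child, so that no participant contributes data to both training and test sets.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def evaluate(X_train, y_train, X_test, y_test):
        # Train a disengagement classifier on one condition, score it on another.
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)
        return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

    # Hypothetical feature/label arrays from individual and group sessions.
    X_ind, y_ind = np.load("individual_X.npy"), np.load("individual_y.npy")
    X_grp, y_grp = np.load("group_X.npy"), np.load("group_y.npy")

    # Hold out test data in each condition (a per-child split would be more faithful).
    Xi_tr, Xi_te, yi_tr, yi_te = train_test_split(X_ind, y_ind, test_size=0.3, random_state=0)
    Xg_tr, Xg_te, yg_tr, yg_te = train_test_split(X_grp, y_grp, test_size=0.3, random_state=0)

    # Cross-condition generalization: train on one group size, test on the other.
    print("individual -> group:", evaluate(Xi_tr, yi_tr, Xg_te, yg_te))
    print("group -> individual:", evaluate(Xg_tr, yg_tr, Xi_te, yi_te))

    # Mixed model: pool training data from both conditions, test on each held-out set.
    X_mix, y_mix = np.vstack([Xi_tr, Xg_tr]), np.concatenate([yi_tr, yg_tr])
    print("mixed -> group:", evaluate(X_mix, y_mix, Xg_te, yg_te))
    print("mixed -> individual:", evaluate(X_mix, y_mix, Xi_te, yi_te))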