Hello!
I’m a third-year Ph.D. student in Computer Science at Yale University. I’m working with Professor Marynel Vázquez in the Interactive Machines Group and Professor Brian Scassellati in the Social Robotics Lab.
I am interested in understanding how we can create interactive agents that help people more effectively. My current research explores techniques for leveraging the multimodal implicit feedback that humans naturally provide during interactions. In the future, I want to build on these techniques to better understand how and when agents should ask humans for explicit feedback during interactions.
I am excited about creating situated agents that can reason about and adapt to the preferences of the humans they interact with, leading to more positive experiences for users. In particular, I want to build robots that empower seniors to remain independent by changing the way robots learn how to help.
Ph.D. in Computer Science, 2020-
Yale University
BSc in Mathematics with Computer Science, 2012-2016
Massachusetts Institute of Technology
Recent research in robot learning suggests that implicit human feedback is a low-cost way to improve robot behavior without the typical teaching burden on users. Because implicit feedback can be difficult to interpret, however, we studied different methods for collecting fine-grained labels from users about robot performance across multiple dimensions, which can then be used to map implicit human feedback to performance values. In particular, we focused on understanding the effects of annotation order and frequency on human perceptions of the self-annotation process and on the usefulness of the labels for creating data-driven models that reason about implicit feedback. Our results demonstrate that different annotation methods can influence perceived memory burden, annotation difficulty, and overall annotation time. Based on our findings, we conclude with recommendations for creating future implicit feedback datasets in Human-Robot Interaction.