Interactive Policy Shaping for Human-Robot Collaboration with Transparent Matrix Overlays

Abstract

One important aspect of effective human-robot collaborations is the ability for robots to adapt quickly to the needs of humans. While techniques like deep reinforcement learning have demonstrated success as sophisticated tools for learning robot policies, the fluency of human-robot collaborations is often limited by these policies' inability to integrate changes to a user's preferences for the task. To address these shortcomings, we propose a novel approach that can modify learned policies at execution time via symbolic if-this-then-that rules corresponding to a modular and superimposable set of low-level constraints on the robot's policy. These rules, which we call Transparent Matrix Overlays, function not only as succinct and explainable descriptions of the robot's current strategy but also as an interface by which a human collaborator can easily alter a robot's policy via verbal commands. We demonstrate the efficacy of this approach on a series of proof-of-concept cooking tasks performed in simulation and on a physical robot.
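To make the idea concrete, the sketch below shows one way such if-this-then-that rules could be superimposed on a learned policy's action distribution at execution time. This is a minimal illustration under assumed names and data structures (OverlayRule, shaped_policy, the example actions and state keys are all hypothetical), not the paper's implementation.

```python
# Conceptual sketch: superimposing if-this-then-that overlay rules on a
# learned policy's action distribution. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class OverlayRule:
    """An if-this-then-that rule: when `condition(state)` holds, multiply
    each listed action's probability by its weight (0 forbids the action)."""
    description: str
    condition: Callable[[Dict], bool]
    action_weights: Dict[str, float]


def shaped_policy(base_probs: Dict[str, float],
                  state: Dict,
                  overlays: List[OverlayRule]) -> Dict[str, float]:
    """Apply every active overlay to the base policy, then renormalize."""
    probs = dict(base_probs)
    for rule in overlays:
        if rule.condition(state):
            for action, weight in rule.action_weights.items():
                if action in probs:
                    probs[action] *= weight
    total = sum(probs.values())
    if total == 0.0:  # all actions forbidden; fall back to the base policy
        return dict(base_probs)
    return {a: p / total for a, p in probs.items()}


# A verbal command such as "don't use the knife" might be parsed into an
# overlay that suppresses knife-related actions while the task is active.
no_knife = OverlayRule(
    description="if preparing food, then avoid knife actions",
    condition=lambda state: state.get("task") == "prepare_food",
    action_weights={"pick_knife": 0.0, "cut_with_knife": 0.0},
)

base = {"pick_knife": 0.4, "cut_with_knife": 0.3, "tear_by_hand": 0.3}
print(shaped_policy(base, {"task": "prepare_food"}, [no_knife]))
# -> {'pick_knife': 0.0, 'cut_with_knife': 0.0, 'tear_by_hand': 1.0}
```

Because each rule is a self-contained constraint, overlays of this kind can be stacked, described back to the user in plain language, and removed when the user's preference changes.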

Publication
Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 2023
Kate Candon
PhD Student, Computer Science

My research interests include human-computer interaction, artificial intelligence, machine learning, human-robot interaction, and socially assistive robots.