It is important for intelligent robots to be able to interpret human signals that provide context about how an interaction is going. We posit that including multiple facets of context, both situational and user-specific, in user models will improve a robot's understanding of the context of its interactions. This position is supported by results from an exploratory study in which humans interacted with an agent in a video game. As part of this work, we built contextual perception models that reasoned about nonverbal human reactions to prosocial assistance from the autonomous agent. Interestingly, our results showed the importance of contextualizing model predictions based on multiple factors. Future work will further examine the importance of including the context of context, or context^2, in perception models to make intelligent predictions about nonverbal reactions through richer utilization of our existing data. Additionally, we plan to extend our study to situated human-robot interactions.