Designing POMDP models of socially situated tasks
This paper describes a modelling approach that represents human-robot social interactions as partially observable Markov decision processes (POMDPs). In these POMDPs, the human's intention is modelled as an unobservable part of the state space, while the robot's own intentions are expressed through the reward function. The state transition structure is built from action rules that capture the effects of the robot's actions, relate the human's behavior to their intentions, and describe the changing state of the environment. The transition probabilities are then modified using data from human-human interactions. Policies obtained by solving these models are used to control a robot in a socially situated task with a human partner. The resulting interactions are compared to those of human pairs performing the same task, demonstrating that the approach yields policies that exhibit natural and socially appropriate behavior.
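To make the modelling idea concrete, the following is a minimal illustrative sketch, not the paper's actual model: a toy POMDP in which the human's intention is a hidden state, the human's observed behavior is a noisy cue to that intention, and the robot's own goals are encoded in the rewards. All state, action, and observation names here are hypothetical, and the one-step greedy action selection stands in for a full POMDP solver.

```python
import numpy as np

# Hidden human intentions, robot actions, and observable human behaviors
# (all names are illustrative assumptions, not from the paper).
STATES = ["wants_help", "wants_no_help"]
ACTIONS = ["offer_help", "wait"]
OBSERVATIONS = ["approaches", "stays_away"]

# Transition model T[a][s, s']: intentions are assumed mostly persistent,
# in the spirit of action rules describing the effects of robot actions.
T = {
    "offer_help": np.array([[0.9, 0.1],
                            [0.2, 0.8]]),
    "wait":       np.array([[0.95, 0.05],
                            [0.05, 0.95]]),
}

# Observation model O[a][s', o]: behavior is a noisy cue to intention.
O = {
    "offer_help": np.array([[0.8, 0.2],
                            [0.3, 0.7]]),
    "wait":       np.array([[0.7, 0.3],
                            [0.2, 0.8]]),
}

# Reward R[a][s]: the robot's own intentions expressed through rewards --
# helping is rewarded only when the human actually wants help.
R = {
    "offer_help": np.array([5.0, -2.0]),
    "wait":       np.array([-1.0, 0.5]),
}

def belief_update(b, action, obs):
    """Bayes filter over the hidden intention:
    b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    o_idx = OBSERVATIONS.index(obs)
    predicted = T[action].T @ b               # predict step
    unnorm = O[action][:, o_idx] * predicted  # correct step with observation
    return unnorm / unnorm.sum()

def greedy_action(b):
    """One-step lookahead: choose the action with highest expected
    immediate reward under the current belief (a stand-in for solving
    the POMDP for a full policy)."""
    return max(ACTIONS, key=lambda a: float(R[a] @ b))

# Example: start from a uniform prior; the human approaches while the
# robot waits, shifting belief toward "wants_help".
b = np.array([0.5, 0.5])
b = belief_update(b, "wait", "approaches")
print(b, greedy_action(b))
```

Solving the full model would instead optimise long-term expected reward over belief space, which is what yields the socially appropriate policies the paper evaluates against human-human interaction data.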