An Approach for Programming Robots by Demonstration: Generalization Across Different Initial Configurations of Manipulated Objects
Imitation is a powerful learning tool that a robotic agent can use to socially learn new skills and tasks. One of the fundamental problems in imitation is the correspondence problem: how to map between the actions, states and effects of the model and imitator agents when their embodiments are dissimilar. In our approach, this matching depends on the choice of metrics and on the granularity at which the agents' behavior is compared. Focusing on object manipulation and arrangement tasks demonstrated by a human, this paper presents JABBERWOCKY, a system that uses different metrics and granularities to produce action command sequences that, when executed by an imitating agent, achieve corresponding effects (manipulandum absolute/relative position, displacement, rotation and orientation). Starting from a single human demonstration of an object manipulation task and using a combination of effect metrics, the system is shown to produce correspondence solutions that are then performed by an imitating agent, generalizing with respect to different initial object positions and orientations in the imitator's workspace. Depending on the particular metrics and granularity used, the corresponding effects differ (as shown in examples), so the appropriate choice of metrics and granularity depends on the task and context.
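To make the role of effect metrics concrete, the following is a minimal sketch (not the paper's actual implementation) of how absolute-position, displacement and orientation effect metrics might be computed for a planar manipulated object. The `ObjectState` class and all function names are illustrative assumptions; the point is that the same imitation can score differently under different metrics.

```python
import math
from dataclasses import dataclass


@dataclass
class ObjectState:
    """Planar pose of a manipulated object (illustrative assumption)."""
    x: float
    y: float
    theta: float  # orientation in radians


def absolute_position_metric(demo: ObjectState, imit: ObjectState) -> float:
    # Dissimilarity of absolute object positions (Euclidean distance).
    return math.hypot(demo.x - imit.x, demo.y - imit.y)


def displacement_metric(demo_before: ObjectState, demo_after: ObjectState,
                        imit_before: ObjectState, imit_after: ObjectState) -> float:
    # Dissimilarity of displacement vectors: compares how far and in what
    # direction each object moved, ignoring absolute workspace coordinates.
    dx_demo = demo_after.x - demo_before.x
    dy_demo = demo_after.y - demo_before.y
    dx_imit = imit_after.x - imit_before.x
    dy_imit = imit_after.y - imit_before.y
    return math.hypot(dx_demo - dx_imit, dy_demo - dy_imit)


def orientation_metric(demo: ObjectState, imit: ObjectState) -> float:
    # Smallest angular difference between the two orientations.
    d = (demo.theta - imit.theta) % (2 * math.pi)
    return min(d, 2 * math.pi - d)


# Demonstrator moves an object from (0, 0) to (1, 0); the imitator starts
# from a different initial position (5, 5) and moves to (6, 5).
demo_before = ObjectState(0.0, 0.0, 0.0)
demo_after = ObjectState(1.0, 0.0, 0.0)
imit_before = ObjectState(5.0, 5.0, 0.0)
imit_after = ObjectState(6.0, 5.0, 0.0)

# Under the displacement metric the imitation matches perfectly (0.0),
# while under the absolute-position metric it does not.
print(displacement_metric(demo_before, demo_after, imit_before, imit_after))
print(absolute_position_metric(demo_after, imit_after))
```

The example illustrates the abstract's closing point: a metric over displacements generalizes across different initial object positions, whereas a metric over absolute positions does not, so which corresponding effect is "right" depends on the task and context.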