Evaluation of Robot Imitation Attempts: Comparison of the System’s and the Human’s Perspectives
Imitation is a powerful learning tool when humans and robots interact in a social context. A series of experimental runs and a small pilot user study were conducted to evaluate the performance of a system designed for robot imitation. Assessments of the similarity of imitative behaviours were carried out both by machines and by humans: the system was evaluated quantitatively (from a machine-centric perspective) and qualitatively (from a human perspective) in order to study how these two views can be reconciled. The experimental results presented here illustrate how the number of exceptions can be used as a performance measure by a robotic or software imitator of an object manipulation behaviour. (In this context, exceptions are events in which the optimal displacement and/or rotation that minimizes the dissimilarity metrics used to generate a corresponding imitative behaviour cannot be directly achieved in the particular context.) Similarity judgments on imitative behaviours collected in the user study were used to examine how the quantitative measure of the number of exceptions (from a robot’s perspective) corresponds to the qualitative evaluation of similarity (from a human’s perspective) for the imitative behaviours generated by the Jabberwocky system. The results suggest good alignment between this quantitative system-centered assessment and the more qualitative human-centered assessment of imitative performance.
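The exception-counting measure described above can be illustrated with a minimal sketch. All names, data structures, and the feasibility check below are hypothetical assumptions for illustration only, not the actual system’s implementation: the idea is simply that each step of an imitative behaviour has an optimal transform, and a step counts as an exception when that transform cannot be directly applied under the (assumed) constraints of the current context.

```python
from dataclasses import dataclass

@dataclass
class Step:
    displacement: float  # optimal displacement minimizing the dissimilarity metric
    rotation: float      # optimal rotation minimizing the dissimilarity metric

def is_achievable(step, max_disp=1.0, max_rot=90.0):
    # Hypothetical context constraint: the optimal transform can be
    # applied directly only within these assumed workspace limits.
    return abs(step.displacement) <= max_disp and abs(step.rotation) <= max_rot

def count_exceptions(steps):
    # An exception is a step whose optimal displacement/rotation
    # cannot be directly achieved in the particular context.
    return sum(1 for s in steps if not is_achievable(s))

steps = [Step(0.5, 30.0), Step(1.5, 10.0), Step(0.2, 120.0)]
print(count_exceptions(steps))  # prints 2
```

Under this sketch, a lower exception count indicates that the imitator could more often realize its optimal matching actions, which is the system-centric proxy for imitative similarity compared against human judgments in the study.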