|dc.description.abstract||In animals, humans and robots, imitative behaviours are very useful for acting, learning and communicating. Implementing imitation in autonomous robots is still a challenge, and one of the main problems is making them choose when and whom to imitate.
We start from minimalist architectures, following a bottom-up approach, and progressively extend them. Drawing on imitation processes observed in nature, many architectures have been developed and implemented to improve the quality of imitation in robots (essentially, the accuracy with which actions are reproduced). Nevertheless, autonomous robots need architectures in which imitative behaviour is well integrated with other behaviours such as seeking stability, exploration and exploitation. Moreover, whether or not to express imitative behaviours should also depend on the history of interactions (positive or negative) between robots and their interactive partners.
In this thesis, we show with real robots how low-level imitation can emerge from other essential behaviours and how affect can modulate the way these behaviours are exhibited. Beyond proposing a novel vision of imitation, we show how agents can autonomously switch between these behaviours depending on the affective bonds they have developed. Moreover, with simple architectures we are able to reproduce behaviours observed in nature, and we present a new way to tackle the problem of learning at different time scales in continuous time and space through discretization.||en