A reward driven connectionist model of cognitive development
Peters, L.; Davey, N.; Smith, P.; Messer, D. J.
Citation: Peters, L., Davey, N., Smith, P., & Messer, D. J. (1999). A reward driven connectionist model of cognitive development. In S. Bagnara (Ed.), Proceedings of the European Conference on Cognitive Science (pp. 491-496).
Children learn many skills under self-supervision, where exemplars of target responses are not available. Connectionist models that rely on supervised learning are therefore not appropriate for modelling all forms of cognitive development. One task in this class, for which considerable data have been gathered in relation to Karmiloff-Smith's Model of Representational Redescription (RR) (Karmiloff-Smith, 1973, 1992), is one in which children learn through trial and error to balance objects. Data from these studies have been used to derive a training set, and a new approach to modelling cognitive development has been taken in which learning through a dual back-propagation network (Munro, 1987) is reward-driven. Results show that the model can successfully learn and simulate aspects of children's behaviour without explicit training information being defined. This approach, however, is incapable of modelling all levels of the RR Model.