Generation of tactile maps for artificial skin
Abstract
Prior research has shown that representations of retinal surfaces can be learned from the intrinsic structure of visual sensory data in neural simulations, in robots, and by animals. Representations of cochlear (frequency) surfaces can likewise be learned from auditory data in neural simulations. Advances in hardware technology have enabled the development of artificial skin for robots, realising a new sensory modality whose sensorimotor characteristics differ in important respects from those of vision and audition. This provides an opportunity to further investigate ordered sensory map formation using computational tools. We show that it is possible to learn representations of non-trivial tactile surfaces, which require topologically and geometrically involved three-dimensional embeddings. Our method automatically constructs a somatotopic map corresponding to the configuration of tactile sensors on a rigid body, using only intrinsic properties of the tactile data. The additional complexities of processing the tactile modality motivated the development of a novel multi-dimensional scaling algorithm, ANISOMAP, which extends previous methods and outperforms them, producing high-quality reconstructions of tactile surfaces in both simulation and hardware tests. In addition, the reconstruction proves robust to unanticipated hardware failure.
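To make the map-formation setting concrete, the following is a minimal, hypothetical sketch of the classical Isomap-style multi-dimensional scaling pipeline that algorithms such as ANISOMAP build on; it is not the ANISOMAP algorithm itself, whose extensions are described in the body of the paper. The sketch assumes a matrix of sensor-to-sensor dissimilarities (for instance, derived from taxel co-activation statistics), builds a neighbourhood graph, computes geodesic distances, and embeds the sensors in three dimensions. The function name `isomap_embedding` and the parameters `k` and `dim` are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path


def isomap_embedding(D, k=6, dim=3):
    """Isomap-style embedding of tactile sensors from a dissimilarity matrix.

    D   : (n, n) symmetric dissimilarities between sensors, zero diagonal
          (e.g. derived from co-activation statistics of the taxels).
    k   : number of nearest neighbours in the graph; must be large enough
          that the graph is connected.
    dim : dimensionality of the embedding (3 for surfaces in 3-D space).
    """
    n = D.shape[0]

    # k-nearest-neighbour graph: keep the k smallest dissimilarities per row.
    graph = np.zeros_like(D)
    for i in range(n):
        nearest = np.argsort(D[i])[1:k + 1]   # index 0 is the sensor itself
        graph[i, nearest] = D[i, nearest]
    graph = np.maximum(graph, graph.T)        # symmetrise

    # Geodesic (shortest-path) distances along the neighbourhood graph.
    G = shortest_path(csr_matrix(graph), method="D", directed=False)

    # Classical MDS on the geodesic distances.
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (G ** 2) @ J               # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:dim]     # largest eigenvalues first
    return eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0.0, None))
```

In this sketch the recovered coordinates play the role of the somatotopic map: taxels that are close on the skin surface end up close in the embedding, using only the intrinsic structure of the tactile data.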