The Importance of Self-excitation in Spiking Neural Networks Evolved to Recognize Temporal Patterns
Biological and artificial spiking neural networks process information by changing their states in response to the temporal patterns of their input and of the activity of the network itself. Here we analyse very small networks, evolved to recognize three signals arriving in a specific order (ABC) within a continuous temporal stream of signals (..CABCACB..). This task can be accomplished by networks with just four neurons (three interneurons and one output). We show that evolving the networks in the presence of noise and of variation in the intervals of silence between signals biases the solutions towards networks that can maintain their states (a form of memory), while the majority of networks evolved without variable intervals between signals cannot do so. We demonstrate that in most networks, the evolutionary process leads to superfluous connections that can be pruned without affecting the ability of the networks to perform the task; moreover, if the unpruned network can maintain memory, so can the pruned network. We then analyse how these small networks perform their tasks, using the paradigm of finite state transducers. This analysis shows that self-excitatory loops (autapses) in these networks are crucial both for recognition of the pattern and for maintenance of memory.
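The finite-state-transducer view of the task can be made concrete with a minimal sketch (this is an illustrative construction, not the transducers extracted from the evolved networks in the paper): a machine that reads the stream one signal at a time, tracks how much of the prefix of ABC it has seen, and emits 1 exactly when a contiguous occurrence of ABC completes.

```python
def abc_transducer(stream):
    """Yield one output per input signal: 1 when a contiguous
    A, B, C just completed, 0 otherwise.

    States: 0 = start, 1 = saw A, 2 = saw A then B."""
    state = 0
    for signal in stream:
        if state == 2 and signal == "C":
            state = 0
            yield 1        # the pattern ABC just completed
            continue
        if state == 0:
            state = 1 if signal == "A" else 0
        elif state == 1:
            state = 2 if signal == "B" else (1 if signal == "A" else 0)
        else:  # state == 2, signal is not C
            state = 1 if signal == "A" else 0
        yield 0


outputs = list(abc_transducer("CABCACB"))
# → [0, 0, 0, 1, 0, 0, 0]: the single 1 marks the end of the ABC occurrence
```

Note that a signal that breaks the pattern can still begin a new one (an A in any state restarts the match), which is why the non-matching branches test for "A" rather than simply resetting to the start state.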
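The role of a self-excitatory loop in maintaining state can likewise be sketched with a toy discrete-time neuron (all parameters here — the leak factor, threshold, and weights — are illustrative assumptions, not values from the evolved networks): with the autapse, a single transient input pulse triggers sustained firing; without it, activity dies out as soon as the input stops.

```python
def run_neuron(autapse_weight, steps=50):
    """Simulate a leaky threshold neuron that receives one input pulse
    at t=0 and feeds its own spike back through an autapse.

    Returns the list of booleans: did the neuron fire at each step?"""
    leak, threshold = 0.5, 1.0
    v, fired = 0.0, False
    spikes = []
    for t in range(steps):
        inp = 1.5 if t == 0 else 0.0                  # single transient pulse
        v = leak * v + inp + (autapse_weight if fired else 0.0)
        fired = v >= threshold
        if fired:
            v = 0.0                                    # reset after a spike
        spikes.append(fired)
    return spikes


with_autapse = run_neuron(autapse_weight=1.2)   # fires on every step
without_autapse = run_neuron(autapse_weight=0.0)  # fires only at t=0
```

The contrast (persistent firing versus a single spike) is the "form of memory" in miniature: the autapse lets the neuron's own output stand in for the vanished input.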