High performance associative memory models and weight dilution
The consequences of diluting the weights of the standard Hopfield associative memory architecture, trained using perceptron-like learning rules, are examined. A proportion of the network's weights is removed; this can be done either symmetrically or asymmetrically, and both methods are investigated. This paper reports experimental investigations into the consequences of dilution for capacity, training times, and the size of the basins of attraction. It is concluded that these networks maintain reasonable performance at fairly high dilution rates.
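The two dilution schemes mentioned above can be sketched as follows. This is an illustrative sketch, not code from the paper: the function name `dilute` and the parameter `f` (the fraction of weights removed) are assumptions. Symmetric dilution removes the reciprocal weights w_ij and w_ji together, preserving the symmetry of the weight matrix; asymmetric dilution removes each weight independently.

```python
import numpy as np

def dilute(W, f, symmetric, rng):
    """Zero out a fraction f of the off-diagonal weights of W.

    symmetric=True  -> pick weight pairs (i, j) with i < j and zero
                       both w_ij and w_ji, so symmetry is preserved.
    symmetric=False -> zero each off-diagonal weight independently.
    (Illustrative sketch; names and interface are assumptions.)
    """
    N = W.shape[0]
    W = W.copy()
    if symmetric:
        iu = np.triu_indices(N, k=1)          # all pairs with i < j
        n_pairs = len(iu[0])
        cut = rng.choice(n_pairs, size=int(f * n_pairs), replace=False)
        W[iu[0][cut], iu[1][cut]] = 0.0       # remove w_ij ...
        W[iu[1][cut], iu[0][cut]] = 0.0       # ... and its partner w_ji
    else:
        mask = rng.random((N, N)) < f         # independent removals
        np.fill_diagonal(mask, False)          # never touch the diagonal
        W[mask] = 0.0
    return W
```

After symmetric dilution a symmetric weight matrix stays symmetric, so the diluted network retains the energy-function guarantees of the standard Hopfield model; asymmetric dilution forfeits this, which is one reason the two schemes are compared separately.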