High performance associative memory models and sign constraints
The consequences of imposing a sign constraint on the standard Hopfield-architecture associative memory model, trained using perceptron-like learning rules, are examined. Such learning rules have been shown to have a capacity of at most half that of their unconstrained versions. This paper reports experimental investigations into the consequences of constraining the sign of the network weights in terms of capacity, training times, and the size of the basins of attraction. It is concluded that the capacity is roughly half the theoretical maximum, that training times are greatly increased, and that the attractor basins are significantly reduced in size.
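The setup described above can be sketched in code. The following is a minimal illustration, not the paper's exact procedure: each neuron is trained with a simple perceptron rule on the stored patterns, and the sign constraint is enforced by a projection step that zeroes any weight violating a randomly prescribed sign matrix (the sign matrix, learning rate, and projection step are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32        # number of neurons
P = 4         # number of stored patterns (well below capacity)
EPOCHS = 200  # maximum training sweeps
ETA = 0.1     # learning rate

# Random bipolar (+/-1) patterns to store
patterns = rng.choice([-1, 1], size=(P, N))

# Fixed, randomly prescribed sign for every weight (the sign constraint)
signs = rng.choice([-1, 1], size=(N, N))

W = np.zeros((N, N))

for _ in range(EPOCHS):
    stable = True
    for xi in patterns:
        h = W @ xi                      # local fields (diagonal kept at zero)
        wrong = (xi * h) <= 0           # neurons whose output disagrees with the pattern
        if wrong.any():
            stable = False
            # Perceptron-style update for each misaligned neuron's weight row
            W[wrong] += ETA * np.outer(xi[wrong], xi)
            # Project back onto the sign constraint: zero out violating weights
            W = np.where(W * signs >= 0, W, 0.0)
    np.fill_diagonal(W, 0.0)            # no self-connections
    if stable:
        break

# Every surviving weight respects its prescribed sign
print(bool(np.all(W * signs >= 0)))
```

The projection step is the simplest way to keep the weights sign-consistent after each update; at this low loading (P/N = 0.125) the rule typically converges, consistent with the abstract's point that the constrained capacity is reduced rather than eliminated.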