Associative Memories
Hopfield Model
The Hopfield model was proposed by John Hopfield of the California Institute of Technology during the early 1980s. The publication of his work in 1982 contributed significantly to the renewed interest in artificial neural network research. He showed how an ensemble of simple processing units can exhibit fairly complex collective computational abilities and behavior.
The dynamics of the Hopfield model is different from that of the linear associator model in that it computes its output recursively in time until the system becomes stable. Below is a Hopfield model with six units, where each node is connected to every other node in the network.
[Figure: a Hopfield model with six units, each node connected to every other node in the network.]
Unlike the linear associator model, which consists of two layers of processing units (one serving as the input layer and the other as the output layer), the Hopfield model consists of a single layer of processing elements in which each unit is connected to every other unit in the network except itself. The connection weight matrix W of this type of network is square and symmetric, i.e., wij = wji for i, j = 1, 2, ..., m. Each unit has an extra external input Ii. This extra input leads to a modification in the computation of the net input to the units:
netj = Σi=1,...,m wij xi + Ij, for j = 1, 2, ..., m.
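Assuming bipolar vector states, the net-input computation can be sketched in NumPy (the names W, x, and I below are illustrative, not the text's notation):

```python
import numpy as np

# Sketch of the net input to every unit, net_j = sum_i w_ij x_i + I_j,
# computed for all j at once. W, x, and I are illustrative names.
def net_input(W, x, I):
    return W @ x + I

W = np.array([[0., 1.],
              [1., 0.]])        # symmetric, zero diagonal
x = np.array([1., -1.])         # bipolar state vector
I = np.array([0.5, 0.])         # extra external inputs
net = net_input(W, x, I)
```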
Unlike the linear associator, the units in the Hopfield model act as both input and output units. But just like the linear associator, a single associated pattern pair is stored by computing the weight matrix as follows:
W = Yk XkT, where Yk = Xk.
The weight matrices computed for the individual patterns are then summed, W = Σk=1,...,p Xk XkT, to store p different associated pattern pairs. Since the Hopfield model is an autoassociative memory model, patterns, rather than associated pattern pairs, are stored in memory.
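As a sketch of this encoding step (assuming bipolar ±1 patterns and the outer-product rule described above), storage might look like:

```python
import numpy as np

# Hedged sketch of encoding: sum the outer products Xk Xk^T of the
# stored patterns (Yk = Xk in the autoassociative case), then zero
# the diagonal so that no unit connects to itself.
def store(patterns):
    m = patterns.shape[1]
    W = np.zeros((m, m))
    for Xk in patterns:
        W += np.outer(Xk, Xk)
    np.fill_diagonal(W, 0.0)
    return W

patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)
W = store(patterns)   # square and symmetric: w_ij == w_ji
```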
After encoding, the network can be used for decoding. Decoding in the Hopfield model is achieved by a collective and recursive relaxation search for a stored pattern given an initial stimulus pattern. Given an input pattern X, decoding is accomplished by computing the net input to the units and determining the output of those units using the output function to produce the pattern X'. The pattern X' is then fed back to the units as an input pattern to produce the pattern X''. The pattern X'' is again fed back to the units to produce the pattern X'''. The process is repeated until the network stabilizes on a stored pattern where further computations do not change the output of the units.
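A minimal sketch of this decoding loop, assuming synchronous updates and a sign output function (the helper names are assumptions, not the text's notation):

```python
import numpy as np

# Hedged sketch of decoding: repeatedly feed the output pattern back
# as input until it no longer changes (the network has stabilized).
def recall(W, X, I=None, max_iters=100):
    I = np.zeros(len(X)) if I is None else I
    x = np.asarray(X, dtype=float).copy()
    for _ in range(max_iters):
        x_next = np.where(W @ x + I >= 0, 1.0, -1.0)  # X' from X, X'' from X', ...
        if np.array_equal(x_next, x):  # stable: further updates change nothing
            return x
        x = x_next
    return x

# Store one pattern, then recall from a probe with one flipped bit.
X = np.array([1., -1., 1., -1.])
W = np.outer(X, X)
np.fill_diagonal(W, 0.0)
probe = np.array([1., -1., 1., 1.])
```

Starting from the distorted probe, the loop settles on the stored pattern, illustrating the pattern-completion behavior described above.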
If the input pattern X is an incomplete pattern or if it contains some distortions, the stored pattern to which the network stabilizes is typically one that is most similar to X without the distortions. This feature is called pattern completion and is very useful in many image processing applications.
During decoding, there are several schemes that can be used to update the output of the units: synchronous (or parallel, as it is termed in some of the literature), asynchronous (or sequential), or a combination of the two (hybrid).
Using the synchronous updating scheme, the outputs of the units are updated as a group prior to feeding the output back to the network. Using the asynchronous updating scheme, on the other hand, the outputs of the units are updated in some order (e.g., random or sequential), and the output is fed back to the network after each unit update. Using the hybrid synchronous-asynchronous updating scheme, subgroups of units are updated synchronously, while units within each subgroup are updated asynchronously. The choice of updating scheme has an effect on the convergence of the network.
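For contrast with a synchronous pass, an asynchronous (sequential) sweep might be sketched as follows (random visiting order; all names here are illustrative):

```python
import numpy as np

# Hedged sketch of asynchronous updating: units are visited one at a
# time in random order, and each update is fed back immediately, so
# later units in the sweep see the already-updated state.
def recall_async(W, X, I=None, sweeps=20, seed=0):
    rng = np.random.default_rng(seed)
    I = np.zeros(len(X)) if I is None else I
    x = np.asarray(X, dtype=float).copy()
    for _ in range(sweeps):
        changed = False
        for j in rng.permutation(len(x)):
            new = 1.0 if W[j] @ x + I[j] >= 0 else -1.0
            if new != x[j]:
                x[j] = new          # immediate feedback to the network
                changed = True
        if not changed:             # a full sweep with no change: stable
            break
    return x
```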
Hopfield (1982) demonstrated that the maximum number of patterns that can be stored in a Hopfield model of m nodes before the error in the retrieved pattern becomes severe is around 0.15m. The memory capacity of the Hopfield model can be increased, as shown by Andrecut.
In spite of this seemingly limited memory capacity, several applications of the Hopfield model can be listed (Chaudhuri, Dai, deMenezes, Garcia, Poli, Soper, Suh, Tsai). Discussions of the application of the Hopfield model to combinatorial optimization problems can be found in Siu and Suh.
Various widely available texts and technical papers by Hassoun, Hertz et al., Hopfield, McEliece, Ritter, and Volk can be consulted for a mathematical discussion of the memory capacity of the Hopfield model.
In the discrete Hopfield model, the units use a slightly modified bipolar output function in which the state of a unit, i.e., its output, remains the same if its net input is equal to the threshold value:
xi(t+1) = 1 if neti(t) > θi, xi(t) if neti(t) = θi, and −1 if neti(t) < θi,
for i = 1, 2, ..., m, where t denotes the discrete time.
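This modified output rule for a single unit can be sketched as follows (theta and the argument names are illustrative assumptions):

```python
# Hedged sketch of the modified bipolar output function: the unit keeps
# its previous state when its net input is exactly at the threshold.
def output(net, x_prev, theta=0.0):
    if net > theta:
        return 1
    if net < theta:
        return -1
    return x_prev  # net == theta: state unchanged
```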
An interesting property of recurrent networks is that their state can be described by an energy function, which is used to prove their stability. For the discrete Hopfield model with wii = 0 and wij = wji, using the asynchronous updating scheme, the energy function E according to Hopfield (1982) is defined as:
E = −(1/2) Σi Σj≠i wij xi xj − Σi Ii xi
where the local minima of the energy function correspond to the energy of the stored patterns. Hopfield (1982) has shown that the energy of the discrete Hopfield model decreases or remains the same after each unit update. Therefore, the network will eventually converge to a local minimum that corresponds to a stored pattern. The stored pattern to which the network converges depends on the input pattern and the connection weight matrix.
The energy of the discrete Hopfield model is bounded from below by:
E ≥ −(1/2) Σi Σj |wij| − Σi |Ii|
for all Xk, k = 1, 2, ..., p. Since the energy is bounded from below, the network will eventually converge to a local minimum corresponding to a stored pattern.
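The energy bookkeeping can be sketched as follows (assuming bipolar states and the quadratic energy E = −1/2 xTWx − ITx; all names are illustrative):

```python
import numpy as np

# Hedged sketch: E(x) = -1/2 x^T W x - I^T x. With symmetric W and
# zero diagonal, each asynchronous unit update lowers E or leaves it
# unchanged, so a stored pattern sits at a local minimum.
def energy(W, x, I):
    return -0.5 * x @ W @ x - I @ x

X = np.array([1., -1., 1., -1.])     # stored pattern
W = np.outer(X, X)
np.fill_diagonal(W, 0.0)
I = np.zeros(4)
probe = np.array([1., -1., 1., 1.])  # one bit flipped
# The stored pattern has strictly lower energy than the distorted probe.
```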
The continuous Hopfield model is just a generalization of the discrete case. Here, the units use a continuous output function such as the sigmoid or hyperbolic tangent function. In the continuous Hopfield model, each unit has an associated capacitor Ci and resistance ri that model the capacitance and resistance of a real neuron's cell membrane, respectively. Thus the state equation of each unit is now:
Ci dui/dt = Σj wij xj − ui/Ri + Ii
where xj = f(uj) is the output of unit j and 1/Ri = 1/ri + Σj wij.
Just like in the discrete case, there is an energy function characterizing the continuous Hopfield model. The energy function due to Hopfield (1984) is given by:
E = −(1/2) Σi Σj wij xi xj + Σi (1/Ri) ∫0xi f−1(x) dx − Σi Ii xi
where f is the output function of the units and f−1 is its inverse.
It can be shown that dE/dt ≤ 0 when wij = wji. Therefore, the energy of the continuous Hopfield model decreases or remains the same. A lower bound on the energy of the continuous Hopfield model can also be established by a computation analogous to that for the discrete case.
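The continuous dynamics can be sketched with simple Euler integration (f = tanh; the constants C, R, dt and the two-unit network are illustrative assumptions, not values from the text):

```python
import numpy as np

# Hedged sketch: Euler integration of C_i du_i/dt = sum_j w_ij x_j
# - u_i/R_i + I_i with x_j = tanh(u_j). With symmetric W the energy
# is non-increasing, so the state settles to a fixed point.
def simulate(W, u0, I, C=1.0, R=1.0, dt=0.01, steps=3000):
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        x = np.tanh(u)                    # continuous output function
        u += dt * (W @ x - u / R + I) / C
    return np.tanh(u)

W = np.array([[0., 2.],
              [2., 0.]])                  # symmetric: w_ij == w_ji
out = simulate(W, [0.5, 0.1], np.zeros(2))
# With mutually excitatory weights, both units settle together at a
# high-activity fixed point.
```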