Saturday, November 12, 2011

Associative Memories



Bidirectional Associative Memory

Kosko (1988) extended the Hopfield model by incorporating an additional layer to perform recurrent autoassociations as well as heteroassociations on the stored memories.
The network structure of the Bidirectional Associative Memory (BAM) model is similar to that of the linear associator, but the connections are bidirectional, i.e., the same weight w_ij connects unit i in the X layer and unit j in the Y layer in both directions, for i = 1, 2, ..., m and j = 1, 2, ..., n.  Also, the units in both layers serve as both input and output units depending on the direction of propagation.  Propagating signals from the X layer to the Y layer makes the units in the X layer act as input units while the units in the Y layer act as output units.  The same is true for the other direction: propagating from the Y layer to the X layer makes the units in the Y layer act as input units while the units in the X layer act as output units.  Below is an illustration of the BAM architecture.

[Figure: The BAM model]
Just like the linear associator and Hopfield model, encoding in BAM can be carried out by using

W_k = X_k^T Y_k

to store a single associated pattern pair, and

W = \sum_{k=1}^{p} X_k^T Y_k

to simultaneously store p associated pattern pairs.
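As a concrete sketch (not from the original post), the encoding step can be written in a few lines of Python with NumPy; the function name bam_encode and the bipolar example patterns are assumptions made for illustration:

import numpy as np

def bam_encode(pairs):
    # Build the BAM weight matrix by summing the Hebbian outer
    # products X_k^T Y_k of the associated bipolar pattern pairs.
    m, n = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((m, n), dtype=int)
    for X, Y in pairs:
        W += np.outer(X, Y)   # one term of W = sum_k X_k^T Y_k
    return W

# Store two associated pattern pairs (m = 4, n = 3).
pairs = [(np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
         (np.array([1, 1, -1, -1]), np.array([-1, 1, 1]))]
W = bam_encode(pairs)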
After encoding, the network can be used for decoding.  In BAM, decoding involves reverberating distributed information between the two layers until the network becomes stable.  In decoding, an input pattern can be applied either on the X layer or on the Y layer.  Given an input pattern, the network propagates it to the other layer, allowing the units in that layer to compute their output values.  The pattern produced by the other layer is then propagated back to the original layer, whose units in turn compute their output values.  The new pattern produced by the original layer is again propagated to the other layer.  This process is repeated until further propagations and computations no longer change the states of the units in either layer, at which point the final pattern pair is one of the stored associated pattern pairs.  The final pattern pair produced by the network depends on the initial pattern pair and the connection weight matrix.
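A minimal sketch of this reverberation loop in Python follows; the function name bam_recall, the initial Y state, and the tie-breaking rule (a unit keeps its previous state when its net input is zero, matching the discrete BAM rule given below) are assumptions of the example:

import numpy as np

def bam_recall(W, X, max_iters=100):
    # Reverberate a bipolar pattern between the two layers until no
    # unit in either layer changes state.
    def threshold(net, prev):
        # +1 if net > 0, -1 if net < 0, previous state if net == 0
        return np.where(net > 0, 1, np.where(net < 0, -1, prev))
    Y = np.ones(W.shape[1], dtype=int)   # arbitrary initial Y state
    for _ in range(max_iters):
        Y_new = threshold(X @ W, Y)      # propagate X layer -> Y layer
        X_new = threshold(W @ Y_new, X)  # propagate Y layer -> X layer
        if np.array_equal(X_new, X) and np.array_equal(Y_new, Y):
            break                        # stable: decoding is finished
        X, Y = X_new, Y_new
    return X, Y

# Recall with the weight matrix W from the encoding sketch above.
X_out, Y_out = bam_recall(W, np.array([1, -1, 1, -1]))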
Several modes can be used to update the states of the units in both layers, namely synchronous, asynchronous, and a combination of the two.  In the synchronous updating scheme, the states of the units in a layer are updated as a group before the output is propagated to the other layer.  In asynchronous updating, units in both layers are updated in some order, and outputs are propagated to the other layer after each unit update, as sketched below.  Lastly, in synchronous-asynchronous updating, there can be subgroups of units in each layer that are updated synchronously as a group, while the subgroups themselves are updated asynchronously.
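As a sketch of the asynchronous mode (the fixed update order below is an arbitrary choice for illustration, not something prescribed by the text):

import numpy as np

def async_y_sweep(W, X, Y):
    # Asynchronous update of the Y layer: units change state one at a
    # time, so each new state is already visible when the next unit
    # (or the other layer) computes its net input.  Contrast with the
    # synchronous scheme, which updates a whole layer as a group.
    for j in range(W.shape[1]):      # arbitrary fixed update order
        net = X @ W[:, j]            # net input of Y unit j
        if net != 0:                 # net == 0 keeps the previous state
            Y[j] = 1 if net > 0 else -1
    return Y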
Since the BAM also uses the traditional Hebb learning rule to build the connection weight matrix that stores the associated pattern pairs, it too has a severely limited memory capacity.  Kosko (1988) gave the BAM storage capacity for reliable recall as less than min(m, n), i.e., the minimum of the dimensions of the two pattern spaces.  A more recent study by Tanaka et al. (2000) on the relative capacity of the BAM using statistical physics reveals that for a system having n units in each of the two layers, the capacity is around 0.1998n; for example, a BAM with 100 units per layer can reliably store only about 20 pattern pairs.
Discrete BAM
In a discrete BAM, the network propagates an input pattern X to the Y layer, where the units in the Y layer compute their net input using

y_{in_j} = \sum_{i=1}^{m} x_i w_{ij}

and determine their output values using

y_j = \begin{cases} +1 & \text{if } y_{in_j} > 0 \\ y_j & \text{if } y_{in_j} = 0 \\ -1 & \text{if } y_{in_j} < 0 \end{cases}

for j = 1, 2, ..., n.
The pattern Y produced on the Y layer is then propagated back to the X layer, allowing the units in the X layer to compute their net input using

x_{in_i} = \sum_{j=1}^{n} w_{ij} y_j

and determine their output values by

x_i = \begin{cases} +1 & \text{if } x_{in_i} > 0 \\ x_i & \text{if } x_{in_i} = 0 \\ -1 & \text{if } x_{in_i} < 0 \end{cases}

for i = 1, 2, ..., m.
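To make these formulas concrete, here is a small worked example in Python (the patterns are illustrative and assume a single stored pair):

import numpy as np

# One stored pair: X1 = (1, -1, 1, -1), Y1 = (1, 1, -1), so W = X1^T Y1.
X1 = np.array([1, -1, 1, -1])
Y1 = np.array([1, 1, -1])
W = np.outer(X1, Y1)

net_y = X1 @ W        # Y-layer net inputs: (4, 4, -4)
Y = np.sign(net_y)    # thresholded outputs: (1, 1, -1), i.e. Y1
net_x = W @ Y         # X-layer net inputs: (3, -3, 3, -3)
X = np.sign(net_x)    # (1, -1, 1, -1), i.e. X1: the pair is stable

Note that np.sign suffices here only because no net input is zero; a zero net input would have to keep the unit's previous state, as in the update rule above.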
Just like the discrete Hopfield model, there is an energy function characterizing the state of the discrete BAM.  The energy function due to Kosko (1988) is given by

E = -XWY^T = -\sum_{i=1}^{m} \sum_{j=1}^{n} x_i w_{ij} y_j

Kosko (1988) showed that the energy of the BAM decreases or remains the same after each unit update, and that it is bounded below by

E \geq -\sum_{i=1}^{m} \sum_{j=1}^{n} |w_{ij}|

Together, these two properties guarantee that the network will eventually converge to a local minimum that corresponds to a stored associated pattern pair.
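A quick sketch of these two quantities in Python (the helper name bam_energy is assumed):

import numpy as np

def bam_energy(W, X, Y):
    # Kosko's energy E = -X W Y^T for bipolar patterns X and Y.
    return -(X @ W @ Y)

X1 = np.array([1, -1, 1, -1]); Y1 = np.array([1, 1, -1])
W = np.outer(X1, Y1)
print(bam_energy(W, X1, Y1))   # -12: energy of the stored pair
print(-np.abs(W).sum())        # -12: the lower bound -sum |w_ij|

With a single stored pair, the stored pattern sits exactly at the lower bound, which is why it is a stable state of the network.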
Continuous BAM
In the continuous BAM, the units use the sigmoid or hyperbolic tangent output function.  The units in the X layer have an extra external input I_i and the units in the Y layer have an extra external input J_j (not shown in the figure), for i = 1, 2, ..., m and j = 1, 2, ..., n.  These extra external inputs lead to a modification in the computation of the net input to the units:

x_{in_i} = I_i + \sum_{j=1}^{n} w_{ij} y_j  and  y_{in_j} = J_j + \sum_{i=1}^{m} x_i w_{ij}

for i = 1, 2, ..., m and j = 1, 2, ..., n.
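A minimal sketch of one propagation step of the continuous BAM in Python, assuming the hyperbolic tangent as the output function (the text allows either sigmoid or tanh, and fixes no gain parameter):

import numpy as np

def continuous_bam_step(W, X, Y, I, J):
    # One X -> Y -> X propagation with tanh output units.
    Y_new = np.tanh(J + X @ W)       # y_in_j = J_j + sum_i x_i w_ij
    X_new = np.tanh(I + W @ Y_new)   # x_in_i = I_i + sum_j w_ij y_j
    return X_new, Y_new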
The energy function of the continuous BAM is given as

E = -\sum_{i=1}^{m} \sum_{j=1}^{n} x_i w_{ij} y_j - \sum_{i=1}^{m} I_i x_i - \sum_{j=1}^{n} J_j y_j

Kosko (1988) showed that \Delta E \leq 0, i.e., the energy never increases.  Therefore, the continuous BAM will eventually converge to a local minimum corresponding to a stored associated pattern pair.
