
Associative Memories
A Morphological Approach

by

Jason M. Carey

Semester Project: CS 539
19 December 2003


Table of Contents

Introduction
    Motivation
    Morphological Associative Memories
Work Performed
    Experiment
    Data Collection
    Data Setup for Associative Memory
    Training the Associative Memory Models
    Testing the Associative Memory Models
    User Application
Results
    Effects of Distortion and Memory Size on Average Recall Rate
    Effects of Memory Size and Letter Font on Average Recall Rate
Discussion of Results
Conclusion
References
Appendix: Tables of Results
    Results of Memory Performance with 5 Images
    Results of Memory Performance with 10 Images
    Results of Memory Performance with 26 Images
    Results of Memory Performance with 52 Images


Introduction

Motivation

Throughout the course of evolution, humans have acquired the ability to retrieve

information from applied associated stimuli. Amazingly, this ability is not hindered by

the perturbation of the original information-stimuli pair (i.e. recalling one’s relationship

with another after not seeing them for several years despite the other’s physical changes

such as aging, growing a beard, etc.). How the human brain efficiently organizes and

stores such vast amounts of information and is able to recall it given partial or incomplete

stimuli has brought forth much interest. Specifically, researchers have developed the

theoretical neural network model of the associative memory. One of the earliest variants

of this model was the linear associative memory. Although simple to formulate, this memory was severely limited in flexibility, robustness, and capacity. Its primary defects were the requirement that stored patterns be mutually orthogonal, the guarantee of perfect recall only under negligible pattern distortion, and a memory capacity constrained to no more than the length of the memory [1].

In recent years, these limitations have been addressed by the development of the fully connected, recurrent Hopfield network. Using this model, the capacity of the associative memory has been shown to be approximately $n / (4 \ln n)$, where $n$ is the length of the memory [3]. Although the memory capacity of this model is less than that of the linear associative model, no orthogonality condition is imposed on the stored patterns for perfect recall and, moreover, recall remains possible even under substantial input distortion. Today, the Hopfield

network is one of the most popular methods for binary associative memories [2].

Unfortunately, even though the Hopfield model drastically improved upon the linear associative memory, its limited capacity still made associative memories impractical.

This paper will attempt to push the limits of associative memories still further by

using the radically different model of the morphological neural network (MNN).

Historically, neural networks have assumed the biological phenomenon that the strength of


the electric potential of a signal traveling along an axon is the result of a multiplicative

process where the mechanism of the postsynaptic membrane of a neuron adds the various

potentials of electrical impulses to determine activation [4]. However, MNNs are

formulated on the belief that the strength of the electric potential is an additive process

where the postsynaptic membrane only accepts signals of a certain maximum strength

[4]. Mathematically, this replaces the historically used ring structure built on the operators of addition and multiplication with a lattice-based (semiring) structure built on addition together with the minimum and maximum operators. Ultimately, the power of MNNs as a model for associative memories lies in their ability to store as many patterns as can be represented ($2^n$ in the binary case) and to recall partial or incomplete patterns with one-step convergence [5].

Morphological Associative Memories

An associative memory is a system that, when given an input x, produces an output y; that is, the memory associates x with y. An auto-associative memory is an associative memory in which each pattern is associated with itself, i.e. $y = x$. Furthermore, a binary associative memory is an associative memory containing strictly binary values. Using neural networks, associative memories are able to recall the desired information given partial or incomplete inputs [5]. The following is a brief formulation of the morphological associative memory, taken from [5].

Morphological associative memories require the basic morphological operations of max product and min product. The max product, $C = A \vee B$, is defined for an $m \times p$ matrix $A$ and a $p \times n$ matrix $B$ as

$$c_{ij} = \bigvee_{k=1}^{p} \left( a_{ik} + b_{kj} \right),$$

where $\bigvee$ is the maximum operator. Similarly, the min product, $C = A \wedge B$, is defined for matrices $A$ and $B$ as

$$c_{ij} = \bigwedge_{k=1}^{p} \left( a_{ik} + b_{kj} \right),$$

where $\bigwedge$ is the minimum operator.
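To make these operations concrete, the following is a minimal MATLAB sketch of the two products (the function names maxprod and minprod are illustrative and are not taken from the project's files):

    function C = maxprod(A, B)
    % Morphological max product: C(i,j) = max over k of ( A(i,k) + B(k,j) ).
    [m, p] = size(A);
    [p2, n] = size(B);
    if p ~= p2, error('Inner matrix dimensions must agree.'); end
    C = zeros(m, n);
    for i = 1:m
        for j = 1:n
            C(i,j) = max(A(i,:) + B(:,j)');   % elementwise sums, then the maximum
        end
    end

    function C = minprod(A, B)
    % Morphological min product: C(i,j) = min over k of ( A(i,k) + B(k,j) ).
    [m, p] = size(A);
    [p2, n] = size(B);
    if p ~= p2, error('Inner matrix dimensions must agree.'); end
    C = zeros(m, n);
    for i = 1:m
        for j = 1:n
            C(i,j) = min(A(i,:) + B(:,j)');   % elementwise sums, then the minimum
        end
    end

Each function would live in its own .m file; for a vector argument B of size p-by-1, the result is the m-by-1 vector used in the recall formulas below.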


Using these operations, a morphological associative memory can be constructed using two separate memories, $M_{XY}$ and $W_{XY}$. Here, memory $M_{XY}$ is used for input patterns that are dilated versions of a trained pattern. Informally, dilation means to expand the image; for a black and white image, dilation causes the image to contain more black pixels. Memory $M_{XY}$, associating input pattern matrix $X$ with output pattern matrix $Y$, is defined entrywise as

$$m_{ij} = \bigvee_{\xi=1}^{k} \left( y_i^{\xi} - x_j^{\xi} \right),$$

where $x^{\xi}$ and $y^{\xi}$ are the $\xi$th patterns (columns) of the pattern matrices $X$ and $Y$, respectively, and $k$ is the number of stored pattern pairs. In contrast, memory $W_{XY}$ is used for input patterns that are eroded versions of a trained pattern. Informally, erosion means to contract the image; in the case of a black and white image, erosion causes the image to contain fewer black pixels. Memory $W_{XY}$, associating input pattern matrix $X$ with output pattern matrix $Y$, is defined entrywise as

$$w_{ij} = \bigwedge_{\xi=1}^{k} \left( y_i^{\xi} - x_j^{\xi} \right).$$

Recall from $M_{XY}$ is performed with the min product, $y = M_{XY} \wedge \tilde{x}$, while recall from $W_{XY}$ is performed with the max product, $y = W_{XY} \vee \tilde{x}$, where $\tilde{x}$ denotes a possibly distorted input [5].
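In MATLAB these two memories can be formed directly from the pattern matrices. The sketch below assumes X is an n-by-k matrix whose columns are the input patterns and Y is an m-by-k matrix whose columns are the corresponding outputs; buildMemories is an illustrative name, not one of the project files:

    function [W, M] = buildMemories(X, Y)
    % W(i,j) = min over all stored patterns of ( y_i - x_j )
    % M(i,j) = max over all stored patterns of ( y_i - x_j )
    [n, k] = size(X);
    m = size(Y, 1);
    W =  Inf * ones(m, n);
    M = -Inf * ones(m, n);
    for p = 1:k
        D = Y(:,p) * ones(1, n) - ones(m, 1) * X(:,p)';   % D(i,j) = y_i - x_j
        W = min(W, D);
        M = max(M, D);
    end

For the auto-associative case the same pattern matrix is simply passed twice, as in buildMemories(X, X).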

However, even with memories $W_{XY}$ and $M_{XY}$, the morphological memory is of limited practical use. This is because patterns are generally distorted by both dilation and erosion, which neither $W_{XY}$ nor $M_{XY}$ alone can sufficiently handle. The solution proposed in [5] is to construct a kernel matrix $Z$ which, used in conjunction with the memories $M_{ZZ}$ and $W_{ZY}$, allows morphological associative memories to recall generally distorted patterns. The kernel matrix $Z$ is defined as a matrix satisfying the following conditions:

$$M_{ZZ} \wedge x^{\xi} = z^{\xi} \qquad \text{and} \qquad W_{ZY} \vee z^{\xi} = y^{\xi} \qquad \text{for every pattern index } \xi,$$

where $z^{\xi}$ denotes the $\xi$th column of $Z$.

A sufficient theorem for determining a kernel for binary associative memories is given by

Ritter and Sussner [5]:

Theorem:

Let $X$, $Y$, and $Z$ be binary pattern matrices with $Z \leq X$. If each $z^{\gamma}$ is nonzero and, for every pair of distinct pattern indices $\gamma \neq \xi$,

$$z_i^{\gamma} = 1 \;\Rightarrow\; x_i^{\xi} = 0 \qquad \text{for all } i,$$

where $x_i^{\xi}$ denotes the $i$th bit of the $\xi$th pattern (that is, every bit retained in a kernel pattern occurs in no other stored pattern), then $Z$ is a kernel for $X$ and $Y$.

Putting these pieces together, morphological associative memories are able to recall generally distorted inputs in the following manner:

$$y^{\gamma} = W_{ZY} \vee \left( M_{ZZ} \wedge \tilde{x}^{\gamma} \right),$$

where $\tilde{x}^{\gamma}$ is a (possibly distorted) version of the stored input pattern $x^{\gamma}$.
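Using the maxprod, minprod, and buildMemories sketches given earlier, this two-stage recall takes only a few lines of MATLAB. The memories would first be formed from the kernel, for example [Wzz, Mzz] = buildMemories(Z, Z) and [Wzy, Mzy] = buildMemories(Z, Y) (with Y replaced by X in the auto-associative case); kernelRecall is an illustrative name rather than one of the project files:

    function y = kernelRecall(Wzy, Mzz, xTilde)
    % Two-stage morphological recall:
    %   1) the min product with Mzz strips the (possibly dilated) input down
    %      toward the kernel pattern,
    %   2) the max product with Wzy then rebuilds the full associated pattern.
    z = minprod(Mzz, xTilde);   % Mzz is n-by-n, xTilde is n-by-1
    y = maxprod(Wzy, z);        % Wzy is m-by-n, so y is m-by-1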

Work Performed

Experiment

The ultimate goal of the experiment was to create a binary auto-associative

memory to recall patterns of capital alphabetic letters (i.e. A, B, … , Z) in either

Microsoft Sans Serif font or Math5 font (see Fig. 1 for an illustration of the fonts). The

alphabetic patterns were represented as a bitmap using 50 x 50 black and white pixels. A

morphological associative memory was implemented, tested, and compared to the

popular discrete Hopfield network with synchronous updating, which served as the

baseline model. Both models were trained on various sized memories consisting of 5, 10,

26, and 52 patterns. For memory sizes of 5, 10, and 26, two memories were constructed.

The first memory contained patterns strictly using Microsoft Sans Serif font while the

second memory contained the same patterns using Math5 font. When the memory size

was less than 26, letters were randomly selected from the alphabet. In the case with 52

stored patterns, the memory consisted of the 26 letters from both fonts. Note that this is

the only case where the memory contained letters from both fonts.

After training the models for the various memories, they were tested by applying each

trained pattern 5 times at each distortion setting of 0%, 2%, 4%, 8%, 10%, 15%, 20%,

and 25%. Distortion at X% implied that each pixel had X% probability of flipping

(i.e. $0 \to 1$ or $1 \to 0$). Correct recall was defined as perfect recall in an attempt to eliminate subjective or heuristic-based judging of

recall performance. Lastly, the performance measure was defined as the average recall

rate per memory size given its trained input patterns at the above distortion settings. All

training, testing, and implementation were completed using Matlab 6.5.1.


Data Collection

Data collection for the experiment required several steps. First, a pattern set had

to be determined. As mentioned above, the pattern set was decided to be capital

alphabetic letters in either Microsoft Sans Serif font or Math5 font. Capital alphabet

letters were chosen for patterns because they were abundant, easy to generate, and, most

importantly, could be reduced to 50 x 50 pixel bitmaps without severe loss of image

quality. Microsoft Sans Serif was chosen as a font setting because it most closely

represented simplistic block letters. This was hypothesized to be one of the easiest fonts

for the models to recall because the letters are readily distinguishable by the human eye.

Math5 font was chosen for comparison and exhibited a more complicated representation

of the letter. Differences in these fonts are illustrated below:

Fig. 1

After determining the pattern set, the images had to be generated. First, a Word

document consisting of the 26 letters of the alphabet was created for each font. After

generating the two documents, each was scanned using an HP ScanJet 4570c scanner.

Lastly, Adobe Photoshop 7.0 was used to crop out each letter in the scanned document

and convert each to JPEG format.

Data Setup for Associative Memory

Even though the data was collected and generated, it was still necessary to

preprocess the data before applying it to the associative memories. Preprocessing

presented two challenges. First, the letter images needed to be converted to a


representation suitable for associative memories in Matlab. This included importing the

images into Matlab, scaling them down to 50 x 50 pixel bitmaps, and converting them to

a 2500 by 1 vector. The importing and scaling were accomplished using Matlab’s image

processing toolbox while the conversion to a vector was accomplished by concatenating

each column in the bitmap. These steps can be found in the makeAlphabet.m file.
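A sketch of this preprocessing step, using standard Image Processing Toolbox functions, is given below; it follows the description above but is not the contents of makeAlphabet.m, and the convention that black pixels map to 1 is an assumption:

    function x = imageToPatternSketch(filename)
    % Read one scanned letter, reduce it to a 50-by-50 black-and-white bitmap,
    % and stack the bitmap's columns into a single 2500-by-1 binary vector.
    img = imread(filename);                   % e.g. a cropped JPEG of one letter
    if ndims(img) == 3
        img = rgb2gray(img);                  % collapse a color scan to grayscale
    end
    bw = im2bw(imresize(img, [50 50]), 0.5);  % scale down and threshold
    bw = ~bw;                                 % im2bw marks white as 1; flip so black = 1
    x = double(bw(:));                        % column-major concatenation -> 2500-by-1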

The second challenge was to be able to perturb the image to a desired distortion so

that partial trained patterns could be applied to the memories. For this, two probabilistic

distortion methods were chosen. The first method distorted each bit of the image with a

fixed probability (saltAndPepper.m) whereas the second method distorted each bit based

on its value (probSaltAndPepper.m). In other words, in the second method white pixels

and black pixels were distorted with individual probabilities. Only the first method was

used for the experiment, but the user application utilizes both methods. The effects of

distortion using the first method are illustrated below:

Fig. 2
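A minimal sketch of the first distortion method (a fixed flip probability for every pixel) is shown below; it mirrors what saltAndPepper.m is described as doing, but it is a sketch rather than the project's code:

    function xd = saltAndPepperSketch(x, p)
    % Flip each binary pixel of x (values in {0,1}) independently with probability p.
    flips = rand(size(x)) < p;    % logical mask marking the pixels to flip
    xd = double(xor(x, flips));   % flipped pixels change 0 -> 1 and 1 -> 0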

Training the Associative Memory Models

Before testing could begin, both the Hopfield associative memory and the MNN

associative memory needed to be trained. A 3.0 GHz Pentium 4 processor with 1 GB

memory was used for training. The Hopfield model was iteratively trained using J.J.

Hopfield’s proposed learning algorithm [1]. Note that the Hopfield model uses values in

the set {-1, 1} instead of {0, 1}, so it was necessary to first convert the memories to this

format. Training this memory resulted in a single 2500 x 2500 weight matrix, and took

approximately 20 minutes for the tested memory sizes (see HOP_Training.m).
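For reference, a minimal sketch of the standard outer-product (Hebbian) training rule for a Hopfield memory is shown below; it performs the {0,1}-to-{-1,1} conversion mentioned above, but it is only an illustration and is not the contents of HOP_Training.m:

    function W = hopfieldTrainSketch(X)
    % X is an n-by-k matrix of binary patterns (values in {0,1}), one per column.
    [n, k] = size(X);
    S = 2*X - 1;                      % map {0,1} -> {-1,+1}
    W = zeros(n, n);
    for p = 1:k
        W = W + S(:,p) * S(:,p)';     % Hebbian outer-product accumulation
    end
    W = W - diag(diag(W));            % remove self-connections
    W = W / n;                        % a common normalization (optional)

Recall would then iterate a synchronous update such as y = sign(W*y) up to some iteration limit; this is one common choice, not necessarily the exact rule used in HopAA.m.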

Training for the MNN associative memory was more involved. First, the MNN

model required the construction of a kernel matrix for each memory. The kernel matrix


is essentially a reduction of the memory so that each image is mapped to a unique subset

of itself. This is illustrated below with the letters A and B.

Fig. 3 Fig. 4

During the research phase, no algorithm for developing a general kernel was found in

the literature. To bypass this obstacle, a randomizing algorithm was implemented to

satisfy Ritter and Sussner’s Theorem mentioned above (see getKernel.m). It is worth

noting that a genetic algorithm was also considered for this task, but the preliminary

implementations were inefficient compared to the randomizing algorithm because

generating a gene pool of 2500 x 2500 matrices demanded a large computational cost. A

more suitable representation may exist for a genetic algorithm to complete this task.
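One plausible form for such a randomizing construction is sketched below: for each image it keeps a few randomly chosen black pixels that appear in no other stored image, which matches the "unique subset of itself" description above. The function name, the bitsPerPattern parameter, and the exact selection rule are assumptions and need not match getKernel.m:

    function Z = randomKernelSketch(X, bitsPerPattern)
    % X is an n-by-k binary matrix, one stored pattern per column.
    % For each pattern, retain a few randomly chosen bits that are set in that
    % pattern and in no other pattern; every other bit of the kernel is zero.
    [n, k] = size(X);
    Z = zeros(n, k);
    for p = 1:k
        others = X;
        others(:, p) = [];                                  % every pattern except p
        uniqueBits = find(X(:, p) == 1 & sum(others, 2) == 0);
        if isempty(uniqueBits)
            error('Pattern has no bits unique to it; no kernel found this way.');
        end
        pick = uniqueBits(randperm(length(uniqueBits)));
        pick = pick(1:min(bitsPerPattern, length(pick)));
        Z(pick, p) = 1;                                     % retained kernel bits
    end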

Using the determined kernels, the MNN model was trained, resulting in memories $M_{ZZ}$ and $W_{ZX}$. Both of these were of size 2500 x 2500, and training the pair took approximately 10 minutes for the tested memory sizes (see MNN_Training.m).

Testing the Associative Memory Models

Once the models were trained for all memory sizes, testing began. As mentioned

above, for each memory size, each trained pattern was presented 5 times at each

distortion setting of 0%, 2%, 4%, 8%, 10%, 15%, 20%, and 25%. The Hopfield

associative memory was allowed a maximum of 1000 iterations for convergence given an

input pattern (HopAA.m). On the other hand, only one step was necessary for the MNN

model using the following method:

$$\tilde{y} = W_{ZX} \vee \left( M_{ZZ} \wedge \tilde{x} \right),$$

where $\tilde{x}$ is the distorted input pattern.


Model performance was measured by the average recall rate per memory size given the

above applied patterns, where recall was correct if and only if it was perfect. Lastly, after

recording the results of the 5 trials for each input, the average recall rate was computed

for each distortion setting.
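To make the evaluation procedure explicit, a minimal sketch of the recall-rate computation is given below, reusing the saltAndPepperSketch helper from earlier. Here recallFn stands for whichever recall routine is under test (for example kernelRecall with fixed memories, or an iterated Hopfield update); none of these names come from the project's files:

    function rates = recallRateSketch(X, recallFn, distortions, trialsPerPattern)
    % X: n-by-k matrix of stored binary patterns, one pattern per column.
    % distortions: vector of flip probabilities, e.g. [0 .02 .04 .08 .10 .15 .20 .25].
    % recallFn: name or handle of a recall routine mapping an n-by-1 input to an
    %           n-by-1 output (invoked through feval below).
    [n, k] = size(X);
    rates = zeros(1, length(distortions));
    for d = 1:length(distortions)
        correct = 0;
        for p = 1:k
            for t = 1:trialsPerPattern
                xTilde = saltAndPepperSketch(X(:,p), distortions(d));
                y = feval(recallFn, xTilde);
                correct = correct + isequal(y, X(:,p));   % only perfect recall counts
            end
        end
        rates(d) = correct / (k * trialsPerPattern);
    end

In the experiment described above, trialsPerPattern would be 5 and the distortion vector would contain the eight listed settings.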

User Application

A user application, RunAM.m, was developed so that the claims reported below can be tested directly (read the Readme.txt before running the program).

Results

Complete results can be found in the Appendix. The illustrations below summarize the results graphically; the following abbreviations are used:

MNN means Morphological Neural Network
HOP means Hopfield Neural Network
MSS means Microsoft Sans Serif font
M5 means Math5 font

Below is a graphical description of the effects of image distortion on average

recall rate for a memory consisting of 5 images.


Fig. 5

Below is a graphical description of the effects of image distortion on average

recall rate for a memory consisting of 10 images.

Fig. 6

Below is a graphical description of the effects of image distortion on average

recall rate for a memory consisting of 26 images.


Fig. 7


Below is a graphical description of the effects of image distortion on average

recall rate for a memory consisting of 52 images.

Fig. 8

Below is a graphical description of the effects of memory size (5, 10, and 26) and

letter font on average recall rate for the morphological model.

Fig. 9


Discussion of Results

The above results reveal several observations. A minor observation regarding

training is that the morphological model consistently required no more than half the training time of the Hopfield model, although this may be an artifact of the implementations. As for performance, for both the Hopfield model and the morphological model the results clearly

illustrate that average recall performance degrades as the memory size increases. This is

most noticeable with the Hopfield model. When a memory contained 5 images, this

model had a perfect recall rate at all tested distortion levels with images using Microsoft

Sans Serif font while the recall rate was between 45% and 100% for images using Math5

font. However, when the memory contained more than 5 images, it never correctly

recalled a pattern. Similarly, but far less dramatically, the morphological model’s

performance degraded with memory size.

Another observation is that both models were worse in recalling images using

Math5 font. This observation is most evident in the morphological model’s performance

illustrated in figure 9, since the Hopfield model’s performance degraded so quickly that it

can only be seen in figure 5. It is worth noting that this observation supports the initial

hypothesis that letters in Math5 font are more difficult to distinguish.

A very important observation is that at 0% distortion (i.e. the original image)

regardless of the memory size, the morphological associative memory always recalled

perfectly. This is not a surprise because the formulation of this model guarantees this

property. The implications of this are amplified by comparing it to the Hopfield network

with memories containing more than 5 images. In these cases, the Hopfield network was

scarcely able to correctly recall an image. Ultimately, given the above results, it is clear

that the associative memory using a morphological model outperformed the memory

using a Hopfield model in the areas of robustness, capacity, and training algorithm

complexity.


Conclusion

The ability of humans to retrieve information from associated stimuli continues

to elicit great interest among researchers. Much progress has been made in pushing the

limitations of associative memories. One major milestone in this progression came with

the advent of the Hopfield network as an underlying model for associative memories.

Although this did not improve storage capacity, it did alleviate the orthogonality

constraint placed on the input by linear associative memories and also allowed for greater

input distortion. This paper has shown that progress has not stopped for associative

memories by comparing the Hopfield associative memory model to the radically different

morphological model. Specifically, the morphological associative memory addresses several limitations inherent in the Hopfield model. This includes expanding the storage capacity from approximately $n / (4 \ln n)$ in the Hopfield case to as many patterns as can be represented ($2^n$ in the binary case) [3, 5]. In addition, an improvement in robustness and in the time complexity of both training and testing was demonstrated. Further research into

morphological memories includes constructing kernels for non-binary matrices and

increasing the memory's robustness. Although the results obtained here suggest that it is still premature to adopt associative memories as a mainstream soft computing technique, their continual improvement does indicate that this may one day be the case.


References

[1] Y. H. Hu. Associative Learning and Principal Component Analysis. Lecture 6 Notes, 2003.

[2] R. P. Lippmann. An Introduction to Computing with Neural Nets. IEEE ASSP Magazine, 4(2):4-22, 1987.

[3] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh. The Capacity of the Hopfield Associative Memory. IEEE Transactions on Information Theory, 33(4):461-482, 1987.

[4] G. X. Ritter and P. Sussner. An Introduction to Morphological Neural Networks. In Proceedings of the 13th International Conference on Pattern Recognition, pages 709-711, Vienna, Austria, 1996.

[5] G. X. Ritter, P. Sussner, and J. L. Diaz de Leon. Morphological Associative Memories. IEEE Transactions on Neural Networks, 9(2): 281-293, March 1998.
