
Page 1:

Topological Neural Networks

Bear, Connors & Paradiso (2001). Neuroscience: Exploring The Brain. Pg. 474.

Page 2:

Self-Organizing Maps

• SOMs = Competitive Networks where:

– 1 input and 1 output layer.

– All input nodes feed into all output nodes.

– Output layer nodes are NOT a clique. Each node has a few neighbors.

– On each training input, every output node whose topological distance dT from the winner node is at most D has its incoming weights modified.

• dT(yi, yj) = the number of nodes that must be traversed in the output layer when moving between output nodes yi and yj.

– D is typically decreased as training proceeds. (A code sketch of this update rule follows the figure below.)

[Figure: the input layer is fully interconnected to the output layer; the output layer itself is only partially intraconnected, each node linking to a few neighbors.]
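A minimal Python sketch of this update rule, assuming a 2-D output grid, Manhattan grid steps as the topological distance dT, and illustrative sizes and learning rate (none of these specific values come from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 4                              # dimension of each input vector (assumed)
grid = (10, 10)                           # output layer laid out as a 2-D grid (assumed)
weights = rng.random((*grid, n_inputs))   # incoming weights of every output node

def d_T(a, b):
    """Topological distance: grid steps between output nodes a and b (Manhattan here)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def train_step(x, D, lr=0.1):
    """Competitive update: the winner and every node within topological distance D
    move their incoming weights toward the input x."""
    dists = np.linalg.norm(weights - x, axis=2)        # Euclidean match to every output node
    winner = np.unravel_index(np.argmin(dists), grid)
    for i in range(grid[0]):
        for j in range(grid[1]):
            if d_T((i, j), winner) <= D:
                weights[i, j] += lr * (x - weights[i, j])
    return winner
```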

Page 3:

There Goes The Neighborhood

[Figure: output-layer neighborhoods of radius D = 1, D = 2, and D = 3 around a winner node.]

• As the training period progresses, gradually decrease D.

• Over time, islands form in which the center represents the centroid C of a set of input vectors, S, while nearby neighbors represent slight variations on C and more distant neighbors are major variations.

• These neighbors may only win on a few (or no) input vectors, while the island center will win on many of the elements of S.
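A sketch of one way to shrink D during training; the linear schedule and the starting radius below are assumptions, not values from the slides:

```python
def neighborhood_radius(t, T, D0=3):
    """Shrink D linearly from D0 toward 0 over T training steps (schedule is an assumption)."""
    return max(0, round(D0 * (1 - t / T)))

# With D0 = 3 and T = 100, D steps down 3 -> 2 -> 1 -> 0 as training proceeds.
for t in (0, 40, 70, 99):
    print(t, neighborhood_radius(t, 100))
```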

Page 4:

Self Organization

• In the beginning, the Euclidean distance dE(yl,yk) and the topological distance dT(yl,yk) between output nodes yl and yk will not be related.

• But during the course of training, they will become positively correlated: Neighbor nodes in the topology will have similar weight vectors, and topologically distant nodes will have very different weight vectors.

d_E(y_l, y_k) = \sqrt{ \sum_{i=1}^{n} (w_{li} - w_{ki})^2 }   (the Euclidean distance between the weight vectors of yl and yk)

[Figure: emergent structure of the output layer, before vs. after training, comparing Euclidean neighbors with topological neighbors.]
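One way to check this claim numerically, reusing the 2-D grid layout from the earlier sketch; the function name and the use of Pearson correlation are my own choices, not from the slides:

```python
import numpy as np

def topology_weight_correlation(weights):
    """Pearson correlation between topological distance d_T and Euclidean weight
    distance d_E over all pairs of output nodes in a 2-D grid."""
    rows, cols, _ = weights.shape
    nodes = [(i, j) for i in range(rows) for j in range(cols)]
    d_t, d_e = [], []
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            (i1, j1), (i2, j2) = nodes[a], nodes[b]
            d_t.append(abs(i1 - i2) + abs(j1 - j2))
            d_e.append(np.linalg.norm(weights[i1, j1] - weights[i2, j2]))
    return np.corrcoef(d_t, d_e)[0, 1]

# Before training this is near 0 (weights are random); after training it becomes clearly positive.
```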

Page 5:

Self-Organized Maps for Robot Navigation

Owen & Nehmzow (1998)

Task: Autonomous robot navigation in a laboratory

Goals:

1. Find a useful internal representation (i.e. map) that supports an intelligent choice of actions for the given sensory inputs

2. Let the robot build/learn the map itself

- Saves the user from specifying it.

- Allows the robot to handle new environments.

- By learning the map in a noisy, real-world situation, the robot will be more apt to handle other noisy environments.

Approach:

• Use an SOM to organize situation-action vectors.

• The emerging structure of the SOM then constitutes the robot’s functional internal representation of both the outside world and the appropriate actions to take in different regions of that world.

Page 6:

The Training Phase

[Figure: the robot R; its sensory readings and the correct actions ("Turn right & slow down") form the input vector fed to the SOM (input layer → output layer).]

1. Record sensory info.

2. Get the correct actions (e.g., "Turn right & slow down").

3. Input vector = sensory inputs & actions.

4. Run the SOM on the input vector.

5. Update the winner & its neighbors.
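A sketch of steps 3-5, assuming a fixed-length sensor array and a two-number action encoding (the actual sensors and encoding used by Owen & Nehmzow's robot are not specified on this slide); train_step is the SOM update from the earlier sketch, with n_inputs set to the length of the combined vector:

```python
import numpy as np

def training_vector(sensors, actions):
    """Step 3: the input vector is the sensory readings concatenated with the correct actions."""
    return np.concatenate([sensors, actions])

# Hypothetical example: 8 range readings plus an assumed (turn, speed) encoding
# for "turn right & slow down".
sensors = np.array([0.9, 0.8, 0.2, 0.1, 0.3, 0.7, 0.9, 0.8])
actions = np.array([-1.0, 0.2])
x = training_vector(sensors, actions)
# winner = train_step(x, D)    # steps 4-5: run the SOM and update the winner & its neighbors
```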

Page 7:

The Testing Phase

[Figure: the robot R; its sensory readings alone are fed to the SOM (input layer → output layer), and the winner's weight vector yields the recommended action A.]

1. Record sensory info.

2. Input vector = sensory inputs & no actions.

3. Run the SOM on the input vector.

4. Read the recommended actions from the winner's weight vector.
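A sketch of the recall step, assuming the same vector layout as above (sensory components first, action components last) and matching the winner on the sensory part only; the slides only say the input has no actions, so this masking choice is an assumption:

```python
import numpy as np

def recall_actions(weights, sensors, n_actions=2):
    """Find the winner using only the sensory components, then read the recommended
    actions from the action slots of the winner's weight vector."""
    rows, cols, _ = weights.shape
    sensory_part = weights[:, :, :len(sensors)]            # ignore the action slots
    dists = np.linalg.norm(sensory_part - sensors, axis=2)
    i, j = np.unravel_index(np.argmin(dists), (rows, cols))
    return weights[i, j, -n_actions:]                      # e.g. the stored (turn, speed) pair
```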

Page 8:

Clustering of Perceptual Signatures

• The general closeness of successive winners shows a correlation between points & distances in the objective world and the robot’s functional view of that world.

• Note: A trace of the robot’s path on a map of the real world (i.e. lab floor) would have ONLY short moves.

[Figure: the sequence of winner nodes during the testing phase of a typical navigation task.]

Page 9:

SOM for Navigation Summary

• SOM Regions = Perceptual Landmarks = Sets of similar perceptual patterns

• Navigation = Association of actions with perceptual landmarks

• Behavior is controlled by the robot’s subjective functional interpretation of the world, which may abstract the world into a few key perceptual categories.

• No extensive objective map of the entire environment is required.

• Useful maps are user & task centered.

• Robustness (Fault Tolerance): The robot also navigates successfully when a few of its sensors are damaged => The SOM has generalized from the specific training instances.

• Similar neuronal organizations, with correlations between points in the visual field and neurons in a brain region, are found in many animals.

Page 10:

Brain Maps

Page 11:

Tonotopic Maps in the Auditory System

[Figure: the auditory pathway — Cochlea (inner ear) → Spiral Ganglion → Ventral Cochlear Nucleus → Superior Olive (source localization via delay lines) → Inferior Colliculus → MGN → Auditory Cortex. At each stage, different frequencies (1 kHz, 4 kHz, 10 kHz, 20 kHz) map to distinct positions.]

Tonotopy is preserved through all 7 levels of processing.

Page 12:

Source Localization using Delay Lines

[Figure: delay lines from the right and left ears converge on location-detection neurons tuned to source directions: Left 90°, Left 45°, Straight Ahead, Right 45°, Right 90°.]

• Location-detection neurons: need 2 simultaneous inputs to fire.

• Transformer neurons: convert sound frequency into a neural firing pattern, which is phase-locked to the sound waves (although at a lower frequency).

• Owls have different ear heights, so they can use the same mechanism for horizontal and vertical localization.

• Topological map = similar directions are detected by neighboring cells.
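A toy functional sketch of the delay-line idea, not a neural simulation: each detector compensates for a different interaural delay, and the detector whose delay best cancels the measured difference gives the direction. All delay values and direction labels below are illustrative assumptions:

```python
import numpy as np

# Detector delays (microseconds) and the directions they stand for (assumed values).
DELAYS_US = (400, 200, 0, -200, -400)
DIRECTIONS = ("Left 90", "Left 45", "Straight Ahead", "Right 45", "Right 90")

def localize(left_arrival_us, right_arrival_us):
    """Pick the detector whose built-in delay best matches the interaural time difference."""
    itd = right_arrival_us - left_arrival_us     # positive: sound reached the left ear first
    mismatch = [abs(itd - d) for d in DELAYS_US]
    return DIRECTIONS[int(np.argmin(mismatch))]

# A sound that reaches the left ear 200 microseconds before the right ear is classed as "Left 45".
print(localize(0.0, 200.0))
```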

Page 13:

Ocular Dominance & Orientation Columns

[Figure: ocular dominance columns — alternating left-eye and right-eye bands in layers 5 & 6 of V1, fed from the retina via the LGN — and orientation tuning curves plotting neural response (firing rate) against orientation angle from −90° to +90°.]

2 topological maps:

• Cells respond to lines at particular angles, and nearby cells respond to similar angles.

• Regions of cells respond to the same eye.

Page 14:

Self-Organizing Maps of Orientation Columns

[Figure: training patterns presented to the retina, which projects to the visual cortex.]

• Each VC cell gets input from all retinal cells.

• Initially, all weights are random.

• Each pattern is presented, and the "winning" VC cell gets to change its weights to better match the input.

• Nearby cells in a slowly-shrinking neighborhood also update their weights.
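A sketch of how such training patterns might be generated, assuming they are oriented bars on a small square retina (the slides only label them "training patterns"; the bar assumption matches the orientation preferences that emerge on the next slide):

```python
import numpy as np

def oriented_bar(angle_deg, size=9, thickness=1.0):
    """A size x size retinal activation pattern containing a bar through the centre
    at the given orientation (format and sizes are assumptions)."""
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2
    theta = np.deg2rad(angle_deg)
    dist = np.abs(xs * np.sin(theta) - ys * np.cos(theta))   # distance from the bar's axis
    return (dist <= thickness).astype(float).ravel()

# Training set: bars at many orientations, fed one at a time to the SOM update sketched earlier
# (with n_inputs = size * size).
patterns = [oriented_bar(a) for a in range(0, 180, 10)]
```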

Page 15:

Emerging Orientation Preferences

• Many VC cells show a strong preference for a particular orientation.

• Neighboring cells show similar preferences.

• Gradients of preferred orientations often form along vertical, horizontal and diagonal axes.

Page 16:

Biological Kohonen Maps

• The orientation columns clearly emerge from a Kohonen algorithm of weight-change spreading in a slowly-shrinking neighborhood.

• But this lacks biological realism.

• So classic Kohonen maps cannot explain the formation of orientation columns.

• However, the basic neurophysiological mechanisms of late-stage long-term potentiation (late LTP) can be used in a modified Kohonen map to produce similar orientation columns.

• Key idea: When a pre-synaptic (retinal) node R and post-synaptic (VC) node V both fire, then:

– the R→V weight should be increased,

– R should produce more axons, some of which will spread to OTHER post-synaptic nodes, V2, V3, ... in the general neighborhood of V.

• So each training pattern will produce a winner node, V, and portions of the input weights to V will be randomly distributed to the input weights of some neighboring VC nodes: Weight(R,V) will affect Weight(R,V2), Weight(R,V3), etc.

• Extent of random neighborhoods gradually decreases (just as the degree of plasticity decreases during biological development)
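A sketch of this modified update, again on a 2-D grid with Manhattan topological distance; the learning rate, the spread fraction, and the number of randomly chosen neighbors are assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)

def modified_kohonen_step(weights, x, D, lr=0.1, spread=0.3, n_spread=3):
    """Move the winner V toward the input, then hand a random portion of that weight
    change to a few randomly chosen VC nodes within topological distance D of V."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    vi, vj = np.unravel_index(np.argmin(dists), (rows, cols))
    delta = lr * (x - weights[vi, vj])              # weight change for the winner V
    weights[vi, vj] += delta
    neighbors = [(i, j) for i in range(rows) for j in range(cols)
                 if 0 < abs(i - vi) + abs(j - vj) <= D]
    if neighbors:
        picks = rng.choice(len(neighbors), size=min(n_spread, len(neighbors)), replace=False)
        for k in picks:
            i, j = neighbors[k]
            weights[i, j] += spread * rng.random() * delta   # random share of V's change spreads here
    return (vi, vj)
```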

Page 17:

[Figure: a retinal node R projects to the winner VC node; part of the weight change also spreads to a randomly chosen neighboring VC node (retina → visual cortex).]