CAP6938 Neuroevolution and Artificial Embryogeny: Basic Concepts. Dr. Kenneth Stanley, January 11, 2006


Page 1: CAP6938 Neuroevolution and Artificial Embryogeny Basic Concepts

CAP6938Neuroevolution and Artificial Embryogeny

Basic Concepts

Dr. Kenneth Stanley

January 11, 2006

Page 2

We Care About Evolving Complexity, So Why Neural Networks?

• Historical origin of ideas in evolving complexity

• Representative of a broad class of structures

• Illustrative of general challenges

• Clear beneficiary of high complexity

Page 3

How Do NNs Work?

[Figure: two example networks, each mapping a set of inputs to outputs]

Page 4

How Do NNs Work? Example

[Figure: robot controller network. Inputs (sensors): Front, Left, Right, Back. Outputs (effectors/controls): Forward, Left, Right.]

Page 5

What Exactly Happens Inside the Network?

• Network Activation

Neuron j activation:

$H_j = \sum_{i=1}^{n} x_i w_{ij}$

[Figure: inputs X1, X2 feeding hidden neurons H1, H2 through weights w11, w12, w21, w22, which feed outputs out1, out2]
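A minimal sketch of this weighted-sum activation in Python. The slide does not name a squashing function, so the sigmoid here, along with the function and variable names, is an illustrative assumption:

```python
import math

def activate_layer(x, w):
    """H_j = sigmoid(sum_i x_i * w[i][j]) for each neuron j.
    The sigmoid is an assumed squashing function."""
    n_out = len(w[0])
    return [1.0 / (1.0 + math.exp(-sum(x[i] * w[i][j] for i in range(len(x)))))
            for j in range(n_out)]

# Two inputs feeding two hidden neurons; w[i][j] connects input i to neuron j.
x = [1.0, 0.5]
w = [[0.4, -0.2],
     [0.8,  0.6]]
h = activate_layer(x, w)  # roughly [0.690, 0.525]
```

Each hidden neuron simply sums its weighted inputs and squashes the result into (0, 1).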

Page 6

Recurrent Connections

• Recurrent connections are backward connections in the network

• They allow feedback

• Recurrence is a type of memory

[Figure: inputs X1, X2 feeding hidden neuron H through weights w11, w21; H feeds output out through wH-out, with a recurrent connection wout-H running from out back to H]

Page 7

Activating Networks of Arbitrary Topology

• Standard method makes no distinction between feedforward and recurrent connections:

$H_{j(t)} = \sum_{i=1}^{n} x_{i(t-1)} w_{ij}$

• The network is then usually activated once per time tick

• The number of activations per tick can be thought of as the speed of thought

• Thinking fast is expensive

[Figure: inputs X1, X2 feeding hidden neuron H through weights w11, w21; H feeds output out through wH-out, with a recurrent connection wout-H from out back to H]
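The standard once-per-tick scheme can be sketched as follows. The network here (node indices, weight values, and the recurrent link) is hypothetical, and a sigmoid squashing function is assumed:

```python
import math

def tick(values, weights):
    """One activation tick: every neuron recomputes from the PREVIOUS
    tick's values, so feedforward and recurrent connections are
    treated identically."""
    sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))
    new = list(values)
    for j, incoming in weights.items():
        new[j] = sigmoid(sum(values[i] * w for i, w in incoming.items()))
    return new

# Hypothetical net: nodes 0 and 1 are inputs, 2 is hidden, 3 is output,
# with a recurrent link from the output back to the hidden node.
weights = {
    2: {0: 0.5, 1: 0.5, 3: -0.3},  # hidden: two inputs plus feedback
    3: {2: 1.0},                   # output
}
values = [1.0, 1.0, 0.0, 0.0]
for _ in range(3):  # three time ticks
    values = tick(values, weights)
```

Because every neuron reads last tick's values, the recurrent link needs no special handling; it is just another incoming connection.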

Page 8

Arbitrary Topology Activation Controversy

• The standard method is not necessarily the best

• It allows “delay-line” memory and a very simple activation algorithm with no special case for recurrence

• However, “all-at-once” activation utilizes the entire net in each tick with no extra cost

• This issue is unsettled
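The difference can be illustrated with a toy linear chain. The `step` function and the network here are illustrative assumptions, not code from the slides:

```python
def step(values, weights):
    """One synchronous pass: every node recomputes from previous values."""
    new = list(values)
    for j, incoming in weights.items():
        new[j] = sum(values[i] * w for i, w in incoming.items())
    return new

# Chain 0 -> 1 -> 2 with identity weights (linear nodes for clarity).
weights = {1: {0: 1.0}, 2: {1: 1.0}}

# Once-per-tick ("delay-line"): the input needs two ticks to reach node 2.
v = [1.0, 0.0, 0.0]
v = step(v, weights)        # after tick 1: node 1 sees it, node 2 does not
tick1 = v[2]                # 0.0
v = step(v, weights)
tick2 = v[2]                # 1.0

# "All-at-once": activate repeatedly within a single tick until the
# signal has crossed the whole network.
v = [1.0, 0.0, 0.0]
for _ in range(2):          # repeat to the depth of the network
    v = step(v, weights)
all_at_once = v[2]          # 1.0 within one tick
```

Once-per-tick activation stores recent history in the pipeline of node values, while all-at-once activation spends extra passes to use the whole network immediately.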

Page 9

The Big Questions

• What is the topology that works?

• What are the weights that work?

[Figure: a network whose topology and weights are all unknown, marked with question marks]

Page 10

Problem Dimensionality

• Each connection (weight) in the network is a dimension in a search space

• The space you’re in matters: Optimization is not the only issue!

• Topology defines the space

[Figure: a 21-dimensional search space versus a 3-dimensional search space]
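A quick way to see how topology sets dimensionality is to count connections. Assuming fully connected layered networks (the slide's figures are not shown, so the layer sizes below are illustrative), each weight is one search dimension:

```python
def weight_count(layer_sizes):
    """Number of weights (search dimensions) in a fully connected
    layered network with the given layer sizes."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# E.g. 4 inputs -> 3 hidden -> 3 outputs gives 4*3 + 3*3 = 21 dimensions,
# while 3 inputs -> 1 output is only a 3-dimensional search space.
dims_big = weight_count([4, 3, 3])   # 21
dims_small = weight_count([3, 1])    # 3
```

Adding even a few nodes multiplies the number of weights, so the choice of topology directly determines how large the optimization problem is.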

Page 11

High Dimensional Space is Hard to Search

• 3-dimensional – easy

• 100-dimensional – need a good optimization method

• 10,000-dimensional – very hard

• 1,000,000-dimensional – very, very hard

• 100,000,000,000,000-dimensional – forget it

Page 12

Bad News

• Most interesting solutions are high-D:
– Robotic Maid
– World Champion Go Player
– Autonomous Automobile
– Human-level AI
– Great Composer

• We need to get into high-D space

Page 13

A Solution (preview)

• Complexification: Instead of searching directly in the space of the solution, start in a smaller, related space, and build up to the solution

• Complexification is evident in countless examples of social and biological progress

Page 14

So how do computers optimize those weights anyway?

• Depends on the type of problem
– Supervised: Learn from input/output examples
– Reinforcement Learning: Sparse feedback
– Self-Organization: No teacher

• In general, the more feedback you get, the easier the learning problem

• Humans learn language without supervision

Page 15

Significant Weight Optimization Techniques

• Backpropagation: Changes weights based on their contribution to error

• Hebbian learning: Changes weights based on firing correlations between connected neurons
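A sketch of the Hebbian rule in its simplest form, delta_w = lr * x_pre * x_post; the function name and learning rate here are illustrative:

```python
def hebbian_update(w, x_pre, x_post, lr=0.1):
    """Hebbian rule: the weight grows in proportion to the correlation
    of pre- and post-synaptic activity (delta_w = lr * x_pre * x_post)."""
    return w + lr * x_pre * x_post

# Correlated firing strengthens the connection (0.5 -> 0.6)...
w_up = hebbian_update(0.5, 1.0, 1.0)
# ...while anti-correlated firing (with signed activations) weakens it (0.5 -> 0.4).
w_down = hebbian_update(0.5, 1.0, -1.0)
```

Unlike backpropagation, this update needs no error signal, only the local activities of the two connected neurons.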

Homework:
- Fausett pp. 39-80 (in Chapter 2)
- Fausett pp. 289-316 (in Chapter 6)
- Online intro chapter on RL
- Optional RL survey