August 9 - 12, 2004 Intro-1
Introduction to Neural Networks for Senior Design
Neural Networks: The Big Picture

Artificial Intelligence
  Expert Systems (rule-oriented)
  Machine Learning (not rule-oriented)
    Neural Networks
Types of Neural Networks
Architecture: Recurrent, or Feedforward (no feedback)
Learning Rule: Supervised (training data available), or Unsupervised
What Is a Neural Network?
(Artificial) neural network, or (A)NN:
Information processing system loosely based on the model of biological neural networks
Implemented in software or electronic circuits
Defining properties:
  Consists of simple building blocks (neurons)
  Connectivity determines functionality
  Must be able to learn
  Must be able to generalize
Biological Inspiration for Neural Networks

Human Brain: ≈ 10^11 neurons (or nerve cells)

Dendrites: incoming extensions, carry signals in
Axons: outgoing extensions, carry signals out
Synapse: connection between 2 neurons

Learning:
  Forming new connections between the neurons, or
  Modifying existing connections
[Figure: the axon of Neuron 1 connects across a synapse to a dendrite of Neuron 2]
From Biology to the Artificial Neuron, 1
[Figure: the biological connection (axon of Neuron 1 → synapse → dendrite of Neuron 2) alongside its artificial counterpart — the output of Neuron 1 (Σ, f1) feeds Neuron 2 (Σ, f2) through weights w2,1 … w2,k … w2,n, with bias b]
From Biology to the Artificial Neuron, 2
[Figure: artificial Neuron 1 (Σ, f1) and Neuron 2 (Σ, f2); weights w2,1 … w2,k … w2,n and bias b feed Neuron 2]
• The weight w models the synapse between two biological neurons.
• Each neuron has a threshold that must be met to activate the neuron, causing it to “fire.” The threshold is modeled with the transfer function, f.
• Neurons can be
• excitatory, causing other neurons to fire when they are stimulated; or
• inhibitory, preventing other neurons from firing when they are stimulated.
Applications of Neural Networks
Aerospace: aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations
Banking: credit application evaluators
Defense: guidance and control, target detection and tracking, object discrimination, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification
Financial: real estate appraisal, loan advisor, mortgage screening, stock market analysis, stock trading advisory systems
Manufacturing: process control, process and machine diagnosis, visual quality inspection systems, computer chip quality analysis
Applications of Neural Networks, cont.
Medical: cancer cell detection and analysis, EEG and ECG analysis, disease pathway analysis
Communications: adaptive echo cancellation, image and data compression, speech synthesis, signal filtering
Robotics: Trajectory control, manipulator controllers, vision systems
Pattern Recognition: character recognition, speech recognition, voice recognition, facial recognition
Problems Suitable for Solution by NN’s
Problems for which there is no clear-cut rule or algorithm
  e.g., image interpretation
Problems that humans do better than traditional computers
  e.g., facial recognition
Problems that are intractable due to size
  e.g., weather forecasting
Math Notation/Conventions
Scalars: small italic letters, e.g., p, a, w
Vectors: small, bold, non-italic letters, e.g., p, a, w
Matrices: capital, bold, non-italic letters, e.g., P, A, W

Assume vectors are column vectors (unless stated otherwise),
e.g., p = [1 ; 2]
Network Architecture and Notation: Single-Input Neuron

Network parameters (weight: w, bias: b), adjusted via the learning rule
Net input: n
Transfer function: f (design choice)

[Figure: scalar input p, weight w, bias b (with constant input 1), summer Σ producing net input n, transfer function f, scalar output a]
a = f(n) = f(wp + b)
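The single-input computation a = f(wp + b) can be sketched in a few lines of Python (a hedged illustration; the deck uses MATLAB, and the values of p, w, b and the hard-limit choice of f below are arbitrary assumptions):

```python
def single_input_neuron(p, w, b, f):
    """Compute a = f(w*p + b) for a single-input neuron."""
    n = w * p + b   # net input: weighted input plus bias
    return f(n)     # transfer function applied to net input

# Hard-limit transfer function: 0 for n < 0, 1 for n >= 0
hardlim = lambda n: 1 if n >= 0 else 0

a = single_input_neuron(p=2.0, w=1.5, b=-1.0, f=hardlim)
# n = 1.5*2.0 - 1.0 = 2.0, so a = 1
```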
Transfer Functions – Hard Limiter

a = f(n) = 0, n < 0
           1, n ≥ 0
MATLAB: a = hardlim(n)
(often used for binary classification problems)
Transfer Functions – Symmetric Hard Limiter
a = f(n) = -1, n < 0
            1, n ≥ 0
MATLAB: a = hardlims(n)
(often used for binary classification problems)
Transfer Functions - Linear
a = f(n) = n  (slope: 1, y-intercept: 0)
MATLAB: a = purelin(n)
(often used in network training for classification problems)
Transfer Functions – Log-Sigmoid
a = f(n) = 1 / (1 + e^-n)

[Plot: S-shaped curve rising from 0 toward 1, with a = 0.5 at n = 0]
MATLAB: a = logsig(n)
(often used for training multi-layer networks with backpropagation)
Transfer Function Summary
Function                    | Equation                          | Output Range           | MATLAB
Hard Limiter                | a = 0, n < 0; 1, n ≥ 0            | Discrete: 0 or 1       | hardlim
Symmetric Hard Limiter      | a = -1, n < 0; 1, n ≥ 0           | Discrete: -1 or 1      | hardlims
Linear                      | a = n                             | Continuous: range of n | purelin
Log-Sigmoid                 | a = 1/(1 + e^-n)                  | Continuous: (0, 1)     | logsig
Hyperbolic Tangent Sigmoid  | a = (e^n - e^-n)/(e^n + e^-n)     | Continuous: (-1, 1)    | tansig
Competitive                 | a = 1, neuron w/ max n (0 else)   | Discrete: 0 or 1       | compet
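The six transfer functions in the table can be mirrored in Python/NumPy (a sketch only; the MATLAB names are reused as function names, but these are not the MATLAB implementations):

```python
import numpy as np

def hardlim(n):   # 0 for n < 0, 1 for n >= 0
    return np.where(n < 0, 0, 1)

def hardlims(n):  # -1 for n < 0, 1 for n >= 0
    return np.where(n < 0, -1, 1)

def purelin(n):   # identity: a = n
    return n

def logsig(n):    # 1 / (1 + e^-n), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-n))

def tansig(n):    # (e^n - e^-n)/(e^n + e^-n) = tanh(n), output in (-1, 1)
    return np.tanh(n)

def compet(n):    # 1 for the neuron with max net input, 0 elsewhere
    a = np.zeros_like(n)
    a[np.argmax(n)] = 1.0
    return a
```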
Multiple-Input Neurons

Consider a network with R scalar inputs: p1, p2, …, pR:
where W is the weight matrix with one row: W = [w1,1 w1,2 … w1,R]
and p is the column vector:
p = [p1 ; p2 ; … ; pR]

[Figure: scalar inputs p1, p2, …, pR with weights w1,1, w1,2, …, w1,R, bias b (constant input 1), summer Σ, net input n, transfer function f, scalar output a]
a = f(n) = f(Wp + b)
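The multiple-input computation a = f(Wp + b) maps directly onto NumPy matrix arithmetic (hedged sketch; the weight, input, and bias values below are arbitrary, and the hard limiter stands in for a generic f):

```python
import numpy as np

# Arbitrary example values (not from the slides)
W = np.array([[1.0, -2.0, 0.5]])     # 1 x R weight matrix (a single row)
p = np.array([[2.0], [1.0], [4.0]])  # R x 1 column input vector
b = np.array([[-1.0]])               # 1 x 1 bias

n = W @ p + b              # net input: (1)(2) + (-2)(1) + (0.5)(4) - 1 = 1
a = np.where(n < 0, 0, 1)  # hard-limit transfer function
```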
Multiple-Input Neurons: Matrix Form
[Block diagram: input p (R × 1) multiplied by W (1 × R); bias b (1 × 1) added to give net input n (1 × 1); transfer function f gives scalar output a (1 × 1)]

a = f(n) = f(Wp + b)
Binary Classification Example [Ref. #5]
Consider the patterns:
Features: # of vertices*, # of holes, symmetry (vertical or horizontal)
Noise-free observation vectors, or feature vectors, with symmetry coded yes (1) or no (0):

"A" ⇔ [4 ; 1 ; 1]      "F" ⇔ [2 ; 0 ; 0]

* Here a vertex is an intersection of 2 or more lines.
Binary Classification Example, continued
Targets (desired outputs for each input):
t1 = -1 (for "A")      t2 = 1 (for "F")

Neural Network (Given):

[Figure: inputs p1, p2, p3 with weights .42, -1.3, -1.3 feeding a summer Σ; transfer function f = hardlims; no bias]

W = [.42 -1.3 -1.3],  a = hardlims(Wp)
Binary Classification: Sample Calculation

W = [.42 -1.3 -1.3],  f = hardlims,  a = hardlims(Wp)

Calculating the neuron output if the input pattern is the letter "A":

Wp = [.42 -1.3 -1.3][4 ; 1 ; 1] = (.42)(4) + (-1.3)(1) + (-1.3)(1) = -.92
a = hardlims(Wp) = -1

Exercise: find Wp and the neuron output a if the input pattern is "F".
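The calculation for "A" can be checked numerically (a Python sketch mirroring the deck's MATLAB hardlims; the "F" case is left to the exercise):

```python
import numpy as np

W = np.array([[0.42, -1.3, -1.3]])     # given weight matrix (no bias)
p_A = np.array([[4.0], [1.0], [1.0]])  # "A": 4 vertices, 1 hole, symmetric (1)

n = (W @ p_A).item()     # Wp = (.42)(4) + (-1.3)(1) + (-1.3)(1), about -0.92
a = 1 if n >= 0 else -1  # hardlims: -1 for n < 0, 1 for n >= 0
```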
A Layer of 2 Neurons (Single-Layer Network)
Sometimes we need more than one neuron to solve a problem.
Example: consider a problem with 3 inputs and 2 neurons:

[Figure: inputs p1, p2, p3 fully connected to two neurons via weights w1,1 … w2,3; each neuron has a bias (b1, b2) with constant input 1, a summer Σ, a transfer function f, and an output (a1, a2)]

W = [ w1,1 w1,2 w1,3 ; w2,1 w2,2 w2,3 ],   p = [ p1 ; p2 ; p3 ]

a = [ a1 ; a2 ] = [ f(row 1 of W · p + b1) ; f(row 2 of W · p + b2) ] = f(Wp + b)
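The row-by-row computation above can be sketched with NumPy, where the matrix product does both dot products at once (the weights, biases, input, and hard-limit transfer function are arbitrary assumptions):

```python
import numpy as np

# Arbitrary example values for a 3-input, 2-neuron layer
W = np.array([[1.0, -1.0, 0.5],      # row 1: weights into neuron 1
              [2.0,  0.0, -1.0]])    # row 2: weights into neuron 2
b = np.array([[0.5], [-1.0]])        # one bias per neuron (2 x 1)
p = np.array([[1.0], [2.0], [2.0]])  # input vector (3 x 1)

n = W @ p + b              # each row of W dotted with p, plus that neuron's bias
a = np.where(n < 0, 0, 1)  # elementwise hard limit: a = f(Wp + b)
```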
Weight Matrix Notation
Recall for our single neuron with multiple inputs, we used weight matrix W with one row: W = [w1,1 w1,2 … w1,R]
General Case (multiple neurons): components of W are weights connecting some input element to the summer of some neuron
Convention (as used in Hagan) for component wi,j of W:
First index (i) indicates the neuron # the input is entering (the "to" index).
Second index (j) indicates the element # of input vector p that will be entering the neuron (the "from" index).

wi,j = w(to, from)
Single Layer of S Neurons: Matrix Form
Layer of S Multiple-Input Neurons
[Block diagram: input p (R × 1) multiplied by W (S × R); bias b (S × 1) added to give net input n (S × 1); transfer function f gives output a (S × 1)]

a = f(Wp + b)
Multiple Layers of Neurons
Allow each layer to have its own weight matrix (W), bias vector (b), net input vector (n), output vector (a), and # of neurons (S)
Notation: a superscript on the variable name indicates the layer #:
e.g., W1: weight matrix for layer 1; b2: bias vector for layer 2; a3: output vector for layer 3; S4: # of neurons in the 4th layer

Output of layer 1 is the input to layer 2, etc.

The last (right-most) layer of the network is called the output layer; the inputs are not counted as a layer at all (per Hagan); layers between the input and output are called hidden layers.
2-Layer Network: Matrix Form
[Block diagram: input p (R × 1) → Layer 1 (hidden): W1 (S1 × R), b1 (S1 × 1), net input n1 (S1 × 1), transfer function f1, output a1 (S1 × 1) → Layer 2 (output): W2 (S2 × S1), b2 (S2 × 1), net input n2 (S2 × 1), transfer function f2, output a2 (S2 × 1)]

a1 = f1(W1p + b1)
a2 = f2(W2a1 + b2)
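The two-layer forward pass simply chains the single-layer computation (Python sketch; the sizes and values below are arbitrary, with log-sigmoid hidden units and a linear output as one plausible choice, not the deck's prescription):

```python
import numpy as np

def logsig(n):
    # 1 / (1 + e^-n), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-n))

# Hypothetical sizes/values: R = 2 inputs, S1 = 3 hidden neurons, S2 = 1 output
W1 = np.array([[1.0, -1.0],
               [0.5,  0.5],
               [-2.0, 1.0]])          # S1 x R
b1 = np.array([[0.0], [0.5], [1.0]]) # S1 x 1
W2 = np.array([[1.0, -1.0, 2.0]])    # S2 x S1
b2 = np.array([[-0.5]])              # S2 x 1

p = np.array([[1.0], [2.0]])         # R x 1 input

a1 = logsig(W1 @ p + b1)  # hidden layer: a1 = f1(W1 p + b1), values in (0, 1)
a2 = W2 @ a1 + b2         # linear (purelin) output layer: a2 = f2(W2 a1 + b2)
```

Note how the output of layer 1 (a1) is the input to layer 2, exactly as stated above.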
Introduction: Practice Problem
1) For the neural network shown, find the weight matrix W and the bias vector b.
2) Find the output if f = "compet" and the input vector is p = [p1 ; p2] = [1 ; 2].

[Figure: inputs p1, p2 fully connected to two neurons, each with a bias (constant input 1), summer Σ, and transfer function f; the diagram shows weights 6, 3, 5, 2 and biases 2, -2]

a = compet(Wp + b)

where compet(n) = 1 for the neuron w/ max n, 0 else
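A Python sketch of the compet rule applied to this problem's input p = [1 ; 2]. The assignment of the weights 6, 3, 5, 2 and biases 2, -2 to matrix positions below is one hypothetical reading of the diagram, not the answer key; finding W and b is part of the exercise:

```python
import numpy as np

def compet(n):
    """Competitive transfer: 1 for the neuron with max net input, 0 for the rest."""
    a = np.zeros_like(n)
    a[np.argmax(n)] = 1.0
    return a

# Hypothetical W and b (one possible reading of the diagram, NOT the answer key)
W = np.array([[6.0, 3.0],
              [5.0, 2.0]])
b = np.array([[2.0], [-2.0]])
p = np.array([[1.0], [2.0]])  # p = [1; 2] as given in the problem

n = W @ p + b  # net inputs under these assumed weights
a = compet(n)  # the neuron with the larger net input outputs 1, the other 0
```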
Project Description
Is that a tank or a tree?

[Figure: camera image fed to a computer running a neural network]
Project Overview
[System diagram: A — an RC tank/platform/clutter is viewed by a video camera mounted on tilt/pan servos; a camera-to-computer interface delivers the image (B) to image capture software (MATLAB Image Acquisition Toolbox); data processing converts the image into NN inputs for the neural network; the computer interface to the servo controller sends movement directions for the camera back to the tilt/pan servos. These components may be combined in one or more physical units.]

Phase 1: How do we get from A to B?
Phase 2 (main project): How do we get from B to A?