Essential References
1. Alan V. Oppenheim and Ronald W. Schafer, with John R. Buck. Discrete-Time Signal Processing, 2nd ed. Prentice Hall, 1999.
2. Bernard Widrow and Samuel D. Stearns. Adaptive Signal Processing. Prentice Hall, 1985.
3. Sen M. Kuo and Dennis R. Morgan. Active Noise Control Systems: Algorithms and DSP Implementations. Wiley-Interscience, 1996.
4. Keshab K. Parhi. VLSI Digital Signal Processing Systems: Design and Implementation. Wiley-Interscience, 1999.
5. Erwin Kreyszig. Advanced Engineering Mathematics, 7th ed. John Wiley & Sons, 1993.
6. Ferrel G. Stremler. Introduction to Communication Systems, 3rd ed. Addison-Wesley, 1990.
Contents
■ Introduction to Adaptive Systems
■ Adaptive Filter Theory
■ Least Mean Square (LMS) Algorithm
■ Acoustic Considerations
■ Master’s Projects
Section I
Introduction to Adaptive Systems
Adaptive Filters Everywhere
■ Wireless - Equalization
■ Networking - Echo Cancellation
■ System Identification
■ Acoustics
■ Speech Processing
Characteristics
■ Time-variant system
■ Ingredients
  • Filter
    - Input vector x[n]
    - Weight vector(s) w_k[n], k = 0, 1, 2, …, L
    - Output vector y[n]
  • “Update” algorithm
    - w_k[n-1] → w_k[n], k = 0, 1, 2, …, L
Simple Discrete Time System
■ $y[n] = \sum_{l=0}^{L} w_l \, x[n-l]$
■ $y = \mathbf{W}^T \mathbf{X}$
■ Linear
■ Time invariant
■ Causal
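For reference, a minimal Python sketch of this fixed FIR structure; the tap values and the test signal are illustrative assumptions, not from the slides.

    import numpy as np

    def fir_filter(x, w):
        """y[n] = sum_{l=0}^{L} w[l] * x[n-l], with fixed weights w."""
        L = len(w) - 1
        y = np.zeros(len(x))
        for n in range(len(x)):
            for l in range(L + 1):
                if n - l >= 0:          # zero-pad before the first sample
                    y[n] += w[l] * x[n - l]
        return y

    # Illustrative use: a 3-tap moving average of a slow sinusoid
    x = np.sin(2 * np.pi * 0.05 * np.arange(32))
    y = fir_filter(x, np.full(3, 1 / 3))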
Adaptive System Using Feedback
■ $y[n] = \sum_{l=0}^{L} w_l[n] \, x[n-l]$
■ $y = \mathbf{W}^T[n] \, \mathbf{X}$
■ Time variant
■ Non-linear
■ Causal
[Block diagram: input x drives the adaptive weights w_k; the output y is compared with the desired signal d to form the error e.]
Section T
Adaptive Filter Theory
1950s – Adaptive Antennas
■ Birth of adaptive filters (Howells and Applebaum)
■ Antenna applications are endless
  • Reduce directional interference (sidelobe cancellation)
  • Self-coherence (a directional antenna aligns itself to the incoming signal)
  • Isolate weak signals amongst more powerful interferers
  • Discriminate between signals based on source velocity (high/low-speed discrimination)
[Block diagram: primary and jammer inputs; the adaptive canceller output is subtracted at a summing junction to form y.]
Widrow and Stearns
■ Stanford Labs, 1965
■ Desire to reduce 60 Hz tones in heart-rate monitors
■ Developed a simpler gradient-search algorithm
■ Least mean square, or LMS
Feedforward System
■ Signaling can be expressed in discrete or continuous variables, [n] or (t).
  • x := Input
  • y := Filter Output
  • d := Desired Output
  • e := Error
[Block diagram: input x drives adaptive weights w_k with an update block; the filter output y is subtracted from the desired signal d to form the error e.]
Mean Square Error
■ Mean square error (aka the second moment)
  • MSE: $\xi \triangleq E[e^2]$
■ Filter output
  • $y = x * w = \mathbf{X}_n^T \mathbf{W}_n$
■ Error output
  • $e = d - y = d - \mathbf{X}_n^T \mathbf{W}_n$
  • $e^2 = d^2 - 2d\,\mathbf{X}_n^T \mathbf{W}_n + \mathbf{W}_n^T \mathbf{X}_n \mathbf{X}_n^T \mathbf{W}_n$
  • $E[e^2] = E[d^2] - 2\,E[d\,\mathbf{X}_n^T]\,\mathbf{W}_n + \mathbf{W}_n^T\,E[\mathbf{X}_n \mathbf{X}_n^T]\,\mathbf{W}_n$
Mean Square Error
■ Define the auto- and cross-correlations
  • $\mathbf{A} = E[\mathbf{X}_n \mathbf{X}_n^T]$
  • $\mathbf{C} = E[d \cdot \mathbf{X}_n]$
  • $\xi = E[d^2] - 2\,\mathbf{C}^T \mathbf{W}_n + \mathbf{W}_n^T \mathbf{A} \mathbf{W}_n$
■ $\mathbf{W}'$ is the weight vector that minimizes the mean square error.
■ $\xi \rightarrow \xi_{\min}$ where $\nabla(\xi) = 0$
  • $\nabla(\xi) = \partial \xi / \partial \mathbf{W}$
  • $\nabla(\xi) = 2(\mathbf{A}\mathbf{W} - \mathbf{C})$
  • $2(\mathbf{A}\mathbf{W}' - \mathbf{C}) = 0$
  • $\therefore \mathbf{A}\mathbf{W}' = \mathbf{C}$ and $\mathbf{W}' = \mathbf{A}^{-1}\mathbf{C}$
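As a sanity check on $\mathbf{W}' = \mathbf{A}^{-1}\mathbf{C}$, a short Python sketch that estimates A and C from sample data and solves the normal equations; the signal setup (the desired output is a delayed, amplified copy of a random input) is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000
    x = rng.standard_normal(N)                 # illustrative input
    d = 2.0 * np.roll(x, 1)                    # assumed desired: delayed, amplified input

    X = np.stack([x, np.roll(x, 1)], axis=1)[1:]   # tap vectors [x[n], x[n-1]]
    d = d[1:]                                      # drop the wrapped first sample

    A = X.T @ X / len(X)                       # sample estimate of E[X Xᵀ]
    C = X.T @ d / len(X)                       # sample estimate of E[d X]
    W_opt = np.linalg.solve(A, C)              # W' = A⁻¹ C
    print(W_opt)                               # ≈ [0, 2]: one-sample delay, gain 2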
Performance Surface Exploration
■ Example performance surface [2]
■ Filter element is a 2-tap FIR structure with weights w_k.
■ Desired signal is a shifted and amplified version of the input.
■ N: samples per input period
Performance Surface Exploration
■ To determine ξ, we first find A and C, the correlation matrices.
Performance Surface Exploration
■ In matrix form, A and C can be expressed as shown below.
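A plausible form of these matrices, assuming the sinusoidal example from [2] with input $x[n] = \sin(2\pi n/N)$ and desired $d[n] = 2\cos(2\pi n/N)$:

$$\mathbf{A} = \frac{1}{2}\begin{bmatrix} 1 & \cos(2\pi/N) \\ \cos(2\pi/N) & 1 \end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix} 0 \\ -\sin(2\pi/N) \end{bmatrix}$$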
Performance Surface Exploration
■ ξ can be formulated as a function of the two weights, w_0 and w_1, given the correlation matrices.
■ The quadratic nature of the performance surface has a clear minimum, depicting the set of weights that result in minimum error.
Performance Surface Exploration

    for each w0
        for each w1
            xi = …
        next
    next
[Surface plot: ξ versus (w0, w1), with w0 and w1 ranging from -20 to 20 and ξ from 0 to 600.]
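A runnable Python version of the loop above, assuming the sinusoidal pair from [2] ($x[n] = \sin(2\pi n/N)$, $d[n] = 2\cos(2\pi n/N)$) and the grid limits of the plot:

    import numpy as np

    N = 16                                     # assumed samples per input period
    n = np.arange(N)
    x = np.sin(2 * np.pi * n / N)              # input
    d = 2 * np.cos(2 * np.pi * n / N)          # desired: shifted, amplified input

    X = np.stack([x, np.roll(x, 1)], axis=1)   # tap vectors [x[n], x[n-1]] (periodic)
    A = X.T @ X / N                            # E[X Xᵀ] over one period
    C = X.T @ d / N                            # E[d X]
    Ed2 = np.mean(d ** 2)

    w_grid = np.linspace(-20, 20, 81)
    xi = np.empty((len(w_grid), len(w_grid)))
    for i, w0 in enumerate(w_grid):            # "for each w0"
        for j, w1 in enumerate(w_grid):        # "for each w1"
            w = np.array([w0, w1])
            xi[i, j] = Ed2 - 2 * C @ w + w @ A @ w

Plotting xi over the grid reproduces the bowl-shaped surface, with its minimum at $\mathbf{W}' = \mathbf{A}^{-1}\mathbf{C}$.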
The Performance Surface
■ Mathematically, it is desirable to simply determine W' directly.
■ From a control-theory point of view, it is desirable for the system to traverse the performance surface slowly and smoothly.
■ Different methods of traversing the performance surface employ numerical-analysis techniques.
Newton’s Method
[Plot: Newton's method iterates on a one-dimensional curve, converging toward the zero crossing; x-axis 1 to 4.5, y-axis 0 to 9.]
Newton’s Method
■ Application to our performance surface stems from the definition of the optimum weight vector and the gradient of ξ.
■ Rearranging and combining these yields the update below.
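Following Newton's method as presented in [2], with step size µ (a reconstruction consistent with the definitions above):

$$\mathbf{W}_{n+1} = \mathbf{W}_n - \mu\,\mathbf{A}^{-1}\nabla(\xi) = \mathbf{W}_n - 2\mu\,\mathbf{A}^{-1}(\mathbf{A}\mathbf{W}_n - \mathbf{C}) = (1 - 2\mu)\,\mathbf{W}_n + 2\mu\,\mathbf{W}'$$

With µ = 1/2 the minimum is reached in a single step; smaller µ spreads the approach over several steps.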
Steepest Descent
■ Newton's method is geared to reach the minimum in just a few steps.
■ The desire is to gracefully traverse the performance surface in a controlled manner.
■ Steepest descent is defined as moving in the direction of the negative gradient, without regard to $\mathbf{A}^{-1}$ as in Newton's method.
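In the same notation, the steepest-descent update consistent with $\nabla(\xi) = 2(\mathbf{A}\mathbf{W} - \mathbf{C})$ is

$$\mathbf{W}_{n+1} = \mathbf{W}_n - \mu\,\nabla(\xi) = \mathbf{W}_n + 2\mu\,(\mathbf{C} - \mathbf{A}\mathbf{W}_n)$$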
µ – The Step Size
■ The step size µ is introduced to slow the adaptation process.
■ µ trades speed for stability: a small µ increases the "seek" time, while a large µ speeds convergence but can destabilize the loop.
■ Algorithms that regulate µ "on the fly" are common.
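One common example of such regulation, not named on the slide, is the normalized-LMS step, which scales µ by the instantaneous input power so that strong inputs do not destabilize the loop:

$$\mu_n = \frac{\tilde{\mu}}{\varepsilon + \mathbf{X}_n^T \mathbf{X}_n}$$

where the small constant ε guards against division by zero.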
Steepest Descent
[Surface plot: steepest-descent trajectory on ξ over (w0, w1), with w0 and w1 from -20 to 20 and ξ from 0 to 600.]
Least Mean Squares
■ Updating the weights with the exact gradient is expensive: it requires the expectations A and C.
■ Instead, neglect the expected value and use the gradient of e² in place of ξ.
■ Borrowing from steepest descent, the weight update becomes the sketch below.
■ Elegant!
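Dropping the expectation, the instantaneous gradient is $\hat{\nabla} = \nabla(e_n^2) = -2\,e_n\,\mathbf{X}_n$, which gives the LMS update $\mathbf{W}_{n+1} = \mathbf{W}_n + 2\mu\,e_n\,\mathbf{X}_n$. A minimal runnable Python sketch; the unknown system h, the signals, and the µ value are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    N, L, mu = 5000, 4, 0.01
    x = rng.standard_normal(N)                 # illustrative input
    h = np.array([0.5, -0.3, 0.2, 0.1])        # assumed unknown system to identify
    d = np.convolve(x, h)[:N]                  # desired response

    w = np.zeros(L)                            # weight vector W_n
    for n in range(L, N):
        X_n = x[n - L + 1:n + 1][::-1]         # tap vector [x[n], ..., x[n-L+1]]
        e = d[n] - w @ X_n                     # error e = d - y
        w = w + 2 * mu * e * X_n               # LMS update
    print(w)                                   # ≈ h once converged

No correlation matrices and no matrix inversion: one multiply-accumulate pass per sample.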
Section A
Acoustic Compensation Considerations
Acoustic Considerations
■ Enhancements to adaptive algorithms in the interest of acoustic cancellation
■ USPTO #2043416
  "According to the present invention the sound oscillations, which are to be silenced, are taken in by a receiver and reproduced by a reproducing apparatus in the form of sounds having an opposite phase." – Paul Lueg, 1936
Electronic Sound Absorber
■ 1953, Olson and May
Feedforward
■ May also be used for offline modeling
Secondary Source Transfer Function
Filtered-X
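As background from [3]: the filtered-X LMS algorithm passes the reference x[n] through an estimate $\hat{s}[n]$ of the secondary-path response before it enters the weight update, so the adaptation "sees" the signal as it arrives at the error sensor:

$$x'[n] = \hat{s}[n] * x[n], \qquad \mathbf{W}_{n+1} = \mathbf{W}_n + 2\mu\,e_n\,\mathbf{X}'_n$$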
2nd Source Feedback
2nd Source Feedback Compensation [3]
[Block diagram, after [3]: reference x passes through adaptive filter w_k with a leaky update; plant P produces d; secondary path H carries y to the error summation forming e; feedback path F couples the secondary source back to the reference, and compensation filter C with delay D removes that coupling.]
Section P
Master’s Project: Integrated Noise Cancellation with the Least Mean Square Algorithm and the Logarithmic Number System
Integrated Noise Cancellation
■ Integration of noise-cancellation hardware into portable devices
■ Higher level of integration → cheaper
■ LNS + acoustics = ☺
Deliverable Choices
■ EDP groundwork
  • Why is the project important?
  • Will it make money?
  • How much battery life?
  • Performance specifications
■ System model
  • Finite-precision analysis
  • Performance measures
■ VLSI building blocks
■ Additional filter modifications
Filter Architecture
Building Blocks
■ Logarithmic multiplier (A_L + B_L)
■ Logarithmic add operation
  • Inverse-log lookup ROM design
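A sketch of the idea in Python under assumed conventions (base-2 logs, positive operands): multiplication reduces to adding the log-domain words, while addition needs a lookup of Φ(r) = log2(1 + 2^(-r)), modeled here by a table standing in for the inverse-log ROM. The names and table resolution are illustrative.

    import numpy as np

    TABLE_STEP = 1 / 256                           # assumed ROM resolution
    r_grid = np.arange(0, 32, TABLE_STEP)
    PHI = np.log2(1 + 2.0 ** (-r_grid))            # stands in for the lookup ROM

    def lns_mul(aL, bL):
        """log2(a*b) = aL + bL: an LNS multiply is just an adder."""
        return aL + bL

    def lns_add(aL, bL):
        """log2(a + b) = max + PHI(|aL - bL|) for positive a, b."""
        hi, lo = max(aL, bL), min(aL, bL)
        idx = min(int(round((hi - lo) / TABLE_STEP)), len(PHI) - 1)
        return hi + PHI[idx]

    aL, bL = np.log2(3.0), np.log2(5.0)
    print(2 ** lns_mul(aL, bL))                    # ≈ 15.0
    print(2 ** lns_add(aL, bL))                    # ≈ 8.0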
LMS Convergence
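As context from [2]: steepest descent on the quadratic surface, and LMS in the mean, converge when the step size satisfies $0 < \mu < 1/\lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of A; smaller µ converges more slowly but with less excess (misadjustment) error.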
Pipelining LMS (Parhi)
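As context from [4]: pipelining the LMS loop inserts registers into the error-feedback path, leading to delayed-LMS style updates of the form $\mathbf{W}_{n+1} = \mathbf{W}_n + 2\mu\,e_{n-D}\,\mathbf{X}_{n-D}$, where D is the pipeline depth; Parhi's relaxed look-ahead transformations trade a bounded convergence penalty for a higher clock rate.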
Master’s Project Tips
■ Do
  • Decide how much breadth vs. depth
  • Come up with a clear deliverable list
  • Work in the lab
  • Have fun ☺
■ Don’t
  • Sprawl
  • Deviate
  • Underestimate
The End
Section X
Extraordinary Adaptive Architectures
IIR Implementation
■ The primary filter can be an IIR implementation
■ The starting point is the same
IIR Implementation [2]
■ Specially crafted vectors simplify the IIR case
■ From the definition of the LMS gradient earlier
IIR Implementation
■ Define α and β
■ Define the gradient in terms of α and β
■ Explain the diagonal step-size matrix M
■ Convergence?
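A sketch of the standard construction from [2], assuming the adaptive IIR output

$$y[n] = \sum_{l=0}^{L} a_l\,x[n-l] + \sum_{k=1}^{K} b_k\,y[n-k]$$

Define α and β as the derivatives of the output with respect to the coefficients; they obey the recursions

$$\alpha_l[n] = \frac{\partial y[n]}{\partial a_l} = x[n-l] + \sum_{k=1}^{K} b_k\,\alpha_l[n-k], \qquad \beta_k[n] = \frac{\partial y[n]}{\partial b_k} = y[n-k] + \sum_{m=1}^{K} b_m\,\beta_k[n-m]$$

Stacking the α's and β's into a derivative vector $\nabla y[n]$ gives the gradient estimate $\hat{\nabla} = -2\,e_n\,\nabla y[n]$ and the update $\mathbf{W}_{n+1} = \mathbf{W}_n + 2\,\mathbf{M}\,e_n\,\nabla y[n]$, with M a diagonal matrix of per-coefficient step sizes. Convergence is harder to guarantee than in the FIR case: the error surface is generally non-quadratic, and the poles must be kept inside the unit circle.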