Recursive Least-Squares Adaptive Filters
Dr. Yogananda Isukapalli
Contents

– Introduction
– Least-Squares problem
– Derivation of RLS algorithm
  -- The Matrix Inversion Problem
– Convergence analysis of the RLS algorithm
– Application of RLS Algorithm
  -- Adaptive Equalization
Introduction
• The linear least-squares problem was probably first developed and solved by Gauss (1795) in his work on mechanics.
• L-S solutions have attractive properties:
– can be explicitly evaluated in closed form
– can be recursively updated as more input data is made available
– are maximum-likelihood estimators in the presence of Gaussian measurement noise
• Over the last several years, a wide variety of adaptive algorithms based on the least-squares criterion have been derived:
– RLS (Recursive Least Squares) algorithms and corresponding fast versions
  • FTF (Fast Transversal Filter)
  • FAEST (Fast A posteriori Error Sequential Technique)
– QR and inverse QR algorithms
– LSL (Least Squares Lattice) and QR-decomposition-based lattice algorithms
Least-Squares Problem
Consider a standard observation model in additive noise.
$$d(i) = \mathbf{U}_i^H \mathbf{W} + n(i)$$
d(i) … noisy measurement, linearly related to W
W … the unknown vector to be estimated
U_i … a given column vector
n(i) … the additive noise

In a practical scenario, W can be the weight vector, U_i the data vector, d(i) the observed output of a sensor array, and n(i) the noise added at each sensor.
If we have N+1 measurements, then they can be grouped together into a single matrix expression:
$$\mathbf{d} = \mathbf{U}\mathbf{W} + \mathbf{n}
\;\Longleftrightarrow\;
\begin{bmatrix} d(0) \\ d(1) \\ \vdots \\ d(N) \end{bmatrix}
=
\begin{bmatrix} \mathbf{u}_0^H \\ \mathbf{u}_1^H \\ \vdots \\ \mathbf{u}_N^H \end{bmatrix}
\mathbf{W}
+
\begin{bmatrix} n(0) \\ n(1) \\ \vdots \\ n(N) \end{bmatrix}$$
The problem is to minimize the distance between $\mathbf{d}$ and $\mathbf{U}\hat{\mathbf{W}}$, where $\hat{\mathbf{W}}$ is the estimated weight vector.
A least squares solution to the above problem is,
$$\min_{\hat{\mathbf{W}}} \left\| \mathbf{d} - \mathbf{U}\hat{\mathbf{W}} \right\|^2
\quad\Longrightarrow\quad
\hat{\mathbf{W}} = \left( \mathbf{U}^H \mathbf{U} \right)^{-1} \mathbf{U}^H \mathbf{d}$$
Let Z be the cross correlation vector and Φ be the covariance matrix.
$$\mathbf{Z} = \mathbf{U}^H \mathbf{d}, \qquad \Phi = \mathbf{U}^H \mathbf{U}, \qquad \hat{\mathbf{W}} = \Phi^{-1}\,\mathbf{Z}$$
The above equations could be solved on a block-by-block basis, but we are interested in a recursive determination of the tap-weight estimate $\hat{\mathbf{W}}$.
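As a concrete illustration of the block solution, here is a minimal NumPy sketch; the synthetic data, noise level, and variable names (`U`, `d`, `W_hat`) are assumptions for the example, not part of the original slides.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200                       # filter length, number of measurements

W_true = rng.standard_normal(M)     # the unknown vector W
U = rng.standard_normal((N, M))     # data matrix: row i plays the role of U_i^H
d = U @ W_true + 0.1 * rng.standard_normal(N)   # noisy measurements d(i)

# Block least-squares solution: W_hat = (U^H U)^{-1} U^H d = Phi^{-1} Z
Phi = U.T @ U                       # covariance matrix
Z = U.T @ d                         # cross-correlation vector
W_hat = np.linalg.solve(Phi, Z)     # solve the normal equations

print(np.allclose(W_hat, W_true, atol=0.1))    # close to the true weights
```

In practice `np.linalg.lstsq(U, d)` is preferred over forming Φ explicitly, since it avoids squaring the condition number of the problem.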
RLS algorithm

– The RLS algorithm solves the least-squares problem recursively
– At each iteration, when a new data sample is available, the filter tap weights are updated
– This leads to savings in computation
– More rapid convergence is also achieved
The derivation of the RLS algorithm

The aim is to find a recursive solution to the following minimization problem:
$$\varepsilon(n) = \sum_{i=1}^{n} \lambda^{n-i} \left| e(i) \right|^2$$

$$e(i) = d(i) - y(i) = d(i) - \mathbf{W}^H(n)\,\mathbf{U}(i)$$

$$\mathbf{U}(i) = \left[ u(i),\, u(i-1),\, \ldots,\, u(i-M+1) \right]^T$$

$$\mathbf{W}(n) = \left[ w_0(n),\, w_1(n),\, \ldots,\, w_{M-1}(n) \right]^T, \qquad M \text{ is the length of the filter.}$$
– The weighting factor λ has the property that 0 < λ ≤ 1, with λ typically close to 1.
– The weighting factor is used to "forget" data samples in the distant past; a usual value is λ = 0.99.
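A worked consequence, not stated on the slide but following directly from the geometric series: the total weight the cost function assigns to the past is

$$\sum_{i=0}^{\infty} \lambda^{i} = \frac{1}{1-\lambda},$$

so RLS behaves roughly like a sliding window of effective length $1/(1-\lambda)$; the usual value λ = 0.99 corresponds to a memory of about 100 samples.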
– The optimum value for the tap-weight vector is defined by the normal equations.
$$\Phi(n)\,\hat{\mathbf{W}}(n) = \mathbf{Z}(n) \;\Longleftrightarrow\; \hat{\mathbf{W}}(n) = \Phi^{-1}(n)\,\mathbf{Z}(n)$$

$$\Phi(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{U}(i)\,\mathbf{U}^H(i) + \delta\,\lambda^{n}\,\mathbf{I}$$

$$\Phi(n) = \lambda\,\Phi(n-1) + \mathbf{U}(n)\,\mathbf{U}^H(n)$$

$$\mathbf{Z}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{U}(i)\,d^{*}(i) \qquad \text{(the cross-correlation term)}$$

$$\mathbf{Z}(n) = \lambda\,\mathbf{Z}(n-1) + \mathbf{U}(n)\,d^{*}(n)$$
– Then we apply the matrix inversion lemma to the recursive form of the correlation matrix, which makes it possible to invert the correlation matrix recursively
The matrix inversion lemma
– A and B are two positive-definite M-by-M matrices
– D is a positive-definite N-by-N matrix
– C is an M-by-N matrix.

Let A, B, C and D be related as

$$\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\,\mathbf{D}^{-1}\,\mathbf{C}^H.$$

Then the inverse of A is given by

$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\,\mathbf{C}\left( \mathbf{D} + \mathbf{C}^H\,\mathbf{B}\,\mathbf{C} \right)^{-1}\mathbf{C}^H\,\mathbf{B}.$$
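The lemma is easy to verify numerically; the sketch below is illustrative only, with arbitrary sizes and random positive-definite matrices built as X Xᵀ + cI.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 5, 3

X = rng.standard_normal((M, M))
B = X @ X.T + M * np.eye(M)          # positive-definite M-by-M
Y = rng.standard_normal((N, N))
D = Y @ Y.T + N * np.eye(N)          # positive-definite N-by-N
C = rng.standard_normal((M, N))      # M-by-N

A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T

# Lemma: A^{-1} = B - B C (D + C^H B C)^{-1} C^H B
A_inv = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B

print(np.allclose(A_inv, np.linalg.inv(A)))     # True
```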
Application of the matrix inversion lemma to the present problem is based on the following definitions:
$$\mathbf{A} = \Phi(n), \qquad \mathbf{B}^{-1} = \lambda\,\Phi(n-1), \qquad \mathbf{C} = \mathbf{U}(n), \qquad \mathbf{D} = 1$$
– These definitions are substituted into the matrix inversion lemma
– After some calculation we get the following equations
$$\mathbf{P}(n) = \Phi^{-1}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{K}(n)\,\mathbf{U}^H(n)\,\mathbf{P}(n-1) \qquad \text{(Riccati equation)}$$

$$\mathbf{K}(n) = \frac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{U}(n)}{1 + \lambda^{-1}\,\mathbf{U}^H(n)\,\mathbf{P}(n-1)\,\mathbf{U}(n)}$$
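These two recursions can be checked against direct inversion of the correlation matrix; the following sketch is a test under assumed parameters (λ, δ, and the random data are arbitrary choices). The gain is written in the equivalent form P(n−1)U(n) / (λ + Uᴴ(n)P(n−1)U(n)), obtained by multiplying the numerator and denominator of K(n) by λ.

```python
import numpy as np

rng = np.random.default_rng(2)
M, lam, delta = 3, 0.99, 0.01

Phi = delta * np.eye(M)              # Phi(0) = delta * I  (direct accumulation)
P = np.eye(M) / delta                # P(0) = delta^{-1} I (recursive inverse)

for _ in range(100):
    u = rng.standard_normal((M, 1))              # U(n)
    Phi = lam * Phi + u @ u.T                    # Phi(n) = lam Phi(n-1) + U U^H
    k = P @ u / (lam + u.T @ P @ u)              # gain vector K(n)
    P = (P - k @ (u.T @ P)) / lam                # Riccati update for P(n)

print(np.allclose(P, np.linalg.inv(Phi)))        # True: P(n) tracks Phi^{-1}(n)
```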
– Now we have a recursive solution for the inverse of the correlation matrix
– Next we need an update method for the tap-weight vector

• Time update for the tap-weight vector
$$\hat{\mathbf{W}}(n) = \Phi^{-1}(n)\,\mathbf{Z}(n) = \mathbf{P}(n)\,\mathbf{Z}(n) = \lambda\,\mathbf{P}(n)\,\mathbf{Z}(n-1) + \mathbf{P}(n)\,\mathbf{U}(n)\,d^{*}(n)$$
By substituting the Riccati equation into the first term on the right side of the equation,
$$\hat{\mathbf{W}}(n) = \mathbf{P}(n-1)\,\mathbf{Z}(n-1) - \mathbf{K}(n)\,\mathbf{U}^H(n)\,\mathbf{P}(n-1)\,\mathbf{Z}(n-1) + \mathbf{P}(n)\,\mathbf{U}(n)\,d^{*}(n)$$

$$= \hat{\mathbf{W}}(n-1) - \mathbf{K}(n)\,\mathbf{U}^H(n)\,\hat{\mathbf{W}}(n-1) + \mathbf{P}(n)\,\mathbf{U}(n)\,d^{*}(n)$$

Then, using the fact that

$$\mathbf{K}(n) = \mathbf{P}(n)\,\mathbf{U}(n),$$

we obtain:
The desired recursion equation for updating the tap-weight vector:

$$\hat{\mathbf{W}}(n) = \hat{\mathbf{W}}(n-1) + \mathbf{K}(n)\left[ d^{*}(n) - \mathbf{U}^H(n)\,\hat{\mathbf{W}}(n-1) \right] = \hat{\mathbf{W}}(n-1) + \mathbf{K}(n)\,\xi^{*}(n)$$

where the a priori estimation error is

$$\xi(n) = d(n) - \hat{\mathbf{W}}^H(n-1)\,\mathbf{U}(n).$$
The block diagram shown on the next page illustrates the use of the a priori estimation error in the RLS algorithm. The a priori estimation error ξ(n) is in general different from the a posteriori estimation error e(n):
$$e(n) = d(n) - \hat{\mathbf{W}}^H(n)\,\mathbf{U}(n)$$
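A step worth noting, though not spelled out on the slides: substituting the weight update into the definition of e(n) ties the two errors together as

$$e(n) = \left[ 1 - \mathbf{K}^H(n)\,\mathbf{U}(n) \right] \xi(n),$$

and since the factor 1 − Kᴴ(n)U(n) lies in (0, 1], the a posteriori error is never larger in magnitude than the a priori error.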
• Block diagram of RLS algorithm
Summary of the RLS algorithm

Initialize the algorithm by setting

$$\hat{\mathbf{W}}(0) = \mathbf{0}, \qquad \mathbf{P}(0) = \delta^{-1}\,\mathbf{I},$$

$$\delta = \begin{cases} \text{small positive constant} & \text{for high SNR} \\ \text{large positive constant} & \text{for low SNR} \end{cases}$$

For each instant of time n = 1, 2, ..., compute

$$\boldsymbol{\pi}(n) = \mathbf{P}(n-1)\,\mathbf{U}(n),$$

$$\mathbf{K}(n) = \frac{\boldsymbol{\pi}(n)}{\lambda + \mathbf{U}^H(n)\,\boldsymbol{\pi}(n)},$$

$$\xi(n) = d(n) - \hat{\mathbf{W}}^H(n-1)\,\mathbf{U}(n),$$

$$\hat{\mathbf{W}}(n) = \hat{\mathbf{W}}(n-1) + \mathbf{K}(n)\,\xi^{*}(n),$$

$$\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{K}(n)\,\mathbf{U}^H(n)\,\mathbf{P}(n-1).$$
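A minimal NumPy transcription of this summary, assuming real-valued signals (for complex data the plain transposes become conjugate transposes and ξ*(n) is the conjugate of the error); the function name and test data are hypothetical:

```python
import numpy as np

def rls(u, d, M, lam=0.99, delta=0.01):
    """Exponentially weighted RLS filter.
    u, d : input and desired sequences; M : number of taps;
    lam : forgetting factor; delta : P(0) = I / delta."""
    w = np.zeros(M)                      # W_hat(0) = 0
    P = np.eye(M) / delta                # P(0) = delta^{-1} I
    xi = np.zeros(len(u))                # a priori errors
    for n in range(M - 1, len(u)):
        U = u[n - M + 1:n + 1][::-1]     # tap vector [u(n), ..., u(n-M+1)]
        pi = P @ U                       # pi(n) = P(n-1) U(n)
        k = pi / (lam + U @ pi)          # gain vector K(n)
        xi[n] = d[n] - w @ U             # a priori estimation error
        w = w + k * xi[n]                # tap-weight update
        P = (P - np.outer(k, U @ P)) / lam   # Riccati update
    return w, xi

# Hypothetical usage: identify an unknown 4-tap FIR system
rng = np.random.default_rng(3)
u = rng.standard_normal(5000)
h = np.array([0.5, -0.4, 0.3, 0.2])              # the "unknown" system
d = np.convolve(u, h)[:len(u)] + 0.01 * rng.standard_normal(len(u))
w, xi = rls(u, d, M=4)
print(np.round(w, 3))                            # close to h
```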
Convergence Analysis of the RLS algorithm

The desired response and the tap-input vector are assumed to be related by the multiple linear regression model.
Block Diagram of Linear Regression Model.
Assumption I: The desired response is generated by

$$d(n) = \mathbf{w}_0^H\,\mathbf{u}(n) + e_0(n),$$

where
– w₀ is the regression parameter vector;
– e₀(n) is the measurement noise;
– e₀(n) is white with zero mean and variance σ₀²;
– e₀(n) is independent of the regressor u(n).
The exponential weighting factor λ is unity.
Assumption II: The input signal vector u(n) is drawn from a stochastic process which is ergodic in the autocorrelation function.
Assumption III: The fluctuations in the weight-error vector must be slower compared with those of the input signal vector u(n).
Convergence in the Mean Value
With the help of the above assumptions it can be shown that the RLS algorithm is convergent in the mean sense for n ≥ M, where M is the number of taps in the adaptive transversal filter.
$$E\left[ \hat{\mathbf{w}}(n) \right] = \mathbf{w}_0 - \frac{\delta}{n}\,\mathbf{R}^{-1}\mathbf{w}_0, \qquad n \ge M.$$
• Unlike the LMS algorithm, the RLS algorithm does not have to wait for n to be infinitely large for convergence.
• The mean-squared error in the weight vector

The weight-error vector is defined as

$$\boldsymbol{\varepsilon}(n) = \hat{\mathbf{w}}(n) - \mathbf{w}_0.$$
Expression for the mean-squared error in the weight vector,
$$E\left[ \boldsymbol{\varepsilon}^H(n)\,\boldsymbol{\varepsilon}(n) \right] = \frac{\sigma_0^2}{n} \sum_{i=1}^{M} \frac{1}{\lambda_i},$$

where the λᵢ are the eigenvalues of the correlation matrix of the input u(n).
• Ill-conditioned least-squares problems may lead to poor convergence properties.

• The estimate $\hat{\mathbf{w}}(n)$ converges in the norm to the parameter vector $\mathbf{w}_0$ of the multiple linear regression model almost linearly with time.
Considerations on convergence:
• The RLS algorithm converges in about 2M iterations, where M is the length of the transversal filter.
• RLS typically converges an order of magnitude faster than the LMS algorithm.
• RLS produces zero misadjustment when operating in a stationary environment (as n goes to infinity, only the measurement error affects the precision).
• Convergence is independent of the eigenvalue spread.
Learning curve of the RLS algorithm
Example of RLS algorithm: Adaptive equalization
• Block diagram of adaptive equalizer
Impulse response of the channel is the raised cosine

$$h(n) = \begin{cases} \dfrac{1}{2}\left[ 1 + \cos\!\left( \dfrac{2\pi}{W}\,(n-2) \right) \right], & n = 1, 2, 3 \\[4pt] 0, & \text{otherwise,} \end{cases}$$

• where W controls the amount of amplitude distortion and therefore the eigenvalue spread produced by the channel.
• 11 taps, forgetting factor λ = 1

The experiment is done in two parts:
• Part 1: signal-to-noise ratio is high (30 dB)
• Part 2: signal-to-noise ratio is low (10 dB)
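A sketch of this experiment, reusing the `rls` routine from the summary slide; the raised-cosine channel above is assumed, and the decision delay, channel parameter W = 3.1, and training length are illustrative choices, not necessarily those of the original experiment.

```python
import numpy as np

rng = np.random.default_rng(4)
Wch, M, n_train, snr_db = 3.1, 11, 1000, 30     # part 1: high SNR (30 dB)

# Raised-cosine channel h(n), n = 1, 2, 3; Wch sets the eigenvalue spread
n = np.arange(1, 4)
h = 0.5 * (1 + np.cos(2 * np.pi / Wch * (n - 2)))

a = rng.choice([-1.0, 1.0], size=n_train)       # random +/-1 training symbols
x = np.convolve(a, h)[:n_train]                 # channel output
x += x.std() * 10 ** (-snr_db / 20) * rng.standard_normal(n_train)

delay = 7                                       # equalizer decision delay
d = np.concatenate([np.zeros(delay), a])[:n_train]
w, xi = rls(x, d, M=M, lam=1.0)                 # 11 taps, forgetting factor 1
print("mean squared error, last 100 samples:", np.mean(xi[-100:] ** 2))
```

Setting `snr_db = 10` gives the low-SNR condition of part 2.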
Results of Part 1
Results of Part 2
– Part 1 summary
• Convergence of the RLS algorithm is attained in about 20 iterations (twice the number of taps).
• The rate of convergence of the RLS algorithm is relatively insensitive to variations in the eigenvalue spread.
• The steady-state value of the averaged squared error produced by the RLS algorithm is small, confirming that the RLS algorithm produces zero misadjustment.

– Part 2 summary
• The rate of convergence is nearly the same for the LMS and RLS algorithms in a noisy environment.