
SGN 21006 Advanced Signal Processing: Lecture 6
Linear Prediction

Ioan Tabus
Department of Signal Processing
Tampere University of Technology, Finland

Topics: PREDICTION, PREDICTOR, PREDICTION ERROR; FORWARD and BACKWARD prediction; the Levinson – Durbin algorithm; lattice filters; a new AR parametrization: reflection coefficients.


Outline

- Dealing with three notions: PREDICTION, PREDICTOR, PREDICTION ERROR;
- FORWARD versus BACKWARD: predicting the future versus (with improper terminology) "predicting" the past;
- Fast computation of the AR parameters: the Levinson – Durbin algorithm;
- New AR parametrization: reflection coefficients;
- Lattice filters.

Reference: Chapter 3 of S. Haykin, Adaptive Filter Theory, Prentice Hall, 2002.


Notations, Definitions and Terminology

- Time series:
  $u(1), u(2), u(3), \ldots, u(n-1), u(n), u(n+1), \ldots$

- Linear prediction of order M – FORWARD PREDICTION:
  $\hat{u}(n) = w_1 u(n-1) + w_2 u(n-2) + \ldots + w_M u(n-M) = \sum_{k=1}^{M} w_k u(n-k) = \mathbf{w}^T \mathbf{u}(n-1)$

- Regressor vector:
  $\mathbf{u}(n-1) = [\,u(n-1) \ u(n-2) \ \ldots \ u(n-M)\,]^T$

- Predictor vector of order M – FORWARD PREDICTOR:
  $\mathbf{w} = [\,w_1 \ w_2 \ \ldots \ w_M\,]^T$
  $\mathbf{a}_M = [\,1 \ {-w_1} \ {-w_2} \ \ldots \ {-w_M}\,]^T = [\,a_{M,0} \ a_{M,1} \ a_{M,2} \ \ldots \ a_{M,M}\,]^T$
  and thus $a_{M,0} = 1$, $a_{M,1} = -w_1$, $a_{M,2} = -w_2$, ..., $a_{M,M} = -w_M$.

- Prediction error of order M – FORWARD PREDICTION ERROR:
  $f_M(n) = u(n) - \hat{u}(n) = u(n) - \mathbf{w}^T \mathbf{u}(n-1) = \mathbf{a}_M^T \begin{bmatrix} u(n) \\ \mathbf{u}(n-1) \end{bmatrix} = \mathbf{a}_M^T \mathbf{u}(n)$


Notations, Definitions and Terminology

$r(k) = E[u(n)u(n+k)]$ – autocorrelation function

$\mathbf{R} = E[\mathbf{u}(n-1)\mathbf{u}^T(n-1)] = E \begin{bmatrix} u(n-1) \\ u(n-2) \\ \vdots \\ u(n-M) \end{bmatrix} [\,u(n-1) \ u(n-2) \ \ldots \ u(n-M)\,]$

$= \begin{bmatrix} E\,u(n-1)u(n-1) & E\,u(n-1)u(n-2) & \ldots & E\,u(n-1)u(n-M) \\ E\,u(n-2)u(n-1) & E\,u(n-2)u(n-2) & \ldots & E\,u(n-2)u(n-M) \\ \vdots & \vdots & & \vdots \\ E\,u(n-M)u(n-1) & E\,u(n-M)u(n-2) & \ldots & E\,u(n-M)u(n-M) \end{bmatrix}$

$\mathbf{R} = \begin{bmatrix} r(0) & r(1) & \ldots & r(M-1) \\ r(1) & r(0) & \ldots & r(M-2) \\ \vdots & \vdots & & \vdots \\ r(M-1) & r(M-2) & \ldots & r(0) \end{bmatrix}$ – autocorrelation matrix

$\mathbf{r} = E[\mathbf{u}(n-1)u(n)] = [\,r(1) \ r(2) \ r(3) \ \ldots \ r(M)\,]^T$ – autocorrelation vector

$\mathbf{r}^B = E[\mathbf{u}(n-1)u(n-M-1)] = [\,r(M) \ r(M-1) \ r(M-2) \ \ldots \ r(1)\,]^T$

The superscript B denotes the vector reversing (Backward) operator, i.e., for any vector $\mathbf{x}$,
$\mathbf{x}^B = [\,x(1) \ x(2) \ x(3) \ \ldots \ x(M)\,]^B = [\,x(M) \ x(M-1) \ x(M-2) \ \ldots \ x(1)\,]^T$
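In practice the autocorrelations are estimated from data. A minimal numpy sketch follows (the biased estimator $r(k) = \frac{1}{T}\sum_{n=k+1}^{T} u(n)u(n-k)$ is the one quoted later on the Levinson – Durbin slide; the random test signal and order M = 4 are placeholders):

```python
import numpy as np

def autocorr(u, M):
    """Biased autocorrelation estimates r(0), ..., r(M) from T samples."""
    T = len(u)
    return np.array([u[k:] @ u[:T - k] / T for k in range(M + 1)])

rng = np.random.default_rng(0)
u = rng.standard_normal(1000)     # placeholder data
M = 4
r = autocorr(u, M)
R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])  # Toeplitz R
r_vec = r[1:]                     # r   = [r(1) ... r(M)]^T
r_back = r_vec[::-1]              # r^B = [r(M) ... r(1)]^T
```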


Optimal forward linear prediction

- Optimality criterion:
  $J(\mathbf{w}) = E[f_M(n)]^2 = E[u(n) - \mathbf{w}^T \mathbf{u}(n-1)]^2$
  $J(\mathbf{a}_M) = E[f_M(n)]^2 = E\left[\mathbf{a}_M^T \begin{bmatrix} u(n) \\ \mathbf{u}(n-1) \end{bmatrix}\right]^2$

- Optimal solution:
  Optimal Forward Predictor: $\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{r}$
  Forward Prediction Error Power: $P_M = r(0) - \mathbf{r}^T \mathbf{w}_o$

- Two derivations of the optimal solution:
  1. Transforming the criterion into a quadratic form:

  $J(\mathbf{w}) = E[u(n) - \mathbf{w}^T\mathbf{u}(n-1)]^2 = E[u(n) - \mathbf{w}^T\mathbf{u}(n-1)][u(n) - \mathbf{u}(n-1)^T\mathbf{w}]$
  $= E[u(n)]^2 - 2E[u(n)\mathbf{u}(n-1)^T]\mathbf{w} + \mathbf{w}^T E[\mathbf{u}(n-1)\mathbf{u}(n-1)^T]\mathbf{w}$
  $= r(0) - 2\mathbf{r}^T\mathbf{w} + \mathbf{w}^T\mathbf{R}\mathbf{w}$
  $= r(0) - \mathbf{r}^T\mathbf{R}^{-1}\mathbf{r} + (\mathbf{w} - \mathbf{R}^{-1}\mathbf{r})^T \mathbf{R} (\mathbf{w} - \mathbf{R}^{-1}\mathbf{r}) \qquad (1)$
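The closed-form solution can be evaluated numerically; the sketch below solves $\mathbf{R}\mathbf{w} = \mathbf{r}$ instead of forming $\mathbf{R}^{-1}$ explicitly (an implementation choice, not from the slides):

```python
import numpy as np

def forward_predictor(r):
    """Optimal predictor w_o and error power P_M from r = [r(0), ..., r(M)]."""
    M = len(r) - 1
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    w_o = np.linalg.solve(R, r[1:])   # w_o = R^{-1} r
    P_M = r[0] - r[1:] @ w_o          # P_M = r(0) - r^T w_o
    return w_o, P_M
```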


Augmented Wiener-Hopf equations

- The matrix $\mathbf{R}$ is positive semi-definite, because
  $\mathbf{x}^T \mathbf{R}\, \mathbf{x} = \mathbf{x}^T E[\mathbf{u}(n)\mathbf{u}(n)^T]\, \mathbf{x} = E[\mathbf{u}(n)^T\mathbf{x}]^2 \geq 0 \quad \forall \mathbf{x}$

  Therefore the quadratic form on the right-hand side of (1), $(\mathbf{w} - \mathbf{R}^{-1}\mathbf{r})^T \mathbf{R} (\mathbf{w} - \mathbf{R}^{-1}\mathbf{r})$, attains its minimum when $\mathbf{w}_o - \mathbf{R}^{-1}\mathbf{r} = 0$, i.e.,

  $\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{r}$

  For the predictor $\mathbf{w}_o$, the optimal criterion in (1) equals

  $P_M = r(0) - \mathbf{r}^T \mathbf{R}^{-1}\mathbf{r} = r(0) - \mathbf{r}^T \mathbf{w}_o$


Augmented Wiener-Hopf equations

- Derivation based on optimal Wiener filter design.
  The optimal predictor evaluation can be rephrased as the following Wiener filter design problem:
  - find the FIR filtering process $y(n) = \mathbf{w}^T\mathbf{u}(n)$,
  - "as close as possible" to the desired signal $d(n) = u(n+1)$, i.e.,
  - minimizing the criterion $E[d(n) - y(n)]^2 = E[u(n+1) - \mathbf{w}^T\mathbf{u}(n)]^2$.

  Then the optimal solution is given by $\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p}$, where $\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}(n)^T]$ and
  $\mathbf{p} = E[d(n)\mathbf{u}(n)] = E[u(n+1)\mathbf{u}(n)] = E[u(n)\mathbf{u}(n-1)] = \mathbf{r}$, i.e.,

  $\mathbf{w}_o = \mathbf{R}^{-1}\mathbf{r}$


Augmented Wiener-Hopf equations

The optimal predictor filter solution $\mathbf{w}_o$ and the optimal prediction error power satisfy

$r(0) - \mathbf{r}^T\mathbf{w}_o = P_M$
$\mathbf{R}\mathbf{w}_o - \mathbf{r} = 0$

which can be written in block matrix form:

$\begin{bmatrix} r(0) & \mathbf{r}^T \\ \mathbf{r} & \mathbf{R} \end{bmatrix} \begin{bmatrix} 1 \\ -\mathbf{w}_o \end{bmatrix} = \begin{bmatrix} P_M \\ \mathbf{0} \end{bmatrix}$

With the previous notations,

$\begin{bmatrix} 1 \\ -\mathbf{w}_o \end{bmatrix} = \mathbf{a}_M, \qquad \begin{bmatrix} r(0) & \mathbf{r}^T \\ \mathbf{r} & \mathbf{R} \end{bmatrix} = \mathbf{R}_{M+1}$ – autocorrelation matrix of dimensions $(M+1)\times(M+1)$

Finally, the augmented Wiener-Hopf equations for the optimal forward prediction error filter are

$\mathbf{R}_{M+1}\mathbf{a}_M = \begin{bmatrix} P_M \\ \mathbf{0} \end{bmatrix}$

or

$\begin{bmatrix} r(0) & r(1) & \ldots & r(M) \\ r(1) & r(0) & \ldots & r(M-1) \\ \vdots & \vdots & & \vdots \\ r(M) & r(M-1) & \ldots & r(0) \end{bmatrix} \begin{bmatrix} a_{M,0} \\ a_{M,1} \\ a_{M,2} \\ \vdots \\ a_{M,M} \end{bmatrix} = \begin{bmatrix} P_M \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$

Whenever $\mathbf{R}_M$ is nonsingular and $a_{M,0}$ is set to 1, there are unique solutions $\mathbf{a}_M$ and $P_M$.
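The block equation can be checked numerically; the short sketch below uses an arbitrary positive-definite example sequence (assumed values, for illustration only):

```python
import numpy as np

r = np.array([2.0, 1.2, 0.6, 0.2])   # example r(0..M), M = 3
M = len(r) - 1
R_aug = np.array([[r[abs(i - j)] for j in range(M + 1)] for i in range(M + 1)])
w_o = np.linalg.solve(R_aug[1:, 1:], r[1:])   # lower-right block of R_{M+1} is R
a_M = np.concatenate(([1.0], -w_o))           # a_M = [1, -w_o^T]^T
print(R_aug @ a_M)                            # ~ [P_M, 0, 0, 0]
print(r[0] - r[1:] @ w_o)                     # P_M, matches the first entry
```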


Optimal backward linear prediction

- Linear backward prediction of order M – BACKWARD PREDICTION:
  $\hat{u}_b(n-M) = g_1 u(n) + g_2 u(n-1) + \ldots + g_M u(n-M+1) = \sum_{k=1}^{M} g_k u(n-k+1) = \mathbf{g}^T\mathbf{u}(n)$

  where the BACKWARD PREDICTOR is
  $\mathbf{g} = [\,g_1 \ g_2 \ \ldots \ g_M\,]^T$

- Backward prediction error of order M – BACKWARD PREDICTION ERROR:
  $b_M(n) = u(n-M) - \hat{u}_b(n-M) = u(n-M) - \mathbf{g}^T\mathbf{u}(n)$

- Optimality criterion:
  $J_b(\mathbf{g}) = E[b_M(n)]^2 = E[u(n-M) - \mathbf{g}^T\mathbf{u}(n)]^2$


Optimal backward linear prediction

- Optimal solution:
  Optimal Backward Predictor: $\mathbf{g}_o = \mathbf{R}^{-1}\mathbf{r}^B = \mathbf{w}_o^B$
  Backward Prediction Error Power: $P_M = r(0) - (\mathbf{r}^B)^T \mathbf{g}_o = r(0) - \mathbf{r}^T \mathbf{w}_o$ (the same as the forward prediction error power)

- Derivation based on optimal Wiener filter design.
  The optimal backward predictor evaluation can be rephrased as the following Wiener filter design problem:
  - find the FIR filtering process $y(n) = \mathbf{g}^T\mathbf{u}(n)$,
  - "as close as possible" to the desired signal $d(n) = u(n-M)$, i.e.,
  - minimizing the criterion $E[d(n) - y(n)]^2 = E[u(n-M) - \mathbf{g}^T\mathbf{u}(n)]^2$.

  Then the optimal solution is given by $\mathbf{g}_o = \mathbf{R}^{-1}\mathbf{p}$, where $\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}(n)^T]$ and
  $\mathbf{p} = E[d(n)\mathbf{u}(n)] = E[u(n-M)\mathbf{u}(n)] = E[u(n-M-1)\mathbf{u}(n-1)] = \mathbf{r}^B$, i.e.,

  $\mathbf{g}_o = \mathbf{R}^{-1}\mathbf{r}^B$

  and the optimal criterion value is

  $J_b(\mathbf{g}_o) = E[b_M(n)]^2 = E[d(n)]^2 - \mathbf{g}_o^T \mathbf{R}\, \mathbf{g}_o = E[d(n)]^2 - \mathbf{g}_o^T \mathbf{r}^B = r(0) - \mathbf{g}_o^T \mathbf{r}^B$


Relations between Backward and Forward predictors

- Relation between the Backward and Forward optimal predictors:
  $\mathbf{g}_o = \mathbf{w}_o^B$

- Useful mathematical result:
  If the matrix $\mathbf{R}$ is Toeplitz, then for all vectors $\mathbf{x}$,
  $(\mathbf{R}\mathbf{x})^B = \mathbf{R}\mathbf{x}^B$, i.e., $(\mathbf{R}\mathbf{x})^B_i = (\mathbf{R}\mathbf{x}^B)_i$, i.e., $(\mathbf{R}\mathbf{x})_{M-i+1} = (\mathbf{R}\mathbf{x}^B)_i$

  Proof:

  $(\mathbf{R}\mathbf{x}^B)_i = \sum_{j=1}^{M} R_{i,j}\, x_{M-j+1} = \sum_{j=1}^{M} r(i-j)\, x_{M-j+1} \overset{j = M-k+1}{=} \sum_{k=1}^{M} r(i-M+k-1)\, x_k = \sum_{k=1}^{M} R_{M-i+1,k}\, x_k = (\mathbf{R}\mathbf{x})_{M-i+1} = (\mathbf{R}\mathbf{x})^B_i$


Relations between Backward and Forward predictors

- The Forward and Backward optimal predictors are solutions of the systems

  $\mathbf{R}\mathbf{w}_o = \mathbf{r}$
  $\mathbf{R}\mathbf{g}_o = \mathbf{r}^B$

  By the reversing property of the Toeplitz matrix $\mathbf{R}$,

  $\mathbf{R}\mathbf{g}_o = \mathbf{r}^B = (\mathbf{R}\mathbf{w}_o)^B = \mathbf{R}\mathbf{w}_o^B$

  and since $\mathbf{R}$ is supposed nonsingular, we have

  $\mathbf{g}_o = \mathbf{w}_o^B$


Augmented Wiener-Hopf equations for Backward prediction

- The optimal Backward predictor filter solution $\mathbf{g}_o$ and the optimal Backward prediction error power satisfy

  $\mathbf{R}\mathbf{g}_o - \mathbf{r}^B = 0$
  $r(0) - (\mathbf{r}^B)^T \mathbf{g}_o = P_M$

  which can be written in block matrix form:

  $\begin{bmatrix} \mathbf{R} & \mathbf{r}^B \\ (\mathbf{r}^B)^T & r(0) \end{bmatrix} \begin{bmatrix} -\mathbf{g}_o \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ P_M \end{bmatrix}$

  where we can identify the factors as:

  $\begin{bmatrix} -\mathbf{g}_o \\ 1 \end{bmatrix} = \mathbf{c}_M, \qquad \begin{bmatrix} \mathbf{R} & \mathbf{r}^B \\ (\mathbf{r}^B)^T & r(0) \end{bmatrix} = \mathbf{R}_{M+1}$ – autocorrelation matrix of dimensions $(M+1)\times(M+1)$


Augmented Wiener-Hopf equations for Backward prediction

Finally, the augmented Wiener-Hopf equations for the optimal backward prediction error filter are

$\mathbf{R}_{M+1}\mathbf{c}_M = \begin{bmatrix} \mathbf{0} \\ P_M \end{bmatrix}$

or

$\begin{bmatrix} r(0) & r(1) & \ldots & r(M) \\ r(1) & r(0) & \ldots & r(M-1) \\ \vdots & \vdots & & \vdots \\ r(M) & r(M-1) & \ldots & r(0) \end{bmatrix} \begin{bmatrix} c_{M,0} \\ c_{M,1} \\ c_{M,2} \\ \vdots \\ c_{M,M} \end{bmatrix} = \begin{bmatrix} r(0) & r(1) & \ldots & r(M) \\ r(1) & r(0) & \ldots & r(M-1) \\ \vdots & \vdots & & \vdots \\ r(M) & r(M-1) & \ldots & r(0) \end{bmatrix} \begin{bmatrix} a_{M,M} \\ a_{M,M-1} \\ a_{M,M-2} \\ \vdots \\ a_{M,0} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ P_M \end{bmatrix}$

(so $\mathbf{c}_M = \mathbf{a}_M^B$). The augmented Wiener-Hopf equations for forward and backward prediction are very similar, and they help in finding order-recursive relations, as shown next.


Levinson – Durbin algorithm

- The augmented Wiener-Hopf equations for forward and backward prediction provide a short derivation of the main Levinson-Durbin recursions.
- Stressing the order of the filter: all variables receive a subscript expressing the order of the predictor: $\mathbf{R}_m, \mathbf{r}_m, \mathbf{a}_m, \mathbf{w}_{o_m}$.
- Several order-recursive equations can be written for the involved quantities:

  $\mathbf{r}_{m+1} = [\,r(1) \ r(2) \ \ldots \ r(m) \ r(m+1)\,]^T = \begin{bmatrix} \mathbf{r}_m \\ r(m+1) \end{bmatrix}$

  $\mathbf{R}_{m+1} = \begin{bmatrix} \mathbf{R}_m & \mathbf{r}_m^B \\ (\mathbf{r}_m^B)^T & r(0) \end{bmatrix} = \begin{bmatrix} r(0) & \mathbf{r}_m^T \\ \mathbf{r}_m & \mathbf{R}_m \end{bmatrix}$


LD recursion derivation:

Define the vector $\boldsymbol{\psi}$ and the scalar $\Delta_{m-1}$, and evaluate them using the augmented WH equations:

$\boldsymbol{\psi} = \begin{bmatrix} \mathbf{a}_{m-1} \\ 0 \end{bmatrix} - \frac{\Delta_{m-1}}{P_{m-1}} \begin{bmatrix} 0 \\ \mathbf{a}_{m-1}^B \end{bmatrix} \qquad (2)$

$\Delta_{m-1} = \mathbf{r}_m^T \mathbf{a}_{m-1}^B = \mathbf{a}_{m-1}^T \mathbf{r}_m^B = r(m) + \sum_{k=1}^{m-1} a_{m-1,k}\, r(m-k)$

Multiplying the right-hand side of Equation (2) by $\mathbf{R}_{m+1} = \begin{bmatrix} \mathbf{R}_m & \mathbf{r}_m^B \\ (\mathbf{r}_m^B)^T & r(0) \end{bmatrix} = \begin{bmatrix} r(0) & \mathbf{r}_m^T \\ \mathbf{r}_m & \mathbf{R}_m \end{bmatrix}$, we obtain

$\mathbf{R}_{m+1}\boldsymbol{\psi} = \mathbf{R}_{m+1} \left\{ \begin{bmatrix} \mathbf{a}_{m-1} \\ 0 \end{bmatrix} - \frac{\Delta_{m-1}}{P_{m-1}} \begin{bmatrix} 0 \\ \mathbf{a}_{m-1}^B \end{bmatrix} \right\}$

$= \begin{bmatrix} \mathbf{R}_m & \mathbf{r}_m^B \\ (\mathbf{r}_m^B)^T & r(0) \end{bmatrix} \begin{bmatrix} \mathbf{a}_{m-1} \\ 0 \end{bmatrix} - \frac{\Delta_{m-1}}{P_{m-1}} \begin{bmatrix} r(0) & \mathbf{r}_m^T \\ \mathbf{r}_m & \mathbf{R}_m \end{bmatrix} \begin{bmatrix} 0 \\ \mathbf{a}_{m-1}^B \end{bmatrix}$

$= \begin{bmatrix} \mathbf{R}_m \mathbf{a}_{m-1} \\ (\mathbf{r}_m^B)^T \mathbf{a}_{m-1} \end{bmatrix} - \frac{\Delta_{m-1}}{P_{m-1}} \begin{bmatrix} \mathbf{r}_m^T \mathbf{a}_{m-1}^B \\ \mathbf{R}_m \mathbf{a}_{m-1}^B \end{bmatrix} = \begin{bmatrix} \mathbf{R}_m \mathbf{a}_{m-1} \\ \Delta_{m-1} \end{bmatrix} - \frac{\Delta_{m-1}}{P_{m-1}} \begin{bmatrix} \Delta_{m-1} \\ \mathbf{R}_m \mathbf{a}_{m-1}^B \end{bmatrix}$

$= \begin{bmatrix} P_{m-1} \\ \mathbf{0}_{m-1} \\ \Delta_{m-1} \end{bmatrix} - \frac{\Delta_{m-1}}{P_{m-1}} \begin{bmatrix} \Delta_{m-1} \\ \mathbf{0}_{m-1} \\ P_{m-1} \end{bmatrix} = \begin{bmatrix} P_{m-1} - \frac{\Delta_{m-1}^2}{P_{m-1}} \\ \mathbf{0}_{m-1} \\ \Delta_{m-1} - \Delta_{m-1} \end{bmatrix} = \begin{bmatrix} P_{m-1} - \frac{\Delta_{m-1}^2}{P_{m-1}} \\ \mathbf{0}_m \end{bmatrix}$


LD recursion proof:

Recall the augmented WH equation

$\mathbf{R}_{M+1}\mathbf{a}_M = \begin{bmatrix} P_M \\ \mathbf{0} \end{bmatrix}$

Now, using $\psi(1) = a_{m-1,0} = 1$, and since we suppose $\mathbf{R}_m$ nonsingular, the unique solution of

$\mathbf{R}_{m+1}\boldsymbol{\psi} = \begin{bmatrix} P_{m-1} - \frac{\Delta_{m-1}^2}{P_{m-1}} \\ \mathbf{0}_m \end{bmatrix}$

provides the optimal predictor $\mathbf{a}_m = \boldsymbol{\psi}$, with the recursion (2), and the optimal prediction error power

$P_m = P_{m-1} - \frac{\Delta_{m-1}^2}{P_{m-1}} = P_{m-1}\left(1 - \frac{\Delta_{m-1}^2}{P_{m-1}^2}\right) = P_{m-1}(1 - \Gamma_m^2)$

with the notation

$\Gamma_m = -\frac{\Delta_{m-1}}{P_{m-1}}$


Levinson – Durbin recursions

$\mathbf{a}_m = \begin{bmatrix} \mathbf{a}_{m-1} \\ 0 \end{bmatrix} + \Gamma_m \begin{bmatrix} 0 \\ \mathbf{a}_{m-1}^B \end{bmatrix}$ — vector form of the L – D recursions

$a_{m,k} = a_{m-1,k} + \Gamma_m a_{m-1,m-k}, \quad k = 0, 1, \ldots, m$ — scalar form of the L – D recursions

$\Delta_{m-1} = \mathbf{a}_{m-1}^T \mathbf{r}_m^B = r(m) + \sum_{k=1}^{m-1} a_{m-1,k}\, r(m-k)$

$\Gamma_m = -\frac{\Delta_{m-1}}{P_{m-1}}$

$P_m = P_{m-1}(1 - \Gamma_m^2)$


LD recursion variables: interpretation of $\Delta_m$ and $\Gamma_m$

1. $\Delta_{m-1} = E[f_{m-1}(n)\, b_{m-1}(n-1)]$
   Proof (solution of Problem 9, page 238 in [Haykin91]):

   $E[f_{m-1}(n)\, b_{m-1}(n-1)] = E\big[\mathbf{a}_{m-1}^T \mathbf{u}_m(n)\big]\big[\mathbf{u}_m(n-1)^T \mathbf{a}_{m-1}^B\big] = \mathbf{a}_{m-1}^T \begin{bmatrix} \mathbf{r}_m^T \\ \mathbf{R}_{m-1} \ \ \mathbf{r}_{m-1}^B \end{bmatrix} \begin{bmatrix} -\mathbf{w}_{m-1}^B \\ 1 \end{bmatrix} = [\,1 \ \ {-\mathbf{w}_{m-1}^T}\,] \begin{bmatrix} \mathbf{r}_m^T \mathbf{a}_{m-1}^B \\ \mathbf{0}_{m-1} \end{bmatrix} = \mathbf{r}_m^T \mathbf{a}_{m-1}^B = \Delta_{m-1}$

2. $\Delta_0 = E[f_0(n) b_0(n-1)] = E[u(n)u(n-1)] = r(1)$

3. Iterating $P_m = P_{m-1}(1 - \Gamma_m^2)$ we obtain
   $P_m = P_0 \prod_{k=1}^{m} (1 - \Gamma_k^2)$

4. Since the power of the prediction error must be positive for all orders, the reflection coefficients are at most unity in absolute value:
   $|\Gamma_m| \leq 1 \quad \forall m = 1, \ldots, M$

5. The reflection coefficient equals the last autoregressive coefficient, at each order m:
   $\Gamma_m = a_{m,m}, \quad \forall m = M, M-1, \ldots, 1$


Algorithm Levinson-Durbin

Given $r(0), r(1), r(2), \ldots, r(M)$
(for example, estimated from data $u(1), u(2), u(3), \ldots, u(T)$ using $r(k) = \frac{1}{T}\sum_{n=k+1}^{T} u(n)u(n-k)$):

1. Initialize $\Delta_0 = r(1)$, $P_0 = r(0)$
2. For $m = 1, \ldots, M$:
   2.1 $\Gamma_m = -\frac{\Delta_{m-1}}{P_{m-1}}$
   2.2 $a_{m,0} = 1$
   2.3 $a_{m,k} = a_{m-1,k} + \Gamma_m a_{m-1,m-k}, \quad k = 1, \ldots, m$
   2.4 $\Delta_m = r(m+1) + \sum_{k=1}^{m} a_{m,k}\, r(m+1-k)$
   2.5 $P_m = P_{m-1}(1 - \Gamma_m^2)$

Computational complexity: the m-th iteration of Step 2 takes $2m+2$ multiplications, $2m+2$ additions, and 1 division; the overall computational complexity is $O(M^2)$ operations.
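A direct numpy transcription of the algorithm (a sketch; the function name and returning all reflection coefficients are my own conventions):

```python
import numpy as np

def levinson_durbin(r):
    """From r(0..M): prediction-error filter a_M, power P_M, Gamma_1..Gamma_M."""
    M = len(r) - 1
    a = np.array([1.0])               # a_0
    P = r[0]                          # P_0 = r(0)
    delta = r[1]                      # Delta_0 = r(1)
    gammas = np.zeros(M)
    for m in range(1, M + 1):
        gamma = -delta / P            # step 2.1
        ext = np.concatenate((a, [0.0]))
        a = ext + gamma * ext[::-1]   # steps 2.2-2.3 (scalar L-D recursion)
        P *= 1.0 - gamma ** 2         # step 2.5
        gammas[m - 1] = gamma
        if m < M:                     # step 2.4: Delta_m = r(m+1) + sum a_{m,k} r(m+1-k)
            delta = r[m + 1] + a[1:] @ r[m:0:-1]
    return a, P, gammas
```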


Algorithm (L–D) Second form

Given $r(0)$ and $\Gamma_1, \Gamma_2, \ldots, \Gamma_M$:

1. Initialize $P_0 = r(0)$
2. For $m = 1, \ldots, M$:
   2.1 $a_{m,0} = 1$
   2.2 $a_{m,k} = a_{m-1,k} + \Gamma_m a_{m-1,m-k}, \quad k = 1, \ldots, m$
   2.3 $P_m = P_{m-1}(1 - \Gamma_m^2)$
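The second form in code: a sketch of stepping up from $\{P_0, \Gamma_1, \ldots, \Gamma_M\}$ to $\mathbf{a}_M$ and $P_M$:

```python
import numpy as np

def reflection_to_ar(P0, gammas):
    """L-D second form: from P_0 and Gamma_1..Gamma_M to a_M and P_M."""
    a = np.array([1.0])
    P = P0
    for gamma in gammas:
        ext = np.concatenate((a, [0.0]))
        a = ext + gamma * ext[::-1]   # a_m from a_{m-1}
        P *= 1.0 - gamma ** 2         # P_m = P_{m-1}(1 - Gamma_m^2)
    return a, P
```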


Inverse Levinson – Durbin algorithm

Writing the scalar L – D recursion for the indices $k$ and $m-k$,

$a_{m,k} = a_{m-1,k} + \Gamma_m a_{m-1,m-k}$
$a_{m,m-k} = a_{m-1,m-k} + \Gamma_m a_{m-1,k}$

or

$\begin{bmatrix} a_{m,k} \\ a_{m,m-k} \end{bmatrix} = \begin{bmatrix} 1 & \Gamma_m \\ \Gamma_m & 1 \end{bmatrix} \begin{bmatrix} a_{m-1,k} \\ a_{m-1,m-k} \end{bmatrix}$

and, inverting the $2\times 2$ matrix and using the identity $\Gamma_m = a_{m,m}$,

$a_{m-1,k} = \frac{a_{m,k} - a_{m,m}\, a_{m,m-k}}{1 - (a_{m,m})^2}, \quad k = 1, \ldots, m$
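In code, the inverse recursion peels off one order at a time (a sketch, assuming $a_{M,0} = 1$ and $|a_{m,m}| < 1$ at every order):

```python
import numpy as np

def ar_to_reflection(a_M):
    """Inverse L-D: recover Gamma_1..Gamma_M from the order-M filter a_M."""
    a = np.asarray(a_M, dtype=float)
    M = len(a) - 1
    gammas = np.zeros(M)
    for m in range(M, 0, -1):
        gamma = a[m]                  # Gamma_m = a_{m,m}
        gammas[m - 1] = gamma
        # a_{m-1,k} = (a_{m,k} - a_{m,m} a_{m,m-k}) / (1 - a_{m,m}^2)
        a = ((a - gamma * a[::-1]) / (1.0 - gamma ** 2))[:m]
    return gammas
```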


Inverse Levinson – Durbin algorithm

The second-order properties of the AR process are perfectly described by the set of reflection coefficients.
This follows immediately from the following property:
The sets $\{P_0, \Gamma_1, \Gamma_2, \ldots, \Gamma_M\}$ and $\{r(0), r(1), \ldots, r(M)\}$ are in one-to-one correspondence.

Proof:
(a) $\{r(0), r(1), \ldots, r(M)\} \overset{\text{Algorithm L–D}}{\Longrightarrow} \{P_0, \Gamma_1, \Gamma_2, \ldots, \Gamma_M\}$
(b) From

$\Gamma_{m+1} = -\frac{\Delta_m}{P_m} = -\frac{r(m+1) + \sum_{k=1}^{m} a_{m,k}\, r(m+1-k)}{P_m}$

we can obtain immediately

$r(m+1) = -\Gamma_{m+1}P_m - \sum_{k=1}^{m} a_{m,k}\, r(m+1-k)$

which can be iterated together with Algorithm L–D (second form) to obtain all $r(1), \ldots, r(M)$ (with $r(0) = P_0$).
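Direction (b) in code (a sketch): iterating the formula for $r(m+1)$ together with the second-form recursions:

```python
import numpy as np

def reflection_to_autocorr(P0, gammas):
    """Recover r(0..M) from {P_0, Gamma_1, ..., Gamma_M}."""
    M = len(gammas)
    r = np.zeros(M + 1)
    r[0] = P0                         # r(0) = P_0
    a = np.array([1.0])
    P = P0
    for m in range(M):
        # r(m+1) = -Gamma_{m+1} P_m - sum_{k=1}^m a_{m,k} r(m+1-k)
        r[m + 1] = -gammas[m] * P - a[1:] @ r[m:0:-1]
        ext = np.concatenate((a, [0.0]))
        a = ext + gammas[m] * ext[::-1]   # step up to a_{m+1}
        P *= 1.0 - gammas[m] ** 2         # and to P_{m+1}
    return r
```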


Whitening property of prediction-error filters

- In theory, a prediction-error filter is capable of whitening a stationary discrete-time stochastic process applied to its input, if the order of the filter is high enough.
- Then all information in the original stochastic process u(n) is represented by the parameters $\{P_M, a_{M,1}, a_{M,2}, \ldots, a_{M,M}\}$ (or, equivalently, by $\{P_0, \Gamma_1, \Gamma_2, \ldots, \Gamma_M\}$).
- A signal equivalent (in its second-order properties) can be generated starting from $\{P_M, a_{M,1}, a_{M,2}, \ldots, a_{M,M}\}$ using the autoregressive difference equation model.
- These "analyze and generate" paradigms combine to provide the basic principle of vocoders.


Lattice Predictors

- Order-update recursions for prediction errors.
  Since the predictors obey the order-recursive equations

  $\mathbf{a}_m = \begin{bmatrix} \mathbf{a}_{m-1} \\ 0 \end{bmatrix} + \Gamma_m \begin{bmatrix} 0 \\ \mathbf{a}_{m-1}^B \end{bmatrix}, \qquad \mathbf{a}_m^B = \begin{bmatrix} 0 \\ \mathbf{a}_{m-1}^B \end{bmatrix} + \Gamma_m \begin{bmatrix} \mathbf{a}_{m-1} \\ 0 \end{bmatrix}$

  it is natural that the prediction errors can be expressed in order-recursive form. These forms result from considering the two partitions of the vector $\mathbf{u}_{m+1}(n)$:

  $\mathbf{u}_{m+1}(n) = \begin{bmatrix} \mathbf{u}_m(n) \\ u(n-m) \end{bmatrix} = \begin{bmatrix} u(n) \\ \mathbf{u}_m(n-1) \end{bmatrix}$

  Combining the equations we obtain

  $f_m(n) = \mathbf{a}_m^T \mathbf{u}_{m+1}(n) = [\,\mathbf{a}_{m-1}^T \ 0\,] \begin{bmatrix} \mathbf{u}_m(n) \\ u(n-m) \end{bmatrix} + \Gamma_m [\,0 \ (\mathbf{a}_{m-1}^B)^T\,] \begin{bmatrix} u(n) \\ \mathbf{u}_m(n-1) \end{bmatrix}$
  $= \mathbf{a}_{m-1}^T \mathbf{u}_m(n) + \Gamma_m (\mathbf{a}_{m-1}^B)^T \mathbf{u}_m(n-1)$
  $= f_{m-1}(n) + \Gamma_m b_{m-1}(n-1)$


Lattice Predictors

$b_m(n) = (\mathbf{a}_m^B)^T \mathbf{u}_{m+1}(n) = [\,0 \ (\mathbf{a}_{m-1}^B)^T\,] \begin{bmatrix} u(n) \\ \mathbf{u}_m(n-1) \end{bmatrix} + \Gamma_m [\,\mathbf{a}_{m-1}^T \ 0\,] \begin{bmatrix} \mathbf{u}_m(n) \\ u(n-m) \end{bmatrix}$
$= (\mathbf{a}_{m-1}^B)^T \mathbf{u}_m(n-1) + \Gamma_m \mathbf{a}_{m-1}^T \mathbf{u}_m(n)$
$= b_{m-1}(n-1) + \Gamma_m f_{m-1}(n)$

The order recursions of the errors can be represented as

$\begin{bmatrix} f_m(n) \\ b_m(n) \end{bmatrix} = \begin{bmatrix} 1 & \Gamma_m \\ \Gamma_m & 1 \end{bmatrix} \begin{bmatrix} f_{m-1}(n) \\ b_{m-1}(n-1) \end{bmatrix}$


Lattice Predictors

$f_m(n) = f_{m-1}(n) + \Gamma_m b_{m-1}(n-1)$
$b_m(n) = b_{m-1}(n-1) + \Gamma_m f_{m-1}(n)$

Using the time-shift operator $q^{-1}$, the prediction error recursions are given by

$\begin{bmatrix} f_m(n) \\ b_m(n) \end{bmatrix} = \begin{bmatrix} 1 & \Gamma_m q^{-1} \\ \Gamma_m & q^{-1} \end{bmatrix} \begin{bmatrix} f_{m-1}(n) \\ b_{m-1}(n) \end{bmatrix}$

which can now be iterated for $m = 1, 2, \ldots, M$ to obtain

$\begin{bmatrix} f_M(n) \\ b_M(n) \end{bmatrix} = \begin{bmatrix} 1 & \Gamma_M q^{-1} \\ \Gamma_M & q^{-1} \end{bmatrix} \begin{bmatrix} 1 & \Gamma_{M-1} q^{-1} \\ \Gamma_{M-1} & q^{-1} \end{bmatrix} \cdots \begin{bmatrix} 1 & \Gamma_1 q^{-1} \\ \Gamma_1 & q^{-1} \end{bmatrix} \begin{bmatrix} f_0(n) \\ b_0(n) \end{bmatrix} = \begin{bmatrix} 1 & \Gamma_M q^{-1} \\ \Gamma_M & q^{-1} \end{bmatrix} \cdots \begin{bmatrix} 1 & \Gamma_1 q^{-1} \\ \Gamma_1 & q^{-1} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} u(n)$

Having the reflection coefficients available, all prediction errors of orders $m = 1, \ldots, M$ can be computed using the lattice predictor, in $2M$ additions and $2M$ multiplications per sample, e.g. as in the sketch below.
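A per-sample numpy sketch of the lattice predictor (the function name and state layout are mine; the inner loop is exactly the two stage equations above):

```python
import numpy as np

def lattice_analysis(u, gammas):
    """Forward/backward errors f_M(n), b_M(n) from u(n) and Gamma_1..Gamma_M."""
    M = len(gammas)
    b_delay = np.zeros(M)             # b_delay[m] holds b_m(n-1)
    f_out = np.zeros(len(u))
    b_out = np.zeros(len(u))
    for n, x in enumerate(u):
        f = b = x                     # f_0(n) = b_0(n) = u(n)
        for m in range(M):
            f_new = f + gammas[m] * b_delay[m]   # f_{m+1}(n)
            b_new = b_delay[m] + gammas[m] * f   # b_{m+1}(n)
            b_delay[m] = b            # store b_m(n) for the next instant
            f, b = f_new, b_new
        f_out[n], b_out[n] = f, b
    return f_out, b_out
```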


Lattice Predictors

Some characteristics of the lattice predictor:

1. It is the most efficient structure for generating simultaneously the forward and backward prediction errors.
2. The lattice structure is modular: increasing the order of the filter requires adding only one extra module, leaving all other modules the same.
3. The various stages of a lattice are decoupled from each other in the following sense: the memory of the lattice (storing $b_0(n-1), \ldots, b_{M-1}(n-1)$) contains orthogonal variables; thus the information contained in u(n) is split into M pieces, which gradually reduces the redundancy of the signal.
4. The similar structure of the lattice filter stages makes the filter suitable for VLSI implementation.


Lattice Inverse filters

- The basic equations for one stage of the lattice are

  $f_m(n) = f_{m-1}(n) + \Gamma_m b_{m-1}(n-1)$
  $b_m(n) = \Gamma_m f_{m-1}(n) + b_{m-1}(n-1)$

  and, simply rewriting the first equation,

  $f_{m-1}(n) = f_m(n) - \Gamma_m b_{m-1}(n-1)$
  $b_m(n) = \Gamma_m f_{m-1}(n) + b_{m-1}(n-1)$

  we obtain the basic stage of the lattice inverse filter representation.


Lattice Inverse filter and its use as a Synthesis filter

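In the synthesis (vocoder) use, the inverse lattice reconstructs $u(n)$ from the order-M forward error, e.g. driven by a white or pulse-train excitation. A sketch under my own naming conventions:

```python
import numpy as np

def lattice_synthesis(f_M, gammas):
    """Reconstruct u(n) from f_M(n) with the inverse (synthesis) lattice."""
    M = len(gammas)
    b_delay = np.zeros(M)             # b_delay[m] holds b_m(n-1)
    u = np.zeros(len(f_M))
    for n, f_top in enumerate(f_M):
        f = np.zeros(M + 1)           # f[m] = f_m(n)
        f[M] = f_top
        for m in range(M, 0, -1):     # f_{m-1}(n) = f_m(n) - Gamma_m b_{m-1}(n-1)
            f[m - 1] = f[m] - gammas[m - 1] * b_delay[m - 1]
        u[n] = f[0]                   # u(n) = f_0(n)
        b = f[0]                      # b_0(n) = u(n)
        for m in range(M):            # b_{m+1}(n) = b_m(n-1) + Gamma_{m+1} f_m(n)
            b_next = b_delay[m] + gammas[m] * f[m]
            b_delay[m] = b
            b = b_next
    return u
```

Feeding the forward error of the analysis lattice back through this filter with the same reflection coefficients recovers the input up to numerical precision, which is a convenient sanity check of both sketches.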


Burg estimation algorithm (not for exam)

- The optimum design of the lattice filter is a decoupled problem. At stage m the optimality criterion is

  $J_m = E[f_m^2(n)] + E[b_m^2(n)]$

  and, using the stage-m equations

  $f_m(n) = f_{m-1}(n) + \Gamma_m b_{m-1}(n-1)$
  $b_m(n) = b_{m-1}(n-1) + \Gamma_m f_{m-1}(n)$

  $J_m = E[(f_{m-1}(n) + \Gamma_m b_{m-1}(n-1))^2] + E[(b_{m-1}(n-1) + \Gamma_m f_{m-1}(n))^2]$
  $= E[f_{m-1}^2(n) + b_{m-1}^2(n-1)](1 + \Gamma_m^2) + 4\Gamma_m E[b_{m-1}(n-1) f_{m-1}(n)]$

  Taking now the derivative with respect to $\Gamma_m$ of the above criterion, we obtain

  $\frac{dJ_m}{d\Gamma_m} = 2E[f_{m-1}^2(n) + b_{m-1}^2(n-1)]\,\Gamma_m + 4E[b_{m-1}(n-1) f_{m-1}(n)] = 0$

  and therefore

  $\Gamma_m^* = -\frac{2E[b_{m-1}(n-1) f_{m-1}(n)]}{E[f_{m-1}^2(n)] + E[b_{m-1}^2(n-1)]}$


Burg estimation algorithm (not for exam)

- Replacing the expectation operator $E$ with the time-average operator $\frac{1}{N}\sum_{n=1}^{N}$, we obtain one direct way to estimate the parameters of the lattice filter, starting from the data available in the lattice filter:

  $\hat{\Gamma}_m = -\frac{2\sum_{n=1}^{N} b_{m-1}(n-1) f_{m-1}(n)}{\sum_{n=1}^{N} [f_{m-1}^2(n) + b_{m-1}^2(n-1)]}$

  The parameters $\Gamma_1, \ldots, \Gamma_M$ can be found by solving first for $\Gamma_1$, then using $\Gamma_1$ to filter the data u(n) and obtain $f_1(n)$ and $b_1(n)$, then finding the estimate of $\Gamma_2$, and so on. There are other possible estimators, but the Burg estimator ensures the condition $|\Gamma_m| < 1$, which is required for the stability of the lattice (synthesis) filter. A sketch of this stage-by-stage procedure is given below.
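A compact numpy sketch of the stage-by-stage Burg procedure (array slicing aligns $f_{m-1}(n)$ with $b_{m-1}(n-1)$; the function name is mine):

```python
import numpy as np

def burg_reflection(u, M):
    """Burg estimates of Gamma_1..Gamma_M; guarantees |Gamma_m| < 1."""
    f = np.asarray(u, dtype=float)    # f_0(n) = u(n)
    b = f.copy()                      # b_0(n) = u(n)
    gammas = np.zeros(M)
    for m in range(M):
        # Gamma_m = -2 sum f_{m-1}(n) b_{m-1}(n-1) / sum [f^2 + b^2]
        num = -2.0 * (f[1:] @ b[:-1])
        den = f[1:] @ f[1:] + b[:-1] @ b[:-1]
        gammas[m] = num / den
        f, b = f[1:] + gammas[m] * b[:-1], b[:-1] + gammas[m] * f[1:]
    return gammas
```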


Gradient Adaptive Lattice Filters (not for exam)

- Imposing the same optimality criterion as in the Burg method,

  $J_m = E[f_m^2(n)] + E[b_m^2(n)]$

  the gradient with respect to the lattice filter parameter at stage m is

  $\frac{dJ_m}{d\Gamma_m} = 2E[f_m(n) b_{m-1}(n-1) + f_{m-1}(n) b_m(n)]$

  and can be approximated (as usual in LMS algorithms) by the instantaneous value

  $\nabla J_m \approx 2[f_m(n) b_{m-1}(n-1) + f_{m-1}(n) b_m(n)]$

  We obtain the updating equation for the parameter $\Gamma_m$:

  $\Gamma_m(n+1) = \Gamma_m(n) - \frac{1}{2}\mu_m(n)\nabla J_m = \Gamma_m(n) - \mu_m(n)\big(f_m(n) b_{m-1}(n-1) + f_{m-1}(n) b_m(n)\big)$

  In order to normalize the adaptation step, the following value of $\mu_m(n)$ was suggested:

  $\mu_m(n) = \frac{1}{\xi_{m-1}(n)}$

  where

  $\xi_{m-1}(N) = \sum_{i=1}^{N} [f_{m-1}^2(i) + b_{m-1}^2(i-1)] = \xi_{m-1}(N-1) + f_{m-1}^2(N) + b_{m-1}^2(N-1)$

  represents the total energy of the forward and backward prediction errors.


Gradient Adaptive Lattice Filters

- We can introduce a forgetting factor using

  $\xi_{m-1}(n) = \beta\,\xi_{m-1}(n-1) + (1-\beta)[f_{m-1}^2(n) + b_{m-1}^2(n-1)]$

  with the forgetting factor $\beta$ close to 1, but $0 < \beta < 1$, allowing the filter to forget the old history, which may be irrelevant if the filtered signal is nonstationary. A per-sample sketch of the resulting adaptation follows.
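Putting the pieces together, a per-sample GAL sketch with the normalized step and the forgetting factor (the start-up constant `eps` and the overall loop structure are my assumptions, not from the lecture):

```python
import numpy as np

def gal(u, M, beta=0.99, eps=1e-6):
    """Gradient Adaptive Lattice: per-sample adaptation of Gamma_1..Gamma_M."""
    gammas = np.zeros(M)
    b_delay = np.zeros(M)             # b_{m-1}(n-1) for stage m
    xi = np.full(M, eps)              # xi_{m-1}(n); eps avoids division by zero
    for x in np.asarray(u, dtype=float):
        f = b = x                     # f_0(n) = b_0(n) = u(n)
        for m in range(M):
            xi[m] = beta * xi[m] + (1 - beta) * (f * f + b_delay[m] ** 2)
            f_new = f + gammas[m] * b_delay[m]
            b_new = b_delay[m] + gammas[m] * f
            # Gamma_m(n+1) = Gamma_m(n) - (f_m b_{m-1}(n-1) + f_{m-1} b_m)/xi_{m-1}(n)
            gammas[m] -= (f_new * b_delay[m] + f * b_new) / xi[m]
            b_delay[m] = b
            f, b = f_new, b_new
    return gammas
```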