Chapter 3  Wiener Filters

1. Introduction

- Wiener filters are a class of optimum linear filters that involve linear estimation of a desired signal sequence from another, related sequence.

- In the statistical approach to the solution of the linear filtering problem, we assume the availability of certain statistical parameters (e.g. the means and correlation functions) of the useful signal and the unwanted additive noise. The problem is to design a linear filter with the noisy data as input, with the requirement of minimizing the effect of the noise at the filter output according to some statistical criterion.

- A useful approach to this filter-optimization problem is to minimize the mean-square value of the error signal, defined as the difference between some desired response and the actual filter output. For stationary inputs, the resulting solution is commonly known as the Wiener filter.

- The Wiener filter is inadequate for dealing with situations in which nonstationarity of the signal and/or noise is intrinsic to the problem. In such situations, the optimum filter has to assume a time-varying form. A highly successful solution to this more difficult problem is found in the Kalman filter.

2. Linear Estimation with Mean-Square Error Criterion

- Fig. 3.1 shows the block schematic of a linear discrete-time filter W(z) for estimating a desired signal d(n) based on an excitation x(n).

[Fig. 3.1: x(n) → W(z) → y(n); the filter output y(n) is subtracted from d(n) to form the estimation error e(n)]

- We assume that both x(n) and d(n) are random processes (discrete-time random signals). The filter output is y(n), and e(n) is the estimation error.

- To find the optimum filter parameters, a cost function or performance function must be selected. In choosing a performance function, the following points have to be considered:
  (1) The performance function must be mathematically tractable.
  (2) The performance function should preferably have a single minimum, so that the optimum set of filter parameters can be selected unambiguously.

- The number of minima of a performance function is closely related to the filter structure. Recursive (IIR) filters, in general, result in a performance function that may have many minima, whereas non-recursive (FIR) filters are guaranteed to have a single global minimum point if a proper performance function is used.

- In the Wiener filter, the performance function is chosen to be
    \xi = E[e^2(n)]
This is also called the "mean-square error criterion".

3. Wiener Filter - the Transversal, Real-Valued Case

- Fig. 3.2 shows a transversal filter with tap weights w_0, w_1, ..., w_{N-1}.

[Fig. 3.2: tapped-delay-line (transversal) filter; x(n) passes through a chain of z^{-1} delays, the delayed samples x(n), x(n-1), ..., x(n-N+1) are weighted by w_0, w_1, ..., w_{N-1} and summed to give y(n); e(n) = d(n) - y(n)]

Let
    W = [w_0 \;\; w_1 \;\; \cdots \;\; w_{N-1}]^T
    X(n) = [x(n) \;\; x(n-1) \;\; \cdots \;\; x(n-N+1)]^T          ...(1)

The output is
    y(n) = \sum_{i=0}^{N-1} w_i\, x(n-i) = W^T X(n) = X^T(n) W          ...(2)

Thus we may write
    e(n) = d(n) - y(n) = d(n) - X^T(n) W          ...(3)

The performance function, or cost function, is then given by
    \xi = E[e^2(n)]
        = E[(d(n) - X^T(n)W)(d(n) - W^T X(n))]
        = E[d^2(n)] - E[d(n)X^T(n)]\,W - W^T E[X(n)d(n)] + W^T E[X(n)X^T(n)]\,W          ...(4)

Now we define the N x 1 cross-correlation vector
    P = E[X(n)\,d(n)] = [p_0 \;\; p_1 \;\; \cdots \;\; p_{N-1}]^T          ...(5)

and the N x N autocorrelation matrix
    R = E[X(n)X^T(n)] =
        [ r_{00}       r_{01}       ...   r_{0,N-1}   ]
        [ r_{10}       r_{11}       ...   r_{1,N-1}   ]
        [  ...          ...         ...     ...       ]
        [ r_{N-1,0}    r_{N-1,1}    ...   r_{N-1,N-1} ]          ...(6)

Also we note that
    E[d(n)X^T(n)] = P^T   and   P^T W = W^T P

Then we obtain
    \xi = E[d^2(n)] - 2 W^T P + W^T R W          ...(7)

Equation (7) is a quadratic function of the tap-weight vector W with a single global minimum. Note that R has to be a positive-definite matrix in order to have a unique minimum point in the w-space.

- Minimization of the performance function

To obtain the set of tap weights that minimize the performance function, we set
    \partial\xi/\partial w_i = 0,   for i = 0, 1, ..., N-1
or
    \nabla \xi = 0          ...(8)
where \nabla is the gradient operator, defined as the column vector
    \nabla = [\partial/\partial w_0 \;\; \partial/\partial w_1 \;\; \cdots \;\; \partial/\partial w_{N-1}]^T
and the zero vector 0 is the N-component vector
    0 = [0 \;\; 0 \;\; \cdots \;\; 0]^T

Equation (7) can be expanded as
    \xi = E[d^2(n)] - 2\sum_{l=0}^{N-1} p_l w_l + \sum_{l=0}^{N-1}\sum_{m=0}^{N-1} w_l w_m r_{lm}          ...(9)

and \sum_{l=0}^{N-1}\sum_{m=0}^{N-1} w_l w_m r_{lm} can be expanded, for a given i, as
    \sum_{l=0}^{N-1}\sum_{m=0}^{N-1} w_l w_m r_{lm}
      = \sum_{l\ne i}\sum_{m\ne i} w_l w_m r_{lm} + w_i\sum_{l\ne i} w_l r_{li} + w_i\sum_{m\ne i} w_m r_{im} + w_i^2 r_{ii}          ...(10)

Then we obtain
    \partial\xi/\partial w_i = -2 p_i + \sum_{l=0}^{N-1} w_l (r_{li} + r_{il}),   for i = 0, 1, 2, ..., N-1          ...(11)

By setting \partial\xi/\partial w_i = 0, we obtain
    \sum_{l=0}^{N-1} w_l (r_{li} + r_{il}) = 2 p_i          ...(12)

Note that r_{li} = E[x(n-l)x(n-i)] = \phi_{xx}(i-l).
By the symmetry property of the autocorrelation function of a real-valued signal, we have the relation r_{li} = r_{il}.

Equation (12) then becomes
    \sum_{l=0}^{N-1} r_{il} w_l = p_i,   for i = 0, 1, 2, ..., N-1

In matrix notation, we then obtain
    R\, W_{op} = P          ...(13)
where W_{op} is the optimum tap-weight vector.

Equation (13) is also known as the Wiener-Hopf equation, which has the solution
    W_{op} = R^{-1} P          ...(14)
assuming that R is invertible.

The minimum value of \xi is
    \xi_{min} = E[d^2(n)] - W_{op}^T P
              = E[d^2(n)] - W_{op}^T R\, W_{op}          ...(15)

Equation (15) can also be expressed as
    \xi_{min} = E[d^2(n)] - P^T R^{-1} P          ...(16)
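As a minimal numerical sketch (my addition, not part of the notes; the values of R, P and E[d^2(n)] below are arbitrary placeholders), equations (13)-(16) can be evaluated directly with NumPy:

```python
import numpy as np

# Hypothetical statistics (placeholders, not from the text).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])      # N x N autocorrelation matrix, Eq. (6)
P = np.array([0.8, 0.3])        # N x 1 cross-correlation vector, Eq. (5)
Ed2 = 1.0                       # assumed E[d^2(n)]

W_op = np.linalg.solve(R, P)    # Wiener-Hopf solution W_op = R^{-1} P, Eq. (14)
xi_min = Ed2 - P @ W_op         # minimum MSE, Eqs. (15)-(16)
print(W_op, xi_min)
```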

- Example 3.1: A modelling problem

Assume that
    d(n) = 2x(n) + 3x(n-1) + v(n)
where v(n) is a stationary white process with variance \sigma_v^2 = 0.1, and x(n) is also a white process, with unit variance \sigma_x^2 = 1.

Note: under this assumption, the actual system function of the plant is 2 + 3z^{-1}.

[Figure: modelling setup; x(n) drives both the plant (whose output plus v(n) gives d(n)) and the Wiener filter W(z) with output y(n); e(n) = d(n) - y(n)]

Design an optimum filter (Wiener filter) to model the plant.

Question: how should the number of taps N be chosen?

Once N is determined, the procedure to solve the problem is straightforward. If N = 2 is chosen (p. 55 of the textbook), then
    R = [1 \;\; 0 \,;\; 0 \;\; 1]   (the 2 x 2 identity),      P = [2 \;\; 3]^T
and
    \xi = 13.1 - 4 w_0 - 6 w_1 + w_0^2 + w_1^2
thus
    [w_{0,0} \;\; w_{0,1}]^T = [2 \;\; 3]^T,      \xi_{min} = 0.1
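A brief NumPy sketch (my addition, not part of the notes; sample size and seed are arbitrary) that reproduces this result by estimating R and P from simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000
x = rng.standard_normal(M)                    # white x(n), sigma_x^2 = 1
v = np.sqrt(0.1) * rng.standard_normal(M)     # white v(n), sigma_v^2 = 0.1
d = np.zeros(M)
d[1:] = 2 * x[1:] + 3 * x[:-1] + v[1:]        # d(n) = 2x(n) + 3x(n-1) + v(n)

# Build X(n) = [x(n), x(n-1)]^T for n >= 1 and estimate R and P from data.
X = np.stack([x[1:], x[:-1]], axis=0)         # shape (2, M-1)
R = X @ X.T / (M - 1)                         # estimate of E[X X^T], close to I
P = X @ d[1:] / (M - 1)                       # estimate of E[X d], close to [2, 3]^T
W_op = np.linalg.solve(R, P)                  # expect approximately [2, 3]
xi_min = np.mean(d[1:] ** 2) - P @ W_op       # expect approximately 0.1
print(W_op, xi_min)
```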

4. Principle of Orthogonality - An Alternative Approach to the Design of the Wiener Filter

- The cost function (or performance function) is given by
    \xi = E[e^2(n)]          ...(17)
and
    \partial\xi/\partial w_i = E\!\left[2 e(n)\,\frac{\partial e(n)}{\partial w_i}\right],   for i = 0, 1, 2, ..., N-1          ...(18)
where e(n) = d(n) - y(n). Since d(n) is independent of w_i, we get
    \frac{\partial e(n)}{\partial w_i} = -\frac{\partial y(n)}{\partial w_i} = -x(n-i)          ...(19)

Then we obtain
    \partial\xi/\partial w_i = -2 E[e(n)\,x(n-i)],   for i = 0, 1, 2, ..., N-1          ...(20)

- When the Wiener filter tap weights are set to their optimal values, \partial\xi/\partial w_i = 0. Hence, if e_0(n) is the estimation error when the w_i are set to their optimal values, equation (20) becomes
    E[e_0(n)\,x(n-i)] = 0,   for i = 0, 1, ..., N-1          ...(21)
That is, the estimation error is uncorrelated with the filter tap inputs x(n-i). This is known as "the principle of orthogonality".

- We can also show that the optimal filter output is uncorrelated with the estimation error. That is,
    E[e_0(n)\,y_0(n)] = 0          ...(22)
This result indicates that the optimized Wiener filter output and the estimation error are "orthogonal".
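As a numerical illustration (my addition, reusing the assumed data model of Example 3.1), the sketch below checks equations (21) and (22) with sample correlations:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000
x = rng.standard_normal(M)
d = np.zeros(M)
d[1:] = 2 * x[1:] + 3 * x[:-1] + np.sqrt(0.1) * rng.standard_normal(M - 1)

X = np.stack([x[1:], x[:-1]])                 # tap-input vectors X(n)
W_op = np.linalg.solve(X @ X.T / (M - 1), X @ d[1:] / (M - 1))
y0 = W_op @ X                                 # optimum output y_0(n)
e0 = d[1:] - y0                               # optimum error e_0(n)
print(np.mean(e0 * X, axis=1))                # E[e_0(n) x(n-i)] ~ 0, Eq. (21)
print(np.mean(e0 * y0))                       # E[e_0(n) y_0(n)] ~ 0, Eq. (22)
```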

5. Normalized Performance Function

- If the optimal filter tap weights are denoted w_{0,l}, l = 0, 1, 2, ..., N-1, the estimation error is then given by
    e_0(n) = d(n) - \sum_{l=0}^{N-1} w_{0,l}\, x(n-l)          ...(23)
and then
    d(n) = e_0(n) + y_0(n)          ...(24)
    E[d^2(n)] = E[e_0^2(n)] + E[y_0^2(n)] + 2E[e_0(n)y_0(n)]

- We may note that E[e_0(n)y_0(n)] = 0 (the principle of orthogonality) and
    E[e_0^2(n)] = \xi_{min}          ...(26)
and we obtain
    \xi_{min} = E[d^2(n)] - E[y_0^2(n)]          ...(27)

- Define \rho as the normalized performance function
    \rho = \frac{\xi}{E[d^2(n)]}          ...(28)

- \rho = 1 when y(n) = 0.
- \rho reaches its minimum value, \rho_{min}, when the filter tap weights are chosen to achieve the minimum mean-squared error. This gives
    \rho_{min} = 1 - \frac{E[y_0^2(n)]}{E[d^2(n)]}          ...(29)
We have 0 \le \rho_{min} \le 1.

6. Wiener Filter - Complex-Valued Case

- In many practical applications, the random signals are complex-valued; for example, the baseband signals of QPSK and QAM in data transmission systems.

- In the Wiener filter for processing complex-valued random signals, the tap weights are assumed to be complex variables. The estimation error e(n) is then also complex-valued. We may write
    \xi = E[|e(n)|^2] = E[e(n)e^*(n)]          ...(30)

- The tap weight w_i is expressed as
    w_i = w_{i,R} + j\, w_{i,I}          ...(31)
The gradient of a function with respect to a complex variable w = w_R + j w_I is defined as
    \nabla^c_{w} \equiv \frac{\partial}{\partial w_R} + j\frac{\partial}{\partial w_I}          ...(32)

- The optimum tap weights of the complex-valued Wiener filter are obtained from the criterion
    \nabla^c_{w_i}\xi = 0,   for i = 0, 1, 2, ..., N-1
That is,
    \frac{\partial\xi}{\partial w_{i,R}} = 0   and   \frac{\partial\xi}{\partial w_{i,I}} = 0          ...(33)

- Since \xi = E[e(n)e^*(n)], we have
    \nabla^c_{w_i}\xi = E[e(n)\,\nabla^c_{w_i}e^*(n) + e^*(n)\,\nabla^c_{w_i}e(n)]          ...(34)
Noting that
    e(n) = d(n) - \sum_{k=0}^{N-1} w_k\, x(n-k)          ...(35)
    \nabla^c_{w_i} e(n) = -x(n-i)\,\nabla^c_{w_i} w_i          ...(36)
    \nabla^c_{w_i} e^*(n) = -x^*(n-i)\,\nabla^c_{w_i} w_i^*          ...(37)
and applying the definition (32), we obtain

    \nabla^c_{w_i} w_i = \frac{\partial w_i}{\partial w_{i,R}} + j\frac{\partial w_i}{\partial w_{i,I}} = 1 + j(j) = 0
and
    \nabla^c_{w_i} w_i^* = \frac{\partial w_i^*}{\partial w_{i,R}} + j\frac{\partial w_i^*}{\partial w_{i,I}} = 1 + j(-j) = 2

Thus, equation (34) becomes
    \nabla^c_{w_i}\xi = -2 E[e(n)\,x^*(n-i)]          ...(38)
The optimum filter tap weights are obtained when \nabla^c_{w_i}\xi = 0. This gives
    E[e_0(n)\,x^*(n-i)] = 0,   for i = 0, 1, 2, ..., N-1          ...(39)
where e_0(n) is the optimum estimation error.

- Equation (39) is the "principle of orthogonality" for the case of complex-valued signals in the Wiener filter.

- The Wiener-Hopf equation can be derived as follows. Define
    x(n) \equiv [x(n) \;\; x(n-1) \;\; \cdots \;\; x(n-N+1)]^T          ...(40)
and
    w \equiv [w_0^* \;\; w_1^* \;\; \cdots \;\; w_{N-1}^*]^T          ...(41)
We can also write
    x^H(n) = [x^*(n) \;\; x^*(n-1) \;\; \cdots \;\; x^*(n-N+1)]          ...(42)
and
    w^H = [w_0 \;\; w_1 \;\; \cdots \;\; w_{N-1}]          ...(43)
where H denotes the complex-conjugate transpose, or Hermitian. Noting that
    e_0(n) = d(n) - w_0^H x(n)          ...(44)
and that the conjugate of equation (39), stacked in vector form, gives
    E[e_0^*(n)\,x(n)] = 0          ...(45)
we have
    E[x(n)\,(d(n) - w_0^H x(n))^*] = 0          ...(46)
and then
    R\, w_0 = p          ...(47)
where R = E[x(n)x^H(n)] and p = E[x(n)d^*(n)].

- Equation (47) is the Wiener-Hopf equation for the case of complex-valued signals.

- The minimum performance function is then expressed as
    \xi_{min} = E[|d(n)|^2] - w_0^H R\, w_0          ...(48)

- Remarks: in the derivation of the above Wiener filter we have made the assumption that the filter is causal and has a finite impulse response, for both the real-valued and complex-valued cases.
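To make equation (47) concrete, here is a small NumPy sketch (my addition, not from the notes); the two complex model taps (1+1j) and (0.5-0.2j), the noise level, and the sample size are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000
# Unit-variance complex white input x(n).
x = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
d = np.zeros(M, dtype=complex)
d[1:] = (1 + 1j) * x[1:] + (0.5 - 0.2j) * x[:-1] + noise[1:]   # assumed model

X = np.stack([x[1:], x[:-1]])                 # x(n) = [x(n), x(n-1)]^T
R = X @ X.conj().T / (M - 1)                  # R = E[x(n) x^H(n)]
p = X @ d[1:].conj() / (M - 1)                # p = E[x(n) d*(n)]
w0 = np.linalg.solve(R, p)                    # Eq. (47); note Eq. (41) stores
print(w0)                                     # conjugated weights: ~ [1-1j, 0.5+0.2j]

y = w0.conj() @ X                             # y(n) = w_0^H x(n), Eq. (44)
print(np.mean(np.abs(d[1:] - y) ** 2))        # ~ noise power 0.02
```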

7. Unconstrained Wiener Filters

- The block diagram of the Wiener filter is the one shown in Fig. 3.1. We now assume that the filter W(z) may be non-causal and/or IIR.

- We consider only the case where the signals and the system parameters are real-valued. Moreover, we assume that the complex variable z remains on the unit circle, i.e. |z| = 1, so that z^* = z^{-1}.

7.1 Performance Function

- The performance function is defined as
    \xi = E[e^2(n)]
where e(n) = d(n) - y(n). In terms of autocorrelation and cross-correlation functions, we have
    \xi = E[d^2(n)] + E[y^2(n)] - 2E[y(n)d(n)]
        = \phi_{dd}(0) + \phi_{yy}(0) - 2\phi_{yd}(0)          ...(49)

Using the inverse Z-transform relations, we have
    \xi = \phi_{dd}(0) + \frac{1}{2\pi j}\oint_c \Phi_{yy}(z)\,\frac{dz}{z} - 2\cdot\frac{1}{2\pi j}\oint_c \Phi_{yd}(z)\,\frac{dz}{z}          ...(50)

Since Y(z) = X(z)W(z), we have \Phi_{yd}(z) = W(z)\Phi_{xd}(z).
If z is restricted to the unit circle in the z-plane, then
    \Phi_{yy}(z) = |W(z)|^2\,\Phi_{xx}(z),   with |W(z)|^2 = W(z)W^*(z) and W^*(z) = W(z^{-1})

Using these relations in equation (50), we obtain
    \xi = \phi_{dd}(0) + \frac{1}{2\pi j}\oint_c W(z)W(z^{-1})\Phi_{xx}(z)\,\frac{dz}{z} - 2\cdot\frac{1}{2\pi j}\oint_c W(z)\Phi_{xd}(z)\,\frac{dz}{z}
        = \phi_{dd}(0) + \frac{1}{2\pi j}\oint_c \big[W(z)W(z^{-1})\Phi_{xx}(z) - 2W(z)\Phi_{xd}(z)\big]\,\frac{dz}{z}          ...(51)
where the contour of integration, c, is the unit circle.

- The performance function given in equation (51) covers both IIR and FIR filters. This is the most general form of the Wiener filter performance function.

Example 3.2

Starting from the general expression for \xi, eq. (51), we demonstrate the special case of a real-valued FIR system.

Consider the case where the Wiener filter is an N-tap FIR filter with system function
    W(z) = \sum_{l=0}^{N-1} w_l\, z^{-l}          ...(52)

Using eq. (52) in the expression for \xi given by eq. (51), we get
    \xi = \phi_{dd}(0) + \frac{1}{2\pi j}\oint_c \Big(\sum_{l=0}^{N-1} w_l z^{-l}\Big)\Big(\sum_{m=0}^{N-1} w_m z^{m}\Big)\Phi_{xx}(z)\,\frac{dz}{z} - 2\cdot\frac{1}{2\pi j}\oint_c \Big(\sum_{l=0}^{N-1} w_l z^{-l}\Big)\Phi_{xd}(z)\,\frac{dz}{z}
        = \phi_{dd}(0) + \sum_{l=0}^{N-1}\sum_{m=0}^{N-1} w_l w_m\,\frac{1}{2\pi j}\oint_c \Phi_{xx}(z)\, z^{m-l-1}\,dz - 2\sum_{l=0}^{N-1} w_l\,\frac{1}{2\pi j}\oint_c \Phi_{xd}(z)\, z^{-l-1}\,dz

Using the inverse z-transform relation (and the symmetry of \phi_{xx} for real-valued signals), this gives
    \xi = \phi_{dd}(0) + \sum_{l=0}^{N-1}\sum_{m=0}^{N-1} w_l w_m\,\phi_{xx}(l-m) - 2\sum_{l=0}^{N-1} w_l\,\phi_{xd}(-l)

Now using the notations
    \phi_{dd}(0) = E[d^2(n)],   \phi_{xd}(-l) = p_l,   \phi_{xx}(l-m) = \phi_{xx}(m-l) = r_{lm}
we obtain the expression
    \xi = E[d^2(n)] - 2\sum_{l=0}^{N-1} p_l w_l + \sum_{l=0}^{N-1}\sum_{m=0}^{N-1} w_l w_m r_{lm}
which agrees with eq. (9).

Note that P = E[X(n)d(n)], and thus p_l = E[x(n-l)d(n)].
By definition, \phi_{xy}(k) = E[x(n)\,y^*(n-k)], and therefore
    \phi_{xd}(-l) = E[x(n)\,d(n+l)] = E[x(n-l)\,d(n)]

Example 3.3  Modelling with an IIR filter

The plant is modelled by an IIR Wiener filter with
    W(z) = \frac{w_0}{1 - w_1 z^{-1}}
We assume that all the signals and system parameters involved are real-valued.

    x(n): input sequence, a white process with zero mean and \sigma_x^2 = 1
    v(n): additive noise; x(n) and v(n) are uncorrelated
Thus \Phi_{xx}(z) = 1 and \Phi_{vx}(z) = 0.
    G(z): system function of the unknown plant, so that
    \Phi_{xd}(z) = G(z^{-1})\,\Phi_{xx}(z)

[Figure: x(n) drives the unknown plant G(z) (output plus v(n) gives d(n)) and the IIR Wiener filter w_0/(1 - w_1 z^{-1}) (output y(n)); e(n) = d(n) - y(n)]

Using equation (51), the performance function becomes
    \xi = \phi_{dd}(0) + \frac{w_0^2}{1 - w_1^2} - 2 w_0 \sum_{k=0}^{\infty} g_k\, w_1^k
where g_k is the impulse response of G(z); the first contour integral evaluates to w_0^2/(1 - w_1^2) for |w_1| < 1, and the second to w_0 \sum_k g_k w_1^k.

- We may find that \xi is no longer a quadratic function of the filter parameters; there can be many local minima, and searching for the global minimum of \xi may not be a trivial task.
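As an illustration only (my addition; the plant G(z) = 1 + 0.8 z^{-1} and the noise variance are assumptions, and the cost expression used is the reconstructed one above), the sketch below evaluates \xi(w_0, w_1) on a grid and locates its minimum numerically:

```python
import numpy as np

# Assumed plant g_k = [1, 0.8], white unit-variance x(n), sigma_v^2 = 0.1.
phi_dd0 = 1.0 + 0.8**2 + 0.1        # E[d^2(n)] for this assumed setup
g = np.array([1.0, 0.8])            # assumed plant impulse response

def xi(w0, w1):
    # xi = phi_dd(0) + w0^2/(1 - w1^2) - 2*w0*sum_k g_k*w1^k, valid for |w1| < 1
    s = sum(gk * w1**k for k, gk in enumerate(g))
    return phi_dd0 + w0**2 / (1.0 - w1**2) - 2.0 * w0 * s

w0_grid = np.linspace(-3, 3, 121)
w1_grid = np.linspace(-0.95, 0.95, 121)
cost = np.array([[xi(a, b) for b in w1_grid] for a in w0_grid])
i, j = np.unravel_index(np.argmin(cost), cost.shape)
print(w0_grid[i], w1_grid[j], cost[i, j])   # numerically located minimum of xi
```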

7.2 Optimum Transfer Function

- From the principle of orthogonality, we have
    E[e_0(n)\,x(n-i)] = 0,   for all i          ...(52)
where
    e_0(n) = d(n) - \sum_{l=-\infty}^{\infty} w_{0,l}\, x(n-l)          ...(53)
Here we assume that all the signals involved are real-valued.

- Combining eqs. (52) and (53), we obtain
    \sum_{l=-\infty}^{\infty} w_{0,l}\, E[x(n-l)x(n-i)] = E[d(n)x(n-i)]          ...(54)
With E[x(n-l)x(n-i)] = \phi_{xx}(i-l) and E[d(n)x(n-i)] = \phi_{dx}(i), taking the Z-transform of both sides of equation (54) gives
    W_0(z)\,\Phi_{xx}(z) = \Phi_{dx}(z)          ...(55)
This is referred to as the Wiener-Hopf equation of unconstrained Wiener filtering.

- The optimum unconstrained Wiener filter is given by
    W_0(z) = \frac{\Phi_{dx}(z)}{\Phi_{xx}(z)}          ...(56)
and
    W_0(e^{j\omega}) = \frac{\Phi_{dx}(e^{j\omega})}{\Phi_{xx}(e^{j\omega})}          ...(57)
This is the frequency response of the Wiener filter, where
    \Phi_{dx}(e^{j\omega}): cross-power spectral density of d(n) and x(n)
    \Phi_{xx}(e^{j\omega}): power spectral density of x(n)
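A hedged sketch (my addition, not from the notes) of how eq. (57) can be used in practice: the auto- and cross-spectra are estimated with SciPy's Welch-based routines for an assumed FIR plant. Spectral-estimator conjugation conventions must be checked against the definition of \Phi_{dx}; for this real-valued modelling setup the ratio below recovers the plant frequency response.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
M = 1 << 16
x = rng.standard_normal(M)
# Assumed plant d(n) = 0.5 x(n) + 0.4 x(n-1) - 0.3 x(n-2) + small noise.
d = signal.lfilter([0.5, 0.4, -0.3], [1.0], x) + 0.05 * rng.standard_normal(M)

f, Pxx = signal.welch(x, nperseg=1024)          # estimate of Phi_xx(e^jw)
_, Pcross = signal.csd(x, d, nperseg=1024)      # cross-spectrum of x and d
W0 = Pcross / Pxx                               # estimate of W_0(e^jw), Eq. (57)

# For this modelling setup W_0 should approximate the plant frequency response.
_, G = signal.freqz([0.5, 0.4, -0.3], worN=f * 2 * np.pi)
print(np.max(np.abs(W0 - G)))                   # expected to be small
```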

8. Application of Wiener Filters I: Modelling

- Consider the modelling problem depicted in Fig. 3.8.

[Fig. 3.8: \mu(n) drives the plant G(z); the plant output plus \nu_0(n) gives d(n); the Wiener filter input is x(n) = \mu(n) + \nu_i(n) and its output is y(n); e(n) = d(n) - y(n)]

- \mu(n), \nu_0(n) and \nu_i(n) are assumed to be stationary, zero-mean and uncorrelated with one another.

- The input to the Wiener filter is given by
    x(n) = \mu(n) + \nu_i(n)          ...(58)
and the desired output is given by
    d(n) = g_n * \mu(n) + \nu_0(n)          ...(59)
where g_n is the impulse response of the plant.

- The optimum unconstrained Wiener filter transfer function is
    W_0(z) = \frac{\Phi_{dx}(z)}{\Phi_{xx}(z)}          ...(60)

Note that
    \phi_{xx}(k) = E[x(n)x(n-k)]
                 = E[(\mu(n)+\nu_i(n))(\mu(n-k)+\nu_i(n-k))]
                 = E[\mu(n)\mu(n-k)] + E[\mu(n)\nu_i(n-k)] + E[\nu_i(n)\mu(n-k)] + E[\nu_i(n)\nu_i(n-k)]
                 = \phi_{\mu\mu}(k) + \phi_{\nu_i\nu_i}(k)          ...(61)

Taking the Z-transform of both sides of eq. (61), we get
    \Phi_{xx}(z) = \Phi_{\mu\mu}(z) + \Phi_{\nu_i\nu_i}(z)          ...(62)

- To calculate W_0(z) = \Phi_{dx}(z)/\Phi_{xx}(z), we must also find an expression for \Phi_{dx}(z).

We can show that
    \Phi_{dx}(z) = \Phi_{d'\mu}(z)
where d'(n) is the plant output when the additive noise \nu_0(n) is excluded. Moreover, we have
    \Phi_{d'\mu}(z) = G(z)\,\Phi_{\mu\mu}(z)
Thus
    \Phi_{dx}(z) = G(z)\,\Phi_{\mu\mu}(z)
and we obtain
    W_0(z) = \frac{\Phi_{\mu\mu}(z)}{\Phi_{\mu\mu}(z) + \Phi_{\nu_i\nu_i}(z)}\, G(z)          ...(63)

- We note that W_0(z) is equal to G(z) only when \Phi_{\nu_i\nu_i}(z) is equal to zero, that is, when \nu_i(n) is zero for all values of n. The noise sequence \nu_i(n) may be thought of as being introduced by a transducer that is used to obtain samples of the plant input.

- Replacing z by e^{j\omega} in equation (63), we obtain
    W_0(e^{j\omega}) = \frac{\Phi_{\mu\mu}(e^{j\omega})}{\Phi_{\mu\mu}(e^{j\omega}) + \Phi_{\nu_i\nu_i}(e^{j\omega})}\, G(e^{j\omega})          ...(64)

- Define
    K(e^{j\omega}) \equiv \frac{\Phi_{\mu\mu}(e^{j\omega})}{\Phi_{\mu\mu}(e^{j\omega}) + \Phi_{\nu_i\nu_i}(e^{j\omega})}          ...(65)
so that W_0(e^{j\omega}) = K(e^{j\omega})\,G(e^{j\omega}).

With some mathematical manipulation, we can find the minimum mean-square error, \xi_{min}, expressed as
    \xi_{min} = \phi_{\nu_0\nu_0}(0) + \frac{1}{2\pi}\int_{-\pi}^{\pi} \big(1 - K(e^{j\omega})\big)\,|G(e^{j\omega})|^2\,\Phi_{\mu\mu}(e^{j\omega})\, d\omega          ...(66)

- The best performance that one can expect from the unconstrained Wiener filter is
    \xi_{min} = \phi_{\nu_0\nu_0}(0)
and this happens when \nu_i(n) = 0.

- The Wiener filter attempts to estimate the part of the target signal d(n) that is correlated with its own input x(n), and leaves the remaining part of d(n) (i.e. \nu_0(n)) unaffected. This is known as "the principle of correlation cancellation".
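A small numerical sketch (my addition; the plant G(z), the flat spectra and the noise variances are hypothetical) that evaluates equations (63)-(66) on a frequency grid:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
z = np.exp(1j * w)
G = 1.0 / (1.0 - 0.5 / z)                 # assumed plant G(z) = 1/(1 - 0.5 z^-1)
Phi_mumu = np.ones_like(w)                # white mu(n), variance 1
Phi_nini = 0.2 * np.ones_like(w)          # transducer noise nu_i(n), variance 0.2
phi_n0n0_0 = 0.1                          # variance of nu_0(n)

K = Phi_mumu / (Phi_mumu + Phi_nini)      # Eq. (65)
W0 = K * G                                # Eq. (64): W0 differs from G since nu_i != 0
# (1/2pi) * integral over [-pi, pi) equals the mean over the uniform grid.
xi_min = phi_n0n0_0 + np.mean((1 - K) * np.abs(G) ** 2 * Phi_mumu)   # Eq. (66)
print(xi_min)                             # exceeds phi_n0n0(0) because nu_i(n) != 0
```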

9. Application of Wiener Filters II: Inverse Modelling

- Fig. 3.9 depicts a channel equalization scenario.

[Fig. 3.9: s(n) → channel H(z) → adder with channel noise \nu(n) → x(n) → equalizer W(z) → y(n); the desired signal is d(n) = s(n); e(n) = d(n) - y(n)]

    s(n): data samples
    H(z): system function of the communication channel
    \nu(n): additive channel noise, uncorrelated with s(n)
    W(z): equalizer used to process the received noisy signal samples so as to recover the original data samples

- When the additive noise at the channel output is absent, the equalizer has the following trivial solution:
    W_0(z) = \frac{1}{H(z)}          ...(67)
This implies that y(n) = s(n) and thus e(n) = 0 for all n. (Demonstration: Fig. 3.10.)

- When the channel noise \nu(n) is non-zero, the solution provided by equation (67) may not be optimal. Now
    x(n) = h_n * s(n) + \nu(n)          ...(68)
and
    d(n) = s(n)          ...(69)
where h_n is the impulse response of the channel H(z). From equation (68), we obtain
    \Phi_{xx}(z) = \Phi_{ss}(z)\,|H(z)|^2 + \Phi_{\nu\nu}(z)          ...(70)
Also,
    \Phi_{dx}(z) = \Phi_{sx}(z) = H(z^{-1})\,\Phi_{ss}(z)          ...(71)
With |z| = 1, we may also write
    \Phi_{dx}(z) = H^*(z)\,\Phi_{ss}(z)          ...(72)

and then
    W_0(z) = \frac{H^*(z)\,\Phi_{ss}(z)}{|H(z)|^2\,\Phi_{ss}(z) + \Phi_{\nu\nu}(z)}          ...(73)
This is the general solution to the equalization problem when there is no constraint on the equalizer length and, also, the equalizer is allowed to be non-causal.

- Equation (73) can be rewritten as
    W_0(z) = \frac{1}{1 + \dfrac{\Phi_{\nu\nu}(z)}{\Phi_{ss}(z)\,|H(z)|^2}} \cdot \frac{1}{H(z)}          ...(74)

- Let z = e^{j\omega} and define the parameter
    \rho(e^{j\omega}) \equiv \frac{\Phi_{ss}(e^{j\omega})\,|H(e^{j\omega})|^2}{\Phi_{\nu\nu}(e^{j\omega})}          ...(75)
where \Phi_{ss}(e^{j\omega})|H(e^{j\omega})|^2 and \Phi_{\nu\nu}(e^{j\omega}) are the signal power spectral density and the noise power spectral density, respectively, at the channel output. We obtain
    W_0(e^{j\omega}) = \frac{\rho(e^{j\omega})}{1 + \rho(e^{j\omega})} \cdot \frac{1}{H(e^{j\omega})}          ...(76)
Note that \rho(e^{j\omega}) is a non-negative quantity, since it is the signal-to-noise power spectral density ratio at the equalizer input. Also,
    0 \le \frac{\rho(e^{j\omega})}{1 + \rho(e^{j\omega})} \le 1          ...(77)

- Cancellation of ISI and noise enhancement

Consider the optimized equalizer with frequency response
    W_0(e^{j\omega}) = \frac{\rho(e^{j\omega})}{1 + \rho(e^{j\omega})} \cdot \frac{1}{H(e^{j\omega})}
In the frequency regions where the noise is almost absent, the value of \rho(e^{j\omega}) is very large and hence
    W_0(e^{j\omega}) \approx \frac{1}{H(e^{j\omega})}
The ISI will then be eliminated without any significant enhancement of the noise. On the other hand, in the frequency regions where the noise level is high, the value of \rho(e^{j\omega}) is not large and hence the equalizer does not approximate the channel inverse well. This, of course, is to prevent noise enhancement.
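The following sketch (my addition; the two-tap channel and the flat spectra are assumptions) evaluates equation (76) and illustrates how the equalizer backs off from the exact channel inverse where \rho(e^{j\omega}) is low:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
H = 1.0 + 0.9 * np.exp(-1j * w)           # assumed channel H(z) = 1 + 0.9 z^-1
Phi_ss = np.ones_like(w)                  # white data PSD
Phi_vv = 0.05 * np.ones_like(w)           # channel-noise PSD

rho = Phi_ss * np.abs(H) ** 2 / Phi_vv    # Eq. (75): SNR density at equalizer input
W0 = (rho / (1 + rho)) / H                # Eq. (76): optimum unconstrained equalizer

# Near w = pi the channel has a deep null, rho is low and |W0| << |1/H|,
# i.e. the equalizer does not fully invert the channel (no noise enhancement).
print(np.abs(W0[0]), np.abs(1 / H[0]))                    # at w = -pi
print(np.abs(W0[len(w) // 2]), np.abs(1 / H[len(w) // 2]))  # at w = 0: nearly equal
```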

10. Noise Cancellation

- Fig. 3.12 shows a typical noise canceller block diagram.

[Fig. 3.12: signal source s(n) and noise source \nu(n); the primary input is d(n) = s(n) + g_n * \nu(n) (\nu(n) filtered by G(z)); the reference input is x(n) = \nu(n) + h_n * s(n) (s(n) filtered by H(z)); x(n) drives the Wiener filter W(z), whose output is subtracted from d(n) to give e(n)]

    s(n): signal source
    \nu(n): noise source; s(n) and \nu(n) are uncorrelated
    H(z), G(z): two system functions used to mix the two source signals to form x(n) and d(n)
    d(n): primary input
    x(n): reference input

    x(n) = \nu(n) + h_n * s(n)          ...(78)
    d(n) = s(n) + g_n * \nu(n)          ...(79)

- Since s(n) and \nu(n) are uncorrelated with each other, we obtain
    \Phi_{dx}(z) = \Phi^{s}_{dx}(z) + \Phi^{\nu}_{dx}(z)          ...(81)
where \Phi^{s}_{dx}(z) is \Phi_{dx}(z) when \nu(n) = 0 for all n, and \Phi^{\nu}_{dx}(z) is \Phi_{dx}(z) when s(n) = 0 for all n. Because s(n) and \nu(n) are uncorrelated, their contributions to \Phi_{dx}(z) can be considered separately.

Thus, we obtain
    \Phi^{s}_{dx}(z) = H^*(z)\,\Phi_{ss}(z)          ...(82)

and
    \Phi^{\nu}_{dx}(z) = G(z)\,\Phi_{\nu\nu}(z)          ...(83)
Recall that |z| = 1. Finally, we get
    \Phi_{dx}(z) = H^*(z)\,\Phi_{ss}(z) + G(z)\,\Phi_{\nu\nu}(z)          ...(84)
and
    W_0(z) = \frac{H^*(z)\,\Phi_{ss}(z) + G(z)\,\Phi_{\nu\nu}(z)}{|H(z)|^2\,\Phi_{ss}(z) + \Phi_{\nu\nu}(z)}          ...(85)

- Equation (85) for noise cancellation may be thought of as a generalization of the results we obtained for the modelling and inverse-modelling scenarios, given by equations (63) and (73).

- To minimize the mean-square value of the output error, we must strike a balance between noise cancellation and signal cancellation at the output of the noise canceller. Cancellation of the noise \nu(n) occurs when the Wiener filter W(z) is chosen to be close to G(z), and cancellation of the signal s(n) occurs when the Wiener filter W(z) is close to the inverse of H(z). In this sense, we may note that the noise canceller treats s(n) and \nu(n) without making any distinction between them.

- Define
    \rho_{pri}(e^{j\omega}): signal-to-noise PSD ratio at the primary input
    \rho_{ref}(e^{j\omega}): signal-to-noise PSD ratio at the reference input
    \rho_{out}(e^{j\omega}): signal-to-noise PSD ratio at the output

    \rho_{pri}(e^{j\omega}) = \frac{\Phi_{ss}(e^{j\omega})}{|G(e^{j\omega})|^2\,\Phi_{\nu\nu}(e^{j\omega})}          ...(86)
    \rho_{ref}(e^{j\omega}) = \frac{|H(e^{j\omega})|^2\,\Phi_{ss}(e^{j\omega})}{\Phi_{\nu\nu}(e^{j\omega})}          ...(87)

To calculate \rho_{out}(e^{j\omega}), we note that s(n) reaches the canceller output through two routes: one direct and one through the cascade of H(z) and W(z):
    \Phi^{e}_{s}(e^{j\omega}) = |1 - H(e^{j\omega})W(e^{j\omega})|^2\,\Phi_{ss}(e^{j\omega})          ...(88)
Similarly, \nu(n) reaches the output through the routes G(z) and W(z):
    \Phi^{e}_{\nu}(e^{j\omega}) = |G(e^{j\omega}) - W(e^{j\omega})|^2\,\Phi_{\nu\nu}(e^{j\omega})          ...(89)
Replacing W(e^{j\omega}) by W_0(e^{j\omega}), we obtain
    \Phi^{e}_{s}(e^{j\omega}) = \frac{|1 - G(e^{j\omega})H(e^{j\omega})|^2\,\Phi^2_{\nu\nu}(e^{j\omega})}{\big(|H(e^{j\omega})|^2\Phi_{ss}(e^{j\omega}) + \Phi_{\nu\nu}(e^{j\omega})\big)^2}\,\Phi_{ss}(e^{j\omega})
and
    \Phi^{e}_{\nu}(e^{j\omega}) = \frac{|H(e^{j\omega})|^2\,|1 - G(e^{j\omega})H(e^{j\omega})|^2\,\Phi^2_{ss}(e^{j\omega})}{\big(|H(e^{j\omega})|^2\Phi_{ss}(e^{j\omega}) + \Phi_{\nu\nu}(e^{j\omega})\big)^2}\,\Phi_{\nu\nu}(e^{j\omega})

Hence, \rho_{out}(e^{j\omega}) can now be obtained as
    \rho_{out}(e^{j\omega}) = \frac{\Phi^{e}_{s}(e^{j\omega})}{\Phi^{e}_{\nu}(e^{j\omega})} = \frac{\Phi_{\nu\nu}(e^{j\omega})}{|H(e^{j\omega})|^2\,\Phi_{ss}(e^{j\omega})} = \frac{1}{\rho_{ref}(e^{j\omega})}          ...(90)

This is known as "power inversion" (Widrow et al., 1975). This result shows that the noise canceller works better when \rho_{ref}(e^{j\omega}) is low.
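A short check (my addition; the mixing filters H(z), G(z) and the flat PSDs are assumed) that the optimum canceller of equation (85) indeed satisfies the power-inversion relation (90):

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 2048, endpoint=False)
H = 0.5 + 0.3 * np.exp(-1j * w)           # assumed signal leakage path into x(n)
G = 1.0 - 0.7 * np.exp(-1j * w)           # assumed noise path into d(n)
Phi_ss, Phi_vv = 1.0, 0.4                 # assumed flat PSDs of s(n) and nu(n)

W0 = (np.conj(H) * Phi_ss + G * Phi_vv) / (np.abs(H)**2 * Phi_ss + Phi_vv)  # Eq. (85)
Phi_es = np.abs(1 - H * W0)**2 * Phi_ss   # signal component at the output, Eq. (88)
Phi_ev = np.abs(G - W0)**2 * Phi_vv       # noise component at the output, Eq. (89)

rho_out = Phi_es / Phi_ev
rho_ref = np.abs(H)**2 * Phi_ss / Phi_vv
print(np.max(np.abs(rho_out * rho_ref - 1)))   # ~ 0: confirms rho_out = 1/rho_ref
```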

Example 3.6  Two Omni-directional Antennas in a Receiver

    desired signal:      s(n) = \alpha(n)\cos(n\omega_0)
    interferer (jammer): \nu(n) = \beta(n)\cos(n\omega_0)

- s(n) arrives at the receiver (antennas) from the direction perpendicular to the line connecting A and B. \nu(n) arrives at an angle \theta_0 with respect to the direction of s(n).

- \alpha(n) and \beta(n) are narrowband baseband signals. Thus s(n) and \nu(n) can be treated as narrowband signals concentrated around \omega = \omega_0.

- Since s(n) and \nu(n) may be treated as single tones, a filter with two degrees of freedom is sufficient for optimal filtering.

- The Wiener filter coefficients can be determined as follows.

s(n) arrives at the same time at both omni-antennas. \nu(n) arrives at B first and arrives at A with a delay
    \delta_0 = \frac{l_0 \sin\theta_0}{c}
When this delay is normalized by the time step T, which corresponds to one increment of the time index n, it becomes
    \delta_0 = \frac{l_0 \sin\theta_0}{cT}

[Figure: antennas A (primary output d(n)) and B (reference output x(n)) separated by l_0; s(n) arrives broadside (90 degrees), \nu(n) at angle \theta_0; x(n) is weighted by w_0 and its 90-degree phase-shifted version x~(n) by w_1; y(n) = w_0 x(n) + w_1 x~(n) is subtracted from d(n) to form e(n)]

Thus
    d(n) = \alpha(n)\cos(n\omega_0) + \beta(n)\cos[(n - \delta_0)\omega_0]
    x(n) = \alpha(n)\cos(n\omega_0) + \beta(n)\cos(n\omega_0)
    x~(n) = \alpha(n)\sin(n\omega_0) + \beta(n)\sin(n\omega_0)
and
    R = [ E[x^2(n)]      E[x(n)x~(n)] ]
        [ E[x~(n)x(n)]   E[x~^2(n)]   ]

With some mathematical manipulation, we obtain
    R = \frac{\sigma_\alpha^2 + \sigma_\beta^2}{2}\,[1 \;\; 0 \,;\; 0 \;\; 1]
where \sigma_\alpha^2 and \sigma_\beta^2 are the variances of \alpha(n) and \beta(n), respectively. Also,
    P = [E[d(n)x(n)] \;\; E[d(n)x~(n)]]^T = \frac{1}{2}\,[\sigma_\alpha^2 + \sigma_\beta^2\cos(\omega_0\delta_0) \;\; \sigma_\beta^2\sin(\omega_0\delta_0)]^T

The Wiener-Hopf equation R W_0 = P then gives
    W_0 = [w_0 \;\; w_1]^T = \frac{1}{\sigma_\alpha^2 + \sigma_\beta^2}\,[\sigma_\alpha^2 + \sigma_\beta^2\cos(\omega_0\delta_0) \;\; \sigma_\beta^2\sin(\omega_0\delta_0)]^T

The signal-to-noise ratio at the output is equal to \sigma_\beta^2/\sigma_\alpha^2, while the signal-to-noise ratio at the reference input is equal to \sigma_\alpha^2/\sigma_\beta^2, in agreement with the power-inversion result of equation (90).
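A Monte-Carlo sketch (my addition; \sigma_\alpha^2, \sigma_\beta^2, \omega_0 and \delta_0 are arbitrary choices, and \alpha(n), \beta(n) are generated as white sequences, which is sufficient for the second-order statistics used here) that reproduces the closed-form W_0 above:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 400_000
w0_c, d0 = 0.9, 0.6                             # assumed omega_0 and delay delta_0
n = np.arange(M)
alpha = rng.standard_normal(M)                  # sigma_alpha^2 = 1
beta = 2.0 * rng.standard_normal(M)             # sigma_beta^2 = 4

d = alpha * np.cos(w0_c * n) + beta * np.cos((n - d0) * w0_c)   # primary input
x = (alpha + beta) * np.cos(w0_c * n)                           # reference input
xq = (alpha + beta) * np.sin(w0_c * n)          # 90-degree phase-shifted reference

X = np.stack([x, xq])
R = X @ X.T / M                                 # sample autocorrelation matrix
P = X @ d / M                                   # sample cross-correlation vector
W = np.linalg.solve(R, P)                       # empirical Wiener solution

# Closed-form solution from the text for comparison.
sa2, sb2 = 1.0, 4.0
W_theory = np.array([sa2 + sb2 * np.cos(w0_c * d0),
                     sb2 * np.sin(w0_c * d0)]) / (sa2 + sb2)
print(W, W_theory)                              # the two should nearly coincide
```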