SSPre-PFIEV-Chapter4-2012






Chapter 4: Mean Square Estimation


    4. Minimum Mean Square Estimation (1)

Given some information that is related to an unknown quantity of interest, the problem is to obtain a good estimate for the unknown in terms of the observed data.

Suppose $X_1, X_2, \ldots, X_n$ represent a sequence of random variables for which one set of observations is available, and Y represents an unknown random variable. The problem is to obtain a good estimate for Y in terms of the observations $X_1, X_2, \ldots, X_n$. Let

$\hat{Y} = \varphi(X_1, X_2, \ldots, X_n) = \varphi(X)$  (4-1)

represent such an estimate for Y. Note that $\varphi(\cdot)$ can be a linear or a nonlinear function of the observations $X_1, X_2, \ldots, X_n$. Clearly,

$\varepsilon = Y - \hat{Y} = Y - \varphi(X)$  (4-2)

represents the error in the above estimate, and $|\varepsilon|^2$ the square of the error.


    4. Minimum Mean Square Estimation (2)

Since $\varepsilon$ is a random variable, $E\{|\varepsilon|^2\}$ represents the mean square error. One strategy to obtain a good estimator would be to minimize the mean square error by varying over all possible forms of $\varphi(\cdot)$, and this procedure gives rise to the Minimization of the Mean Square Error (MMSE) criterion for estimation. Thus under the MMSE criterion, the estimator $\varphi(\cdot)$ is chosen such that the mean square error $E\{|\varepsilon|^2\}$ is at its minimum.

Next we show that the conditional mean of Y given X is the best estimator in the above sense.

Theorem 1: Under the MMSE criterion, the best estimator for the unknown Y in terms of $X_1, X_2, \ldots, X_n$ is given by the conditional mean of Y given X. Thus

$\hat{Y} = \varphi(X) = E\{Y \mid X\}.$  (4-3)


    4. Minimum Mean Square Estimation (3)

Proof: Let $\hat{Y} = \varphi(X)$ represent an estimate of Y in terms of $X = (X_1, X_2, \ldots, X_n)$. Then the error is $\varepsilon = Y - \hat{Y}$, and the mean square error is given by

$\sigma^2 = E\{|\varepsilon|^2\} = E\{|Y - \hat{Y}|^2\} = E\{|Y - \varphi(X)|^2\}.$  (4-4)

Since

$E_z\{z\} = E_X[\,E_z\{z \mid X\}\,],$  (4-5)

we can rewrite (4-4) as

$\sigma^2 = E_X[\,E_Y\{|Y - \varphi(X)|^2 \mid X\}\,],$

where the inner expectation is with respect to Y, and the outer one is with respect to X. Thus

$\sigma^2 = E_X[\,E\{|Y - \varphi(X)|^2 \mid X\}\,] = \int_x E\{|Y - \varphi(X)|^2 \mid X = x\}\, f_X(x)\, dx.$  (4-6)


    4. Minimum Mean Square Estimation (4)

To obtain the best estimator $\varphi$, we need to minimize $\sigma^2$ in (4-6) with respect to $\varphi$. In (4-6), since $f_X(x) \ge 0$, $E\{|Y - \varphi(X)|^2 \mid X\} \ge 0$, and the variable $\varphi$ appears only in the integrand term, minimization of the mean square error $\sigma^2$ in (4-6) with respect to $\varphi$ is equivalent to minimization of $E\{|Y - \varphi(X)|^2 \mid X\}$ with respect to $\varphi$.

Since X is fixed at some value, $\varphi(X)$ is no longer random, and hence minimization of $E\{|Y - \varphi(X)|^2 \mid X\}$ is equivalent to

$\frac{\partial}{\partial \varphi} E\{|Y - \varphi(X)|^2 \mid X\} = 0.$  (4-7)

This gives

$E\{Y - \varphi(X) \mid X\} = 0$

or

$E\{Y \mid X\} - E\{\varphi(X) \mid X\} = 0.$  (4-8)


    4. Minimum Mean Square Estimation (5)

But

$E\{\varphi(X) \mid X\} = \varphi(X),$  (4-9)

since when $X = x$, $\varphi(X)$ is a fixed number $\varphi(x)$. Using (4-9) in (4-8) we get the desired estimator to be

$\hat{Y} = \varphi(X) = E\{Y \mid X\} = E\{Y \mid X_1, X_2, \ldots, X_n\}.$  (4-10)

Thus, the conditional mean of Y given $X_1, X_2, \ldots, X_n$ represents the best estimator for Y that minimizes the mean square error.

The minimum value of the mean square error is given by

$\sigma_{\min}^2 = E\{|Y - E\{Y \mid X\}|^2\} = E[\,E\{|Y - E\{Y \mid X\}|^2 \mid X\}\,] = E\{\operatorname{var}(Y \mid X)\} \ge 0.$  (4-11)


    4. Minimum Mean Square Estimation (6)

As an example, suppose $Y = X^3$ is the unknown. Then the best MMSE estimator is given by

$\hat{Y} = E\{Y \mid X\} = E\{X^3 \mid X\} = X^3.$  (4-12)

Clearly, if $Y = X^3$, then $\hat{Y} = X^3$ indeed is the best estimator for Y in terms of X. Thus the best estimator can be nonlinear.
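A two-line simulation makes (4-12) concrete. The sketch below (a minimal check, assuming NumPy; taking X uniform on (-1, 1) is my choice) shows that the nonlinear estimator $X^3$ attains zero mean square error, while the best linear estimator of Y in terms of X cannot:

    # MSE of the exact nonlinear estimator X^3 versus the best linear one.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 100_000)
    y = x**3                                # the unknown Y = X^3

    a = np.mean(x * y) / np.mean(x * x)     # best linear estimator a*X via (4-21)
    print(np.mean((y - x**3) ** 2))         # 0: the nonlinear estimator is exact
    print(np.mean((y - a * x) ** 2))        # > 0 (about 0.023): linear falls short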


    4. Minimum Mean Square Estimation (7)

Example: Let

$f_{XY}(x, y) = \begin{cases} kxy, & 0 < x < y < 1, \\ 0, & \text{otherwise}, \end{cases}$

where k > 0 is a suitable normalization constant. To determine the best estimate for Y in terms of X, we need $f_{Y|X}(y \mid x)$:

$f_X(x) = \int_x^1 f_{XY}(x, y)\, dy = \int_x^1 kxy\, dy = kx \left. \frac{y^2}{2} \right|_x^1 = \frac{kx(1 - x^2)}{2}, \quad 0 < x < 1.$

Thus

$f_{Y|X}(y \mid x) = \frac{f_{XY}(x, y)}{f_X(x)} = \frac{2y}{1 - x^2}, \quad 0 < x < y < 1.$  (4-13)


    4. Minimum Mean Square Estimation (8)

Hence the best MMSE estimator is given by

$\hat{Y} = \varphi(X) = E\{Y \mid X\} = \int y\, f_{Y|X}(y \mid x)\, dy = \int_x^1 y\, \frac{2y}{1 - x^2}\, dy = \frac{2}{3} \left. \frac{y^3}{1 - x^2} \right|_x^1 = \frac{2}{3}\, \frac{1 - x^3}{1 - x^2} = \frac{2}{3}\, \frac{x^2 + x + 1}{x + 1}.$  (4-14)

Once again the best estimator is nonlinear. In general the best estimator $E\{Y \mid X\}$ is difficult to evaluate, and hence next we will examine the special subclass of best linear estimators.
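The example can also be checked by simulation. The sketch below (my construction: rejection sampling with NumPy) draws from the joint density proportional to xy on 0 < x < y < 1 and compares the mean square error of the conditional-mean estimator in (4-14) against the best straight-line fit; the nonlinear estimator should come out ahead:

    # Monte Carlo check of (4-14) on f_XY(x, y) proportional to xy, 0 < x < y < 1.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 400_000)
    y = rng.uniform(0, 1, 400_000)
    u = rng.uniform(0, 1, 400_000)
    keep = (x < y) & (u < x * y)            # accept with probability prop. to xy
    x, y = x[keep], y[keep]

    phi = (2.0 / 3.0) * (x**2 + x + 1) / (x + 1)    # conditional mean (4-14)
    b, a = np.polyfit(x, y, 1)                      # best linear fit a + b*x
    print(np.mean((y - phi) ** 2))                  # smaller ...
    print(np.mean((y - (a + b * x)) ** 2))          # ... than the linear MSE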


    Best Linear Estimator (1)

In this case the estimator $\hat{Y}$ is a linear function of the observations $X_1, X_2, \ldots, X_n$. Thus

$\hat{Y}_l = \sum_{i=1}^{n} a_i X_i = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n,$  (4-15)

where $a_1, a_2, \ldots, a_n$ are unknown quantities to be determined. With $\varepsilon = Y - \hat{Y}_l$, the mean square error is given by

$E\{|\varepsilon|^2\} = E\{|Y - \hat{Y}_l|^2\} = E\Big\{\Big|Y - \sum_{i=1}^{n} a_i X_i\Big|^2\Big\},$  (4-16)

and under the MMSE criterion $a_1, a_2, \ldots, a_n$ should be chosen so that the mean square error is at its minimum possible value. Let $\sigma_n^2$ represent that minimum possible value. Then

$\sigma_n^2 = \min_{a_1, a_2, \ldots, a_n} E\Big\{\Big|Y - \sum_{i=1}^{n} a_i X_i\Big|^2\Big\}.$  (4-17)

To minimize (4-16), we can equate

$\frac{\partial}{\partial a_k} E\{|\varepsilon|^2\} = 0, \quad k = 1, 2, \ldots, n.$  (4-18)


    Best Linear Estimator (2)

This gives

$\frac{\partial E\{|\varepsilon|^2\}}{\partial a_k} = E\left\{ \frac{\partial |\varepsilon|^2}{\partial a_k} \right\} = E\left\{ 2\varepsilon\, \frac{\partial \varepsilon^*}{\partial a_k} \right\} = 0.$  (4-19)

But

$\frac{\partial \varepsilon}{\partial a_k} = \frac{\partial}{\partial a_k}\Big( Y - \sum_{i=1}^{n} a_i X_i \Big) = -X_k.$  (4-20)

Substituting (4-20) into (4-19), we get

$\frac{\partial E\{|\varepsilon|^2\}}{\partial a_k} = -2 E\{\varepsilon X_k^*\} = 0,$

or the best linear estimator must satisfy

$E\{\varepsilon X_k^*\} = 0, \quad k = 1, 2, \ldots, n.$  (4-21)


    Best Linear Estimator (3)

Notice that in (4-21), $\varepsilon = Y - \sum_{i=1}^{n} a_i X_i$ represents the estimation error and $X_k$, $k = 1, 2, \ldots, n$, represents the data. Thus from (4-21), the error is orthogonal to the data $X_k$, $k = 1, 2, \ldots, n$, for the best linear estimator. This is the orthogonality principle.

In other words, in the linear estimator (4-15), the unknown constants $a_1, a_2, \ldots, a_n$ must be selected such that the error $\varepsilon = Y - \sum_{i=1}^{n} a_i X_i$ is orthogonal to every data term $X_k$, $k = 1, 2, \ldots, n$, for the best linear estimator that minimizes the mean square error.

Interestingly, a general form of the orthogonality principle holds good in the case of nonlinear estimators also.

Nonlinear orthogonality rule: Let $h(X)$ represent any functional form of the data $X_1, X_2, \ldots, X_n$, and $E\{Y \mid X\}$ the best estimator for Y given X. With $e = Y - E\{Y \mid X\}$, we shall show that


    Best Linear Estimator (4)

$E\{e\, h(X)\} = 0,$  (4-22)

implying that

$e = Y - E\{Y \mid X\} \perp h(X).$

This follows since

$E\{e\, h(X)\} = E\{(Y - E\{Y \mid X\})\, h(X)\}$
$= E\{Y h(X)\} - E\{E\{Y \mid X\}\, h(X)\}$
$= E\{Y h(X)\} - E\{E\{Y h(X) \mid X\}\}$
$= E\{Y h(X)\} - E\{Y h(X)\} = 0.$

Thus in the nonlinear version of the orthogonality rule the error is orthogonal to any functional form of the data.
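A small simulation makes (4-22) tangible. In the sketch below (my construction; any h would do), $Y = X^2 + W$ with W zero mean and independent of X, so $E\{Y \mid X\} = X^2$, and the sample mean of $e\, h(X)$ is checked for several functional forms h:

    # Numerical check of the nonlinear orthogonality rule (4-22).
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=500_000)
    y = x**2 + rng.normal(size=x.size)      # E{Y|X} = X^2 by construction
    e = y - x**2                            # error of the best estimator

    for h in (x, x**3, np.sin(x), np.exp(-x**2)):
        print(np.mean(e * h))               # all close to 0, as (4-22) predicts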


    Best Linear Estimator (5)

The orthogonality principle in (4-21) can be used to obtain the unknowns $a_1, a_2, \ldots, a_n$ in the linear case.

For example, suppose n = 2, and we need to estimate Y in terms of $X_1$ and $X_2$ linearly. Thus

$\hat{Y}_l = a_1 X_1 + a_2 X_2.$

From (4-21), the orthogonality rule gives

$E\{\varepsilon X_1^*\} = E\{(Y - a_1 X_1 - a_2 X_2)\, X_1^*\} = 0,$
$E\{\varepsilon X_2^*\} = E\{(Y - a_1 X_1 - a_2 X_2)\, X_2^*\} = 0.$

Thus

$a_1 E\{|X_1|^2\} + a_2 E\{X_2 X_1^*\} = E\{Y X_1^*\},$
$a_1 E\{X_1 X_2^*\} + a_2 E\{|X_2|^2\} = E\{Y X_2^*\},$

or

$\begin{pmatrix} E\{|X_1|^2\} & E\{X_2 X_1^*\} \\ E\{X_1 X_2^*\} & E\{|X_2|^2\} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} E\{Y X_1^*\} \\ E\{Y X_2^*\} \end{pmatrix}.$  (4-23)

Solving (4-23), we obtain $a_1$ and $a_2$ in terms of the cross-correlations.
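The 2x2 system (4-23) is easy to exercise with sample moments. A sketch (the simulated correlation structure below is my own choice) that also verifies the orthogonality conditions (4-21) and previews the minimum-error formula (4-25) derived next:

    # Solving the n = 2 normal equations (4-23) from simulated data.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 200_000
    x1 = rng.normal(size=N)
    x2 = 0.6 * x1 + 0.8 * rng.normal(size=N)      # correlated observations
    y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=N)  # quantity to be estimated

    R = np.array([[np.mean(x1 * x1), np.mean(x2 * x1)],
                  [np.mean(x1 * x2), np.mean(x2 * x2)]])
    c = np.array([np.mean(y * x1), np.mean(y * x2)])
    a1, a2 = np.linalg.solve(R, c)                # solve (4-23)

    err = y - (a1 * x1 + a2 * x2)
    print(np.mean(err * x1), np.mean(err * x2))   # ~0: orthogonality (4-21)
    s2 = np.mean(y * y) - a1 * np.mean(x1 * y) - a2 * np.mean(x2 * y)
    print(s2, np.mean(err ** 2))                  # (4-25) matches the direct MSE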


    Best Linear Estimator (6)

The minimum value of the mean square error in (4-17) is given by

$\sigma_n^2 = \min_{a_1, \ldots, a_n} E\{|\varepsilon|^2\} = \min_{a_1, \ldots, a_n} E\Big\{\varepsilon\, \Big(Y - \sum_{i=1}^{n} a_i X_i\Big)^*\Big\} = \min_{a_1, \ldots, a_n} \Big[ E\{\varepsilon Y^*\} - \sum_{i=1}^{n} a_i^* E\{\varepsilon X_i^*\} \Big].$  (4-24)

But using (4-21), the second term in (4-24) is zero, since the error is orthogonal to the data $X_i$ when $a_1, a_2, \ldots, a_n$ are chosen to be optimum. Thus the minimum value of the mean square error is given by

$\sigma_n^2 = E\{\varepsilon Y^*\} = E\Big\{\Big(Y - \sum_{i=1}^{n} a_i X_i\Big) Y^*\Big\} = E\{|Y|^2\} - \sum_{i=1}^{n} a_i E\{X_i Y^*\},$  (4-25)

where $a_1, a_2, \ldots, a_n$ are the optimum values from (4-21).


    Best Linear Estimator (7)

Since the linear estimate in (4-15) is only a special case of the general estimator $\varphi(X)$ in (4-1), the best linear estimator that satisfies (4-21) cannot be superior to the best nonlinear estimator $E\{Y \mid X\}$. Often the best linear estimator will be inferior to the best estimator in (4-3).

This raises the following question: Are there situations in which the best estimator in (4-3) also turns out to be linear? In those situations it is enough to use (4-21) and obtain the best linear estimators, since they also represent the best global estimators. Such is the case if Y and $X_1, X_2, \ldots, X_n$ are distributed as jointly Gaussian. We summarize this in the next theorem and prove that result.

Theorem 2: If $X_1, X_2, \ldots, X_n$ and Y are jointly Gaussian zero-mean random variables, then the best estimate for Y in terms of $X_1, X_2, \ldots, X_n$ is always linear.


    Best Linear Estimator (8)

Proof: Let

$\hat{Y} = \varphi(X_1, X_2, \ldots, X_n) = E\{Y \mid X\}$  (4-26)

represent the best (possibly nonlinear) estimate of Y, and

$\hat{Y}_l = \sum_{i=1}^{n} a_i X_i$  (4-27)

the best linear estimate of Y. Then from (4-21),

$\varepsilon = Y - \hat{Y}_l = Y - \sum_{i=1}^{n} a_i X_i$  (4-28)

is orthogonal to the data $X_k$, $k = 1, 2, \ldots, n$. Thus

$E\{\varepsilon X_k^*\} = 0, \quad k = 1, 2, \ldots, n.$  (4-29)

Also from (4-28),

$E\{\varepsilon\} = E\{Y\} - \sum_{i=1}^{n} a_i E\{X_i\} = 0.$  (4-30)

Using (4-29)-(4-30), we get

$E\{\varepsilon X_k^*\} = E\{\varepsilon\}\, E\{X_k^*\} = 0, \quad k = 1, 2, \ldots, n.$  (4-31)


    Best Linear Estimator (9)

From (4-31), we obtain that $\varepsilon$ and $X_k$ are zero-mean uncorrelated random variables for $k = 1, 2, \ldots, n$. But $\varepsilon$ itself represents a Gaussian random variable, since from (4-28) it represents a linear combination of a set of jointly Gaussian random variables. Thus $\varepsilon$ and X are jointly Gaussian and uncorrelated random variables. As a result, $\varepsilon$ and X are independent random variables. Thus from their independence

$E\{\varepsilon \mid X\} = E\{\varepsilon\}.$  (4-32)

But from (4-30), $E\{\varepsilon\} = 0$, and hence from (4-32)

$E\{\varepsilon \mid X\} = 0.$  (4-33)

Substituting (4-28) into (4-33), we get

$E\{\varepsilon \mid X\} = E\Big\{Y - \sum_{i=1}^{n} a_i X_i \,\Big|\, X\Big\} = 0$

or

$E\{Y \mid X\} = E\Big\{\sum_{i=1}^{n} a_i X_i \,\Big|\, X\Big\} = \sum_{i=1}^{n} a_i X_i = \hat{Y}_l.$  (4-34)


    Best Linear Estimator (10)

From (4-26), $E\{Y \mid X\} = \varphi(X)$ represents the best possible estimator, and from (4-34), $\sum_{i=1}^{n} a_i X_i$ represents the best linear estimator. Thus the best linear estimator is also the best possible overall estimator in the Gaussian case. Next we turn our attention to prediction problems using linear estimators.


    Linear Prediction (1)

Suppose $X_1, X_2, \ldots, X_n$ are known and $X_{n+1}$ is unknown. Thus $Y = X_{n+1}$, and this represents a one-step prediction problem. If the unknown is $X_{n+k}$, then it represents a k-step ahead prediction problem. Returning back to the one-step predictor, let $\hat{X}_{n+1}$ represent the best linear predictor. Then

$\hat{X}_{n+1} = -\sum_{i=1}^{n} a_i X_i,$  (4-35)

where the error

$\varepsilon_n = X_{n+1} - \hat{X}_{n+1} = X_{n+1} + \sum_{i=1}^{n} a_i X_i = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n + X_{n+1} = \sum_{i=1}^{n+1} a_i X_i, \quad a_{n+1} = 1,$  (4-36)

is orthogonal to the data, i.e.,

$E\{\varepsilon_n X_k^*\} = 0, \quad k = 1, 2, \ldots, n.$  (4-37)


    Linear Prediction (2)

Using (4-36) in (4-37), we get

$E\{\varepsilon_n X_k^*\} = \sum_{i=1}^{n+1} a_i E\{X_i X_k^*\} = 0, \quad k = 1, 2, \ldots, n.$  (4-38)

Suppose $X_i$ represents the sample of a wide sense stationary stochastic process X(t) so that

$E\{X_i X_k^*\} = R(i - k) = r_{i-k} = r_{k-i}^*.$  (4-39)

Thus (4-38) becomes

$E\{\varepsilon_n X_k^*\} = \sum_{i=1}^{n+1} a_i r_{i-k} = 0, \quad a_{n+1} = 1, \quad k = 1, 2, \ldots, n.$  (4-40)

Expanding (4-40) for $k = 1, 2, \ldots, n$, we get the following set of linear equations:

$a_1 r_0 + a_2 r_1 + a_3 r_2 + \cdots + a_n r_{n-1} + r_n = 0, \quad k = 1,$
$a_1 r_1^* + a_2 r_0 + a_3 r_1 + \cdots + a_n r_{n-2} + r_{n-1} = 0, \quad k = 2,$
$\vdots$
$a_1 r_{n-1}^* + a_2 r_{n-2}^* + a_3 r_{n-3}^* + \cdots + a_n r_0 + r_1 = 0, \quad k = n.$  (4-41)


    Linear Prediction (3)

Similarly using (4-25), the minimum mean square error is given by

$\sigma_n^2 = E\{|\varepsilon_n|^2\} = E\{\varepsilon_n \varepsilon_n^*\} = E\{\varepsilon_n X_{n+1}^*\} = \sum_{i=1}^{n+1} a_i E\{X_i X_{n+1}^*\} = \sum_{i=1}^{n+1} a_i r_{i-n-1} = a_1 r_n^* + a_2 r_{n-1}^* + \cdots + a_n r_1^* + r_0.$  (4-42)

The n equations in (4-41) together with (4-42) can be represented as

$\begin{pmatrix} r_0 & r_1 & r_2 & \cdots & r_n \\ r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\ r_2^* & r_1^* & r_0 & \cdots & r_{n-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ r_n^* & r_{n-1}^* & r_{n-2}^* & \cdots & r_0 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ \sigma_n^2 \end{pmatrix}.$  (4-43)


    Linear Prediction (4)

Let

$T_n = \begin{pmatrix} r_0 & r_1 & \cdots & r_n \\ r_1^* & r_0 & \cdots & r_{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ r_n^* & r_{n-1}^* & \cdots & r_0 \end{pmatrix}.$  (4-44)

Notice that $T_n$ is Hermitian Toeplitz and positive definite. Using (4-44), the unknowns in (4-43) can be represented as

$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \\ 1 \end{pmatrix} = T_n^{-1} \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \sigma_n^2 \end{pmatrix} = \sigma_n^2\, \big(\text{last column of } T_n^{-1}\big).$  (4-45)


    Linear Prediction (5)

Let

$T_n^{-1} = \begin{pmatrix} T_{11}^n & T_{12}^n & \cdots & T_{1,n+1}^n \\ T_{21}^n & T_{22}^n & \cdots & T_{2,n+1}^n \\ \vdots & \vdots & & \vdots \\ T_{n+1,1}^n & T_{n+1,2}^n & \cdots & T_{n+1,n+1}^n \end{pmatrix}.$  (4-46)

Then from (4-45),

$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \\ 1 \end{pmatrix} = \sigma_n^2 \begin{pmatrix} T_{1,n+1}^n \\ T_{2,n+1}^n \\ \vdots \\ T_{n,n+1}^n \\ T_{n+1,n+1}^n \end{pmatrix}.$  (4-47)

Thus

$\sigma_n^2 = \frac{1}{T_{n+1,n+1}^n} > 0,$  (4-48)


    Linear Prediction (6)

and

$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \frac{1}{T_{n+1,n+1}^n} \begin{pmatrix} T_{1,n+1}^n \\ T_{2,n+1}^n \\ \vdots \\ T_{n,n+1}^n \end{pmatrix}.$  (4-49)

Eq. (4-49) gives the best linear predictor coefficients, and they can be evaluated from the last column of $T_n^{-1}$ in (4-45). Using these, the best one-step ahead predictor in (4-35) takes the form

$\hat{X}_{n+1} = -\frac{1}{T_{n+1,n+1}^n} \sum_{i=1}^{n} T_{i,n+1}^n X_i,$  (4-50)

and from (4-48), the minimum mean square error $\sigma_n^2$ is given by the reciprocal of the (n+1, n+1) entry of $T_n^{-1}$.
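The recipe in (4-45)-(4-50) can be carried out directly. A minimal sketch, assuming NumPy/SciPy and using the AR(1)-type autocorrelation $r_m = \rho^{|m|}$ as a test case (my choice):

    # One-step predictor coefficients and minimum MSE from the last column
    # of T_n^{-1}, per (4-45)-(4-50).
    import numpy as np
    from scipy.linalg import toeplitz

    rho, n = 0.8, 4
    r = rho ** np.arange(n + 1)         # r_0, ..., r_n
    Tn = toeplitz(r)                    # Hermitian Toeplitz matrix of (4-44)
    Tinv = np.linalg.inv(Tn)

    sigma2 = 1.0 / Tinv[-1, -1]         # (4-48)
    a = Tinv[:-1, -1] / Tinv[-1, -1]    # (4-49): a_1, ..., a_n

    print(a)       # ~ [0, 0, 0, -0.8]: by (4-35) the predictor is rho * X_n
    print(sigma2)  # ~ 1 - rho**2 = 0.36; equals |T_n|/|T_{n-1}|, cf. (4-59)
    print(np.linalg.det(Tn) / np.linalg.det(toeplitz(r[:-1])))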


    Linear Prediction (7)

From (4-36), since the one-step linear prediction error is

$\varepsilon_n = X_{n+1} + a_n X_n + a_{n-1} X_{n-1} + \cdots + a_1 X_1,$  (4-51)

we can represent (4-51) formally as follows:

[Block diagram: $X_{n+1}$ applied to the prediction-error filter $A_n(z)$ produces $\varepsilon_n$.]

Thus, let

$A_n(z) = 1 + a_n z^{-1} + a_{n-1} z^{-2} + \cdots + a_1 z^{-n};$  (4-52)

then from the above figure, we also have the representation

$\varepsilon_n = A_n(z)\, X_{n+1}.$


    Linear Prediction (8)

The filter

$H(z) = \frac{1}{A_n(z)} = \frac{1}{1 + a_n z^{-1} + a_{n-1} z^{-2} + \cdots + a_1 z^{-n}}$  (4-53)

represents an AR(n) filter, and this shows that linear prediction leads to an auto-regressive (AR) model.

The polynomial $A_n(z)$ in (4-52)-(4-53) can be simplified using (4-43)-(4-44). To see this, we rewrite $A_n(z)$ as

$A_n(z) = 1 + a_n z^{-1} + \cdots + a_1 z^{-n} = [z^{-n}, z^{-(n-1)}, \ldots, z^{-1}, 1] \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \\ 1 \end{pmatrix} = \sigma_n^2\, [z^{-n}, z^{-(n-1)}, \ldots, z^{-1}, 1]\; T_n^{-1} \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.$  (4-54)


    Linear Prediction (9)

To simplify (4-54), we can make use of the following matrix identity:

$\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} I & -A^{-1}B \\ 0 & I \end{pmatrix} = \begin{pmatrix} A & 0 \\ C & D - CA^{-1}B \end{pmatrix}.$  (4-55)

Taking determinants, we get

$\begin{vmatrix} A & B \\ C & D \end{vmatrix} = |A|\; |D - CA^{-1}B|.$  (4-56)

In particular, if $D \equiv 0$, we get

$\begin{vmatrix} A & B \\ C & 0 \end{vmatrix} = |A|\; \big|{-CA^{-1}B}\big| = (-1)^m |A|\; |CA^{-1}B|,$  (4-57)

where m is the size of the (square) zero block D. Using (4-57) in (4-54), with

$A = T_n, \quad B = (0, 0, \ldots, 0, 1)^T, \quad C = [z^{-n}, z^{-(n-1)}, \ldots, z^{-1}, 1],$


    Linear Prediction (10)

we get

$A_n(z) = \sigma_n^2\, C\, T_n^{-1} B = \frac{\sigma_n^2}{|T_n|} \begin{vmatrix} r_0 & r_1 & r_2 & \cdots & r_n \\ r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ r_{n-1}^* & r_{n-2}^* & r_{n-3}^* & \cdots & r_1 \\ z^{-n} & z^{-(n-1)} & z^{-(n-2)} & \cdots & 1 \end{vmatrix}.$  (4-58)

Referring back to (4-43), using Cramer's rule to solve for $a_{n+1}\ (= 1)$, we get

$a_{n+1} = \frac{1}{|T_n|} \begin{vmatrix} r_0 & r_1 & \cdots & 0 \\ r_1^* & r_0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ r_n^* & r_{n-1}^* & \cdots & \sigma_n^2 \end{vmatrix} = \frac{\sigma_n^2\, |T_{n-1}|}{|T_n|} = 1,$


    Linear Prediction (11)

or

$\sigma_n^2 = \frac{|T_n|}{|T_{n-1}|} > 0.$  (4-59)

Thus the polynomial in (4-58) reduces to

$A_n(z) = \frac{1}{|T_{n-1}|} \begin{vmatrix} r_0 & r_1 & \cdots & r_n \\ r_1^* & r_0 & \cdots & r_{n-1} \\ \vdots & \vdots & & \vdots \\ r_{n-1}^* & r_{n-2}^* & \cdots & r_1 \\ z^{-n} & z^{-(n-1)} & \cdots & 1 \end{vmatrix} = 1 + a_n z^{-1} + a_{n-1} z^{-2} + \cdots + a_1 z^{-n}.$  (4-60)
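The determinant representation (4-60) can be cross-checked against the $T_n^{-1}$ route of (4-49). A sketch (NumPy/SciPy; the same test autocorrelation as before, and the cofactor expansion along the row of powers of z is my own bookkeeping):

    # Coefficients of A_n(z): last column of T_n^{-1} versus the determinant
    # in (4-60), expanded along its last row [z^-n, ..., z^-1, 1].
    import numpy as np
    from scipy.linalg import toeplitz

    rho, n = 0.8, 3
    r = rho ** np.arange(n + 1)
    Tn = toeplitz(r)

    Tinv = np.linalg.inv(Tn)
    a = Tinv[:-1, -1] / Tinv[-1, -1]
    coeff_1 = np.concatenate(([1.0], a[::-1]))   # [1, a_n, ..., a_1]

    M = Tn[:n, :]                                # first n rows of T_n
    det_Tn_1 = np.linalg.det(toeplitz(r[:-1]))   # |T_{n-1}|
    cof = [(-1) ** (n + j) * np.linalg.det(np.delete(M, j, axis=1))
           for j in range(n + 1)]                # cofactors of the z-row
    coeff_2 = np.array(cof[::-1]) / det_Tn_1     # coefficient of z^-k, k = 0..n

    print(np.allclose(coeff_1, coeff_2))         # True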


    Linear Prediction (12)

The polynomial $A_n(z)$ in (4-53) can be alternatively represented as in (4-60), and $H(z) = 1/A_n(z) \sim AR(n)$ in fact represents a stable AR filter of order n, whose input error signal $\varepsilon_n$ is white noise of constant spectral height equal to $|T_n|/|T_{n-1}|$ and whose output is $X_{n+1}$.

It can be shown that $A_n(z)$ has all its zeros in $|z| > 1$ provided $|T_n| > 0$, thus establishing stability.
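The AR connection can be seen on simulated data: filtering the process with $A_n(z)$ should whiten it, leaving the innovation variance $|T_n|/|T_{n-1}|$. A sketch (my test setup: an AR(1) process, for which $A_n(z) = 1 - \rho z^{-1}$ with all higher taps zero):

    # Prediction-error filtering with A_n(z) on a simulated AR(1) process.
    import numpy as np

    rng = np.random.default_rng(4)
    rho, N = 0.8, 200_000
    w = rng.normal(scale=np.sqrt(1 - rho**2), size=N)
    x = np.zeros(N)
    for t in range(1, N):                  # X_t = rho * X_{t-1} + W_t
        x[t] = rho * x[t - 1] + w[t]

    eps = x[1:] - rho * x[:-1]             # eps = A_n(z) X, A_n(z) = 1 - rho z^-1
    print(np.var(eps))                     # ~ 1 - rho**2 = |T_n| / |T_{n-1}|
    print(np.corrcoef(eps[1:], eps[:-1])[0, 1])   # ~ 0: the error is white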


    Linear Prediction (13)

Linear prediction error: From (4-59), the mean square error using n samples is given by

$\sigma_n^2 = \frac{|T_n|}{|T_{n-1}|} > 0.$  (4-61)

Suppose one more sample from the past is available to evaluate $X_{n+1}$ (i.e., $X_n, X_{n-1}, \ldots, X_1, X_0$ are available). Proceeding as above, the new coefficients and the mean square error $\sigma_{n+1}^2$ can be determined. From (4-59)-(4-61),

$\sigma_{n+1}^2 = \frac{|T_{n+1}|}{|T_n|}.$  (4-62)

Using another matrix identity it is easy to show that

$|T_{n+1}| = \frac{|T_n|^2}{|T_{n-1}|}\, \big(1 - |s_{n+1}|^2\big).$  (4-63)


    Linear Prediction (14)

Since $|T_k| > 0$, we must have $1 - |s_{n+1}|^2 > 0$, or $|s_{n+1}| < 1$, for every n. From (4-63), we have

$\sigma_{n+1}^2 = \frac{|T_{n+1}|}{|T_n|} = \frac{|T_n|}{|T_{n-1}|}\, \big(1 - |s_{n+1}|^2\big)$

or

$\sigma_{n+1}^2 = \sigma_n^2\, \big(1 - |s_{n+1}|^2\big) \le \sigma_n^2,$  (4-64)

since $|s_{n+1}|^2 \le 1$. Thus the mean square error decreases as more and more samples are used from the past in the linear predictor. In general from (4-64), the mean square errors for the one-step predictor form a monotonic non-increasing sequence

$\sigma_n^2 \ge \sigma_{n+1}^2 \ge \cdots \ge \sigma_\infty^2 \ge 0$  (4-65)

whose limiting value is $\sigma_\infty^2 \ge 0$.
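The monotone behaviour in (4-64)-(4-65) is visible numerically. A sketch using the determinant ratio (4-61) with an MA(1)-type autocorrelation ($r_0 = 1 + \theta^2$, $r_1 = \theta$, $r_m = 0$ otherwise; my choice of test case), for which $\sigma_n^2$ decreases strictly toward the innovation variance 1:

    # sigma_n^2 = |T_n| / |T_{n-1}| forms a non-increasing sequence, cf. (4-65).
    import numpy as np
    from scipy.linalg import toeplitz

    theta = 0.9
    for n in range(1, 8):
        r = np.zeros(n + 1)
        r[0], r[1] = 1 + theta**2, theta
        s2 = np.linalg.det(toeplitz(r)) / np.linalg.det(toeplitz(r[:-1]))
        print(n, s2)                       # decreases monotonically toward 1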


    Linear Prediction (15)

Clearly, $\sigma_\infty^2$ corresponds to the irreducible error in linear prediction using the entire past samples, and it is related to the power spectrum of the underlying process X(nT) through the relation

$\sigma_\infty^2 = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_{XX}(\omega)\, d\omega \right\} \ge 0,$  (4-66)

where $S_{XX}(\omega) \ge 0$ represents the power spectrum of X(nT).

For any finite power process, we have

$\int_{-\pi}^{\pi} S_{XX}(\omega)\, d\omega < \infty,$

and since $S_{XX}(\omega) \ge 0$, $\ln S_{XX}(\omega) \le S_{XX}(\omega)$. Thus

$\int_{-\pi}^{\pi} \ln S_{XX}(\omega)\, d\omega \le \int_{-\pi}^{\pi} S_{XX}(\omega)\, d\omega < \infty.$  (4-67)
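Formula (4-66) can be checked against a process whose irreducible error is known. For the MA(1) process $X_t = W_t + \theta W_{t-1}$ with unit-variance white W and $|\theta| < 1$, $S_{XX}(\omega) = |1 + \theta e^{-j\omega}|^2$ and the irreducible one-step error is 1. A sketch (the uniform-grid discretization of the integral is mine):

    # Numerical evaluation of the Kolmogorov-Szego-type formula (4-66).
    import numpy as np

    theta = 0.9
    w = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
    S = np.abs(1 + theta * np.exp(-1j * w)) ** 2    # power spectrum
    sigma_inf2 = np.exp(np.mean(np.log(S)))         # exp{(1/2pi) int ln S dw}
    print(sigma_inf2)                               # ~ 1.0, the innovation variance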


    Linear Prediction (16)

Moreover, if the power spectrum is strictly positive at every frequency, i.e.,

$S_{XX}(\omega) > 0 \quad \text{in } -\pi < \omega < \pi,$  (4-68)

then from (4-66)

$\int_{-\pi}^{\pi} \ln S_{XX}(\omega)\, d\omega > -\infty$  (4-69)

and hence

$\sigma_\infty^2 = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_{XX}(\omega)\, d\omega \right\} = e^{\text{a finite quantity}} > 0.$  (4-70)

That is, for processes that satisfy the strict positivity condition in (4-68) almost everywhere in the interval $(-\pi, \pi)$, the final minimum mean square error is strictly positive (see (4-70)); i.e., such processes are not completely predictable even using their entire set of past samples, or they are inherently stochastic, since the next output contains information that is not contained in the past samples. Such processes are known as regular stochastic processes, and their power spectrum is strictly positive.


    Linear Prediction (17)

Power spectrum of a regular stochastic process:

[Figure: $S_{XX}(\omega)$ strictly positive over $(-\pi, \pi)$.]

Conversely, if a process has the following power spectrum,

[Figure: a power spectrum $S_{XX}(\omega)$ that vanishes over a band $(\omega_1, \omega_2)$.]

such that $S_{XX}(\omega) = 0$ in $\omega_1 < \omega < \omega_2$, then from (4-70), $\sigma_\infty^2 = 0$.


    Linear Prediction (18)

Such processes are completely predictable from their past data samples. In particular,

$X(nT) = \sum_k a_k \cos(\omega_k t + \phi_k)$  (4-71)

is completely predictable from its past samples, since $S_{XX}(\omega)$ consists of line spectra. X(nT) in (4-71) is a deterministic stochastic process.