    Solutions to ECE 434 Final Exam, Spring 2003

Problem 1 (21 points) Indicate true or false for each statement below and justify your answers. (One third credit is assigned for a correct true/false answer without correct justification.)

(a) If $H(z)$ is a positive type z-transform, then so is $\cosh(H(z))$. (Recall that $\cosh(x) = \frac{e^x + e^{-x}}{2}$.)
(b) If $X$ is a m.s. differentiable stationary Gaussian random process, then for $t$ fixed, $X_t$ is independent of the derivative at time $t$: $X'_t$.
(c) If $X = (X_t : t \in \mathbb{R})$ is a WSS, m.s. differentiable, mean zero random process, then $X$ is mean ergodic in the mean square sense.
(d) If $M = (M_t : t \ge 0)$ is a martingale with $E[M_t^2] < \infty$ for each $t$, then $E[M_t^2]$ is increasing in $t$ (i.e. $E[M_s^2] \le E[M_t^2]$ whenever $s \le t$).
(e) If $X_1, X_2, \ldots$ is a sequence of independent, exponentially distributed random variables with mean one, then there is a finite constant $K$ such that $P[X_1 + \cdots + X_n \ge 2n] \le K \exp(-n^2)$ for all $n$.
(f) If $X$ and $Y$ are random variables such that $E[X^2] < \infty$ and $E[X|Y] = Y$, then $E[X^2|Y^2] = E[X|Y]^2$.
(g) If $N = (N_t : t \ge 0)$ is a random process with $E[N_t] = \lambda t$ and $\mathrm{Cov}(N_s, N_t) = \lambda \min\{s,t\}$ for $s,t \ge 0$, then $N$ is a Poisson process.

(a) TRUE. The power series expansion of $\cosh$ about zero is absolutely convergent, yielding $\cosh(H(z)) = 1 + \frac{(H(z))^2}{2!} + \frac{(H(z))^4}{4!} + \cdots$, and sums and products of positive type functions are positive type.
(b) TRUE, since $C_{X'X}(0) = C_X'(0) = 0$, and uncorrelated jointly Gaussian random variables are independent.
(c) FALSE. For a counterexample let $X_t = U$ for all $t$, where $U$ is a mean zero random variable with $0 < \mathrm{Var}(U) < \infty$.
(d) TRUE, since for $s < t$, $(M_t - M_s) \perp M_s$, so $E[M_t^2] = E[M_s^2] + E[(M_t - M_s)^2] \ge E[M_s^2]$.
(e) FALSE, since Cramér's theorem implies that for any $\epsilon > 0$, $P[X_1 + \cdots + X_n \ge 2n] \ge \exp(-(\ell(2) + \epsilon)n)$ for all sufficiently large $n$.
(f) FALSE. For example, let $X$ have mean zero and positive variance, and let $Y \equiv 0$.
(g) FALSE. For example, it could be that $N_t = W_t + \lambda t$, where $W$ is a Wiener process with parameter $\sigma^2 = \lambda$.
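As a numerical illustration of (e) (not part of the original solution), the sketch below compares the exact tail probability $P[X_1+\cdots+X_n \ge 2n]$, which is a Gamma$(n,1)$ tail, with the Cramér rate $\exp(-n\,\ell(2))$, where $\ell(2) = 2 - 1 - \ln 2$, and with $\exp(-n^2)$; the decay is exponential in $n$, far slower than $\exp(-n^2)$. It assumes scipy is available.

```python
# Sanity check for Problem 1(e): tail of a sum of n i.i.d. Exp(1) variables.
import math
from scipy.stats import gamma

ell2 = 2 - 1 - math.log(2)  # Cramer rate function of Exp(1) evaluated at 2

for n in [5, 10, 20, 40]:
    exact = gamma.sf(2 * n, a=n)        # P[X_1 + ... + X_n >= 2n], exact Gamma(n, 1) tail
    cramer = math.exp(-n * ell2)        # exponential decay predicted by Cramer's theorem
    gauss = math.exp(-n ** 2)           # the (false) bound claimed in the statement
    print(f"n={n:2d}  exact={exact:.3e}  exp(-n*l(2))={cramer:.3e}  exp(-n^2)={gauss:.3e}")
```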

Problem 2 (12 points) Let $N$ be a Poisson random process with rate $\lambda > 0$ and let $Y_t = \int_0^t N_s \, ds$.

(a) Sketch a typical sample path of $Y$ and find $E[Y_t]$.
(b) Is $Y$ m.s. differentiable? Justify your answer.
(c) Is $Y$ Markov? Justify your answer.
(d) Is $Y$ a martingale? Justify your answer.

(a) Your sketch should show that $Y$ is continuous and piecewise linear. The slope of $Y$ is 0 on the first interval, 1 on the second interval, 2 on the third interval, etc. $E[Y_t] = \int_0^t E[N_s]\,ds = \int_0^t \lambda s\,ds = \frac{\lambda t^2}{2}$.
(b) YES, since the integral of a m.s. continuous process is m.s. differentiable. $Y' = N$.
(c) NO, because knowing both $Y_t$ and $Y_{t-\epsilon}$ for a small $\epsilon > 0$ essentially determines the slope $N_t$ (with high probability), allowing a better prediction of $Y_{t+\epsilon}$ than knowing $Y_t$ alone.
(d) NO, for example because a requirement for $Y$ to be a martingale is $E[Y_t] = E[Y_0]$ for all $t$.
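A quick simulation (not part of the original solution) can confirm $E[Y_t] = \lambda t^2/2$ in part (a): generate Poisson arrival times on $[0,t]$, integrate the resulting counting path exactly (it is piecewise linear, so each jump at time $T_i \le t$ contributes $t - T_i$), and average. The rate, horizon, and sample size below are arbitrary choices.

```python
# Monte Carlo check of Problem 2(a): E[Y_t] = lambda * t^2 / 2 for Y_t = integral of N_s ds.
import numpy as np

rng = np.random.default_rng(0)
lam, t, trials = 2.0, 3.0, 20000   # arbitrary rate, horizon, and sample size

total = 0.0
for _ in range(trials):
    # Arrival times of a Poisson process on [0, t]: Poisson count, then uniform order statistics.
    k = rng.poisson(lam * t)
    arrivals = np.sort(rng.uniform(0.0, t, size=k))
    # Y_t = sum over jumps of the time remaining after the jump, since each jump adds 1 to the slope.
    total += np.sum(t - arrivals)

print("simulated E[Y_t]:", total / trials, " theory:", lam * t**2 / 2)
```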

Problem 3 (12 points) Let $X_t = U\sqrt{2}\cos(2\pi t) + V\sqrt{2}\sin(2\pi t)$ for $0 \le t \le 1$, where $U$ and $V$ are independent, $N(0,1)$ random variables, and let $N = (N_\tau : 0 \le \tau \le 1)$ denote a real-valued Gaussian white noise process with $R_N(\tau) = \sigma^2\delta(\tau)$ for some $\sigma^2 \ge 0$. Suppose $X$ and $N$ are independent. Let $Y = (Y_t = X_t + N_t : 0 \le t \le 1)$. Think of $X$ as a signal, $N$ as noise, and $Y$ as an observation.

(a) Describe the Karhunen-Loève expansion of $X$. In particular, identify the nonzero eigenvalue(s) and the corresponding eigenfunctions.
There is a complete orthonormal basis of functions $(\phi_n : n \ge 1)$ which includes the eigenfunctions found in part (a) (the particular choice is not important here), and the Karhunen-Loève expansions of $N$ and $Y$ can be given using such a basis. Let $\tilde N_i = (N, \phi_i) = \int_0^1 N_t \phi_i(t)\,dt$ denote the $i$th coordinate of $N$. The coordinates $(\tilde N_1, \tilde N_2, \ldots)$ are $N(0, \sigma^2)$ random variables and $U, V, \tilde N_1, \tilde N_2, \ldots$ are independent. Consider the Karhunen-Loève expansion of $Y$, using the same orthonormal basis.
(b) Express the coordinates of $Y$ in terms of $U, V, \tilde N_1, \tilde N_2, \ldots$ and identify the corresponding eigenvalues (i.e. the eigenvalues of $(R_Y(s,t) : 0 \le s,t \le 1)$).
(c) Describe the minimum mean square error estimator $\hat U$ of $U$ given $Y = (Y_t : 0 \le t \le 1)$, and find the minimum mean square error. Use the fact that observing $Y$ is equivalent to observing the random coordinates appearing in the KL expansion of $Y$.

(a) Let $\phi_1(t) = \sqrt{2}\cos(2\pi t)$ and $\phi_2(t) = \sqrt{2}\sin(2\pi t)$. Then $\phi_1$ and $\phi_2$ are orthonormal and the K-L expansion of $X$ is simply $X_t = U\phi_1(t) + V\phi_2(t)$. The nonzero eigenvalues are $\lambda_1 = \lambda_2 = 1$. We could write $X \leftrightarrow (U, V, 0, 0, \ldots)$. Remark: Since $\lambda_1 = \lambda_2$, any linear combination of these two eigenfunctions is also an eigenfunction. Any two orthonormal functions with the same linear span as $\phi_1$ and $\phi_2$ could be used in place of the two functions given. For example, another correct choice of eigenfunctions is $\xi_n(t) = \exp(2\pi j n t)$ for integers $n$. We also know this choice works because $X$ is a WSS periodic random process. For this choice, $\xi_1$ and $\xi_{-1}$ are the eigenfunctions with corresponding eigenvalues $\lambda_{-1} = \lambda_1 = 1$, and $\{\xi_1, \xi_{-1}\}$ and $\{\phi_1, \phi_2\}$ have the same linear span.
(b) $Y \leftrightarrow (U + \tilde N_1,\ V + \tilde N_2,\ \tilde N_3,\ \tilde N_4,\ \ldots)$ and the eigenvalues are $1+\sigma^2,\ 1+\sigma^2,\ \sigma^2,\ \sigma^2,\ \ldots$.
(c) Observing $Y = (Y_t : 0 \le t \le 1)$ is equivalent to observing the coordinates of $Y$. Only the first coordinate of $Y$ is relevant; the other coordinates of $Y$ are independent of $U$ and of the first coordinate of $Y$. Thus, we need to estimate $U$ given $\tilde Y_1$, where $\tilde Y_1 = (Y, \phi_1) = U + \tilde N_1$. The estimate is given by $\frac{\mathrm{Cov}(U, \tilde Y_1)}{\mathrm{Var}(\tilde Y_1)}\tilde Y_1 = \frac{1}{1+\sigma^2}\tilde Y_1$ and the variance of the estimation error is $\frac{\mathrm{Var}(U)\,\mathrm{Var}(\tilde N_1)}{\mathrm{Var}(U) + \mathrm{Var}(\tilde N_1)} = \frac{\sigma^2}{1+\sigma^2}$.
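To illustrate part (c) numerically (an illustrative sketch, not part of the original solution), one can work directly in the coordinate domain: draw $U \sim N(0,1)$ and $\tilde N_1 \sim N(0,\sigma^2)$, form $\tilde Y_1 = U + \tilde N_1$, and compare the empirical MSE of the estimator $\tilde Y_1/(1+\sigma^2)$ with $\sigma^2/(1+\sigma^2)$. The value of $\sigma^2$ below is an arbitrary choice.

```python
# Monte Carlo check of Problem 3(c): MMSE of estimating U from Y1 = U + N1.
import numpy as np

rng = np.random.default_rng(1)
sigma2, trials = 0.5, 200000                         # arbitrary noise variance and sample size

U = rng.standard_normal(trials)                      # signal coordinate, N(0, 1)
N1 = np.sqrt(sigma2) * rng.standard_normal(trials)   # noise coordinate, N(0, sigma2)
Y1 = U + N1                                          # first KL coordinate of the observation

U_hat = Y1 / (1.0 + sigma2)                          # MMSE estimator derived in the solution
print("empirical MSE:", np.mean((U - U_hat) ** 2))
print("theoretical MSE:", sigma2 / (1.0 + sigma2))
```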

Problem 4 (7 points) Let $(X_k : k \in \mathbb{Z})$ be a stationary discrete-time Markov process with state space $\{0,1\}$ and one-step transition probability matrix
$$P = \begin{pmatrix} 3/4 & 1/4 \\ 1/4 & 3/4 \end{pmatrix}.$$
Let $Y = (Y_t : t \in \mathbb{R})$ be defined by $Y_t = X_0 + tX_1$.

(a) Find the mean and covariance functions of $Y$.
(b) Find $P[Y_5 \ge 3]$.

(a) The distribution of $X_k$ for any $k$ is the probability vector $\pi$ solving $\pi = \pi P$, namely $\pi = (\frac{1}{2}, \frac{1}{2})$. Thus $P[(X_0,X_1) = (0,0)] = P[(X_0,X_1) = (1,1)] = \frac{1}{2}\cdot\frac{3}{4} = \frac{3}{8}$, and $P[(X_0,X_1) = (0,1)] = P[(X_0,X_1) = (1,0)] = \frac{1}{2}\cdot\frac{1}{4} = \frac{1}{8}$. Thus $E[X_i] = E[X_i^2] = \frac{1}{2}$ and $E[X_0 X_1] = \frac{3}{8}$, so $\mathrm{Var}(X_i) = \frac{1}{4}$ and $\mathrm{Cov}(X_0, X_1) = \frac{3}{8} - (\frac{1}{2})^2 = \frac{1}{8}$. Thus, $E[Y_t] = \frac{1+t}{2}$ and
$$\mathrm{Cov}(Y_s, Y_t) = \mathrm{Cov}(X_0 + sX_1,\ X_0 + tX_1) = \mathrm{Var}(X_0) + (s+t)\,\mathrm{Cov}(X_0, X_1) + st\,\mathrm{Var}(X_1) = \frac{1}{4} + \frac{s+t}{8} + \frac{st}{4}.$$
(b) Since $Y_5 = X_0 + 5X_1$, the event $\{Y_5 \ge 3\}$ is equal to the event $\{X_1 = 1\}$, which has probability $0.5$.
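As a small numerical cross-check of part (a) (not part of the original solution), the covariance formula can be compared against a direct computation from the joint distribution of $(X_0, X_1)$ at a couple of arbitrary time points.

```python
# Numerical cross-check of Problem 4(a): Cov(Y_s, Y_t) = 1/4 + (s+t)/8 + s*t/4.
import numpy as np

# Joint pmf of (X0, X1) from pi = (1/2, 1/2) and the transition matrix P.
P = np.array([[0.75, 0.25], [0.25, 0.75]])
pi = np.array([0.5, 0.5])
joint = pi[:, None] * P                 # joint[i, j] = P[X0 = i, X1 = j]
vals = np.array([0.0, 1.0])             # possible values of X0 and X1

def cov_Y(s, t):
    # Direct computation of Cov(X0 + s*X1, X0 + t*X1) from the joint pmf.
    EY = lambda u: np.sum(joint * (vals[:, None] + u * vals[None, :]))
    EYY = np.sum(joint * (vals[:, None] + s * vals[None, :]) * (vals[:, None] + t * vals[None, :]))
    return EYY - EY(s) * EY(t)

for s, t in [(1.0, 2.0), (0.5, 3.0)]:
    print(cov_Y(s, t), 0.25 + (s + t) / 8 + s * t / 4)
```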

Problem 5 (6 points) Let $Z$ be a Gauss-Markov process with mean zero and autocorrelation function $R_Z(\tau) = e^{-|\tau|}$. Find $P[Z_2 \ge 1 + Z_1 \mid Z_1 = 2, Z_0 = 0]$.

Since $Z$ is a Markov process, the conditional distribution of $Z_2$ given $Z_0$ and $Z_1$ depends only on $Z_1$. Note that if $Z_2$ is estimated from $Z_1$, the minimum mean square error estimator is $E[Z_2 \mid Z_1] = \frac{\mathrm{Cov}(Z_2, Z_1)}{\mathrm{Var}(Z_1)} Z_1 = e^{-1}Z_1$, and the estimation error is independent of $Z_1$ and is Gaussian with mean zero and variance $\mathrm{Var}(Z_2) - \frac{\mathrm{Cov}(Z_2, Z_1)^2}{\mathrm{Var}(Z_1)} = 1 - e^{-2}$. Thus, given $Z_1 = 2$, the conditional distribution of $Z_2$ is Gaussian with mean $2e^{-1}$ and variance $1 - e^{-2}$. Thus the desired conditional probability is the same as the probability that a $N(2e^{-1}, 1-e^{-2})$ random variable is greater than or equal to $3$. This probability is $Q\!\left(\frac{3 - 2e^{-1}}{\sqrt{1 - e^{-2}}}\right)$.
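A short numerical check (not part of the original solution) confirms, via the general Gaussian conditioning formulas applied to $(Z_0, Z_1, Z_2)$, that conditioning on $(Z_0, Z_1) = (0, 2)$ gives conditional mean $2e^{-1}$ and variance $1 - e^{-2}$ (so the value of $Z_0$ indeed drops out, as the Markov property predicts), and it evaluates the resulting probability.

```python
# Check of Problem 5 via Gaussian conditioning on (Z_0, Z_1) = (0, 2).
import numpy as np
from math import erfc, exp, sqrt

times = np.array([0.0, 1.0, 2.0])
Sigma = np.exp(-np.abs(times[:, None] - times[None, :]))   # R_Z(tau) = exp(-|tau|)

# Condition Z_2 on (Z_0, Z_1): mean = S_21 S_11^{-1} (0,2)^T, var = S_22 - S_21 S_11^{-1} S_12.
S11 = Sigma[:2, :2]
S21 = Sigma[2, :2]
w = np.linalg.solve(S11, S21)
cond_mean = w @ np.array([0.0, 2.0])
cond_var = Sigma[2, 2] - w @ S21

Q = lambda x: 0.5 * erfc(x / sqrt(2))                      # standard normal tail Q(x)
print("conditional mean:", cond_mean, " expected 2e^-1 =", 2 * exp(-1))
print("conditional var: ", cond_var, " expected 1-e^-2 =", 1 - exp(-2))
print("P[Z_2 >= 3 | Z_1=2, Z_0=0] =", Q((3 - cond_mean) / sqrt(cond_var)))
```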

Problem 6 (10 points) Let $X$ be a real-valued, mean zero stationary Gaussian process with $R_X(\tau) = e^{-|\tau|}$. Let $a > 0$. Suppose $X_0$ is estimated by $\hat X_0 = c_1 X_{-a} + c_2 X_a$ where the constants $c_1$ and $c_2$ are chosen to minimize the mean square error (MSE).

(a) Use the orthogonality principle to find $c_1$, $c_2$, and the resulting minimum MSE, $E[(X_0 - \hat X_0)^2]$. (Your answers should depend only on $a$.)
(b) Use the orthogonality principle again to show that $\hat X_0$ as defined above is the minimum MSE estimator of $X_0$ given $(X_s : |s| \ge a)$. (This implies that $X$ has a two-sided Markov property.)

(a) The constants must be selected so that $X_0 - \hat X_0 \perp X_a$ and $X_0 - \hat X_0 \perp X_{-a}$, or equivalently $e^{-a} - [c_1 e^{-2a} + c_2] = 0$ and $e^{-a} - [c_1 + c_2 e^{-2a}] = 0$. Solving for $c_1$ and $c_2$ (one could begin by subtracting the two equations) yields $c_1 = c_2 = c$ where
$$c = \frac{e^{-a}}{1 + e^{-2a}} = \frac{1}{e^a + e^{-a}} = \frac{1}{2\cosh(a)}.$$
The corresponding minimum MSE is given by
$$E[X_0^2] - E[\hat X_0^2] = 1 - c^2 E[(X_{-a} + X_a)^2] = 1 - c^2(2 + 2e^{-2a}) = \frac{1 - e^{-2a}}{1 + e^{-2a}} = \tanh(a).$$
(b) The claim is true if $(X_0 - \hat X_0) \perp X_u$ whenever $|u| \ge a$. If $u \ge a$ then
$$E[(X_0 - c(X_{-a} + X_a))X_u] = e^{-u} - \frac{1}{e^a + e^{-a}}\left(e^{-a-u} + e^{a-u}\right) = 0.$$
Similarly, if $u \le -a$ then
$$E[(X_0 - c(X_{-a} + X_a))X_u] = e^{u} - \frac{1}{e^a + e^{-a}}\left(e^{a+u} + e^{-a+u}\right) = 0.$$
The orthogonality condition is thus true whenever $|u| \ge a$, as required.
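A brief numeric sketch (not part of the original solution, with an arbitrary choice of $a$) can confirm part (a) by solving the two orthogonality equations as a linear system and comparing with $c = 1/(2\cosh a)$ and the minimum MSE $\tanh(a)$.

```python
# Numeric check of Problem 6(a): solve the orthogonality equations for c1, c2 and the MMSE.
import numpy as np

a = 0.7                                   # arbitrary positive value of a
# Orthogonality conditions: Gram matrix of (X_{-a}, X_a) times (c1, c2) = correlations with X_0.
G = np.array([[1.0, np.exp(-2 * a)],
              [np.exp(-2 * a), 1.0]])     # E[X_{-a}^2], E[X_{-a}X_a]; E[X_a X_{-a}], E[X_a^2]
b = np.array([np.exp(-a), np.exp(-a)])    # E[X_0 X_{-a}], E[X_0 X_a]
c1, c2 = np.linalg.solve(G, b)

mmse = 1.0 - np.array([c1, c2]) @ b       # E[X_0^2] - E[X_0 * X0_hat], valid at the optimum
print("c1, c2:", c1, c2, " closed form:", 1 / (2 * np.cosh(a)))
print("MMSE:", mmse, " tanh(a):", np.tanh(a))
```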

Problem 7 (12 points) Suppose $X$ and $N$ are jointly WSS, mean zero, continuous time random processes with $R_{XN} \equiv 0$. The processes are the inputs to a system with the block diagram shown, for some transfer functions $K_1(\omega)$ and $K_2(\omega)$:

[Block diagram: $X$ passes through the filter $K_1$, the result is added to $N$, and the sum passes through the filter $K_2$, producing $Y = X_{\mathrm{out}} + N_{\mathrm{out}}$.]

Suppose that for every value of $\omega$, $K_i(\omega) \ne 0$ for $i = 1$ and $i = 2$. Because the two subsystems are linear, we can view the output process $Y$ as the sum of two processes, $X_{\mathrm{out}}$, due to the input $X$, plus $N_{\mathrm{out}}$, due to the input $N$. Your answers to the first four parts should be expressed in terms of $K_1$, $K_2$, and the power spectral densities $S_X$ and $S_N$.
(a) What is the power spectral density $S_Y$?
(b) What is the signal-to-noise ratio at the output (equal to the power of $X_{\mathrm{out}}$ divided by the power of $N_{\mathrm{out}}$)?
(c) Suppose $Y$ is passed into a linear system with transfer function $H$, designed so that the output at time $t$ is $\hat X_t$, the best (not necessarily causal) linear estimator of $X_t$ given $(Y_s : s \in \mathbb{R})$. Find $H$.
(d) Find the resulting minimum mean square error.
(e) The correct answer to part (d) (the minimum MSE) does not depend on the filter $K_2$. Why?

(a) $S_Y = |K_1 K_2|^2 S_X + |K_2|^2 S_N$, where for notational brevity we suppress the argument $(\omega)$ for each function.
(b) $\displaystyle \mathrm{SNR}_{\mathrm{output}} = \frac{\int_{-\infty}^{\infty} |K_1 K_2|^2 S_X \,\frac{d\omega}{2\pi}}{\int_{-\infty}^{\infty} |K_2|^2 S_N \,\frac{d\omega}{2\pi}}$.
(c) $\displaystyle H = \frac{S_{XY}}{S_Y} = \frac{K_1 K_2 S_X}{|K_1 K_2|^2 S_X + |K_2|^2 S_N}$.
(d) $\displaystyle \mathrm{MMSE} = \int_{-\infty}^{\infty}\left(S_X - S_Y|H|^2\right)\frac{d\omega}{2\pi} = \int_{-\infty}^{\infty}\frac{|K_2|^2 S_X S_N}{|K_1|^2|K_2|^2 S_X + |K_2|^2 S_N}\,\frac{d\omega}{2\pi} = \int_{-\infty}^{\infty}\frac{S_X S_N}{|K_1|^2 S_X + S_N}\,\frac{d\omega}{2\pi}.$
(e) Since $K_2$ is invertible, for the purposes of linear estimation of $X$, using the process $Y$ is the same as using the process $Y$ filtered using a system with transfer function $\frac{1}{K_2(\omega)}$. Equivalently, the estimate of $X$ would be the same if the filter $K_2$ were dropped from the original system.
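As an illustrative numerical sketch (not part of the original solution), one can pick concrete spectra and filters, say $S_X(\omega) = \frac{2}{1+\omega^2}$, $S_N \equiv 1$, $K_1 \equiv 1$, and two different nowhere-zero choices of $K_2$ (all of these are assumptions made only for this example), and verify by numerical integration that the MMSE expression in part (d), before the cancellation of $|K_2|^2$, gives the same value for both choices and matches the simplified integral.

```python
# Numeric illustration of Problem 7(d)-(e): the MMSE does not depend on K_2.
import numpy as np

w = np.linspace(-200.0, 200.0, 200001)    # frequency grid for numerical integration
dw = w[1] - w[0]
SX = 2.0 / (1.0 + w ** 2)                 # example signal spectrum (assumption for illustration)
SN = np.ones_like(w)                      # example flat noise spectrum
K1 = np.ones_like(w)                      # example K_1, nowhere zero
K2a = np.ones_like(w)                     # one choice of K_2
K2b = 3.0 / (1.0 + 0.1 * w ** 2) + 1.0    # a different, nowhere-zero choice of K_2

def mmse(K2):
    # Integrand |K2|^2 SX SN / (|K1 K2|^2 SX + |K2|^2 SN), integrated with dw/(2*pi).
    f = np.abs(K2) ** 2 * SX * SN / (np.abs(K1 * K2) ** 2 * SX + np.abs(K2) ** 2 * SN)
    return np.sum(f) * dw / (2 * np.pi)

simplified = np.sum(SX * SN / (np.abs(K1) ** 2 * SX + SN)) * dw / (2 * np.pi)
print(mmse(K2a), mmse(K2b), simplified)   # all three agree (up to truncation of the integral)
```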
