11/10/12 EC3500.FallFY13/MPF - Section II 1
II - Random Signals/Processes
• [p. 3] Random signal/sequence definition
• [p. 4] Signal mean, variance, autocorrelation & autocovariance, normalized cross-correlation
• [p. 15] Statistical characterization of random signals
– I.I.D. random process
– Stationarity
– Wide-sense stationarity (WSS)
– Jointly wide-sense stationarity (jointly WSS)
– Correlation & cross-correlation for stationary RPs
– Signal average
– Ergodicity
• [p. 34] Specific examples of RPs:
– Binary signal, white noise, colored noise, Bernoulli process, random walk, counting process, Poisson process, MA process
• [p. 75] Periodic random process definition
• [p. 82] Uncorrelated random process definition
• [p. 83] Cyclostationary random process definition
• [p. 85] Multiple random processes: joint properties
• [p. 90] Application to data analysis: how to assess signal stationarity
• [p. 94] Application to data analysis: how to check the IID assumption (autocorrelation, lag plot)
• [p. 98] Application to data analysis: how to extract an IID sequence out of a non-IID sequence
• [p. 105] Application: target range detection
• [p. 107] Introduction to the spectrogram
• [p. 111] Application: gas furnace reaction time
• [p. 114] Application: detection of the periodicity of stationary signals in noisy environments
• [p. 116] How to estimate correlation lags; biased/unbiased estimator issues
• [p. 125] Frequency-domain description for a stationary process
– Power spectral density (PSD): definition & properties
– Cross power spectral density: definition & properties
• [p. 146] Appendices
• [p. 170] References
Examples
• [p. 5] Example 1
• [p. 7] Example 2
• [p. 9] Example 3
• [p. 24] Example 4
• [p. 29] Example 5
• [p. 32] Example 6
• [p. 34] Example 7
• [p. 43] Example 8
• [p. 50] Example 9
• [p. 64] Example 10
• [p. 67] Example 11
• [p. 70] Example 12
• [p. 76] Example 13
• [p. 78] Example 14
• [p. 80] Example 15
• [p. 87] Example 16
• [p. 93] Example 17
• [p. 97] Example 18
• [p. 104] Example 19
• [p. 110] Example 20
• [p. 114] Example 21
• [p. 115] Example 22
• [p. 121] Example 23
• [p. 128] Example 24
• [p. 129] Example 25
• [p. 130] Example 26
• [p. 132] Example 27
Random Process Signal/Sequence:
A RP is a mapping that assigns a function x(t) = x(t,ξ) to each outcome ξ of the random experiment.
• For a fixed t, x(t) is a Random Variable (RV).
• x(t): random signal (→ can be infinite dimensional).
• x(t,ξ) for a fixed outcome ξ: called a realization/trial of the random process.
• If time is discrete for a fixed ξ, the RP x(n) = x(nT,ξ) is called a discrete random process.
[Figure: three realizations x(t,ξ1), x(t,ξ2), x(t,ξ3) of the same RP plotted versus t]
4 Types of Random Processes
• Discrete time / discrete valued
• Discrete time / continuous valued
• Continuous time / discrete valued
• Continuous time / continuous valued
Example 1: x(t) = x(t,ξ) = ξ cos(πt/10), where ξ ~ U[0,1].
x(n) = x(nT,ϕ) = cos(πnT/10 + ϕ), where ϕ ~ U[0,2π].
Signal mean value (ensemble average):
m_x(t) = E{x(t)}

Signal variance:
σ_x²(t) = E{(x(t) − m_x(t))²} = E{x²(t)} − m_x²(t)

[Figure: discrete process case, x[n] observed at times n1 and n2 with lag n = n2 − n1 (dimensionless); continuous process case, x(t) observed at times t1 and t2 with time lag τ = t2 − t1 (sec)]
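The two ensemble definitions above can be checked numerically. The sketch below is in Python (the course codes are in MATLAB, and the helper name `ensemble_stats` is made up here); it estimates m_x(t) and σ_x²(t) for the Example 1 process x(t,ξ) = ξ cos(πt/10), ξ ~ U[0,1], by averaging over many realizations at one fixed time t.

```python
import math
import random

def ensemble_stats(t, num_trials=200_000, seed=1):
    """Estimate the ensemble mean m_x(t) and variance sigma_x^2(t) of
    x(t, xi) = xi * cos(pi*t/10), xi ~ U[0,1], from many realizations."""
    rng = random.Random(seed)
    samples = [rng.random() * math.cos(math.pi * t / 10.0)
               for _ in range(num_trials)]
    m = sum(samples) / num_trials                         # ensemble average
    v = sum((s - m) ** 2 for s in samples) / num_trials   # ensemble variance
    return m, v

# Closed form: m_x(t) = 0.5*cos(pi*t/10), sigma_x^2(t) = (1/12)*cos^2(pi*t/10)
m, v = ensemble_stats(2.0)
```

For t = 2 the estimates land close to the closed-form values 0.5 cos(π/5) ≈ 0.405 and (1/12) cos²(π/5) ≈ 0.055.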
Example 2: x(t,ξ) = ξ cos(πt/10), where ξ ~ U[0,1].
Compute the process mean and variance.
Signal autocorrelation function:
R_x(t1,t2) = R_xx(t1,t2) = E{x(t1) x*(t2)}

It measures the dependency between 2 values of the process at two different times, and allows one to evaluate:
1) How quickly a random signal changes with respect to time.
2) The amount of "memory" a signal may have.
3) Whether the process has a periodic component and what the expected frequency might be, etc.
[Figure: x(t) observed at times t1 and t2, with lag τ = t2 − t1]
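As an illustration of the definition (not part of the slides), the Python sketch below Monte-Carlo estimates R_x(t1,t2) for the random-phase cosine x(t,ϕ) = cos(πt/10 + ϕ), ϕ ~ U[0,2π]. The standard closed form for this process is 0.5 cos(π(t1 − t2)/10), a function of the lag only.

```python
import math
import random

def autocorr(t1, t2, num_trials=200_000, seed=2):
    """Monte-Carlo estimate of R_x(t1,t2) = E{x(t1) x(t2)} for the
    random-phase cosine x(t, phi) = cos(pi*t/10 + phi), phi ~ U[0, 2*pi]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(num_trials):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        acc += math.cos(math.pi * t1 / 10 + phi) * math.cos(math.pi * t2 / 10 + phi)
    return acc / num_trials

# Standard result: R_x(t1,t2) = 0.5*cos(pi*(t1 - t2)/10), depends on the lag only
r = autocorr(3.0, 1.0)    # lag 2
r2 = autocorr(7.0, 5.0)   # same lag at shifted times: same value up to MC error
```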
Example 3: Let x(t) be a real-valued process defined as
x(t,ξ) = cos(πt/10 + ξ), where ξ ~ U[0,2π].
Compute m_x(t) and R_x(t1,t2).
[Figure: three realizations x(n,ξ1), x(n,ξ2), x(n,ξ3)]
Signal autocovariance function (removes the impact due to the process mean):
C_x(t1,t2) = E{(x(t1) − m_x(t1))(x(t2) − m_x(t2))*} = R_x(t1,t2) − m_x(t1) m_x*(t2)

Signal normalized correlation function (removes the impact due to the process mean and normalizes the max value to 1):
ρ_x(t1,t2) = C_x(t1,t2) / (σ_x(t1) σ_x(t2)), with |ρ_x(t1,t2)| ≤ 1
Signal cross-correlation function:
R_xy(t1,t2) = E{x(t1) y*(t2)}
• Measures the dependency between values of two processes at two different times.
• Allows one to evaluate whether/how two processes are related.

Signal cross-covariance function:
C_xy(t1,t2) = R_xy(t1,t2) − m_x(t1) m_y*(t2)
• Similar to the cross-correlation function: measures the dependency between values of two processes at two different times, but also removes the impact of the mean values.

Note: unless there is a good reason to keep the signal means, remove them or use covariance-based expressions!
Normalized cross-correlation function:
ρ_xy(t1,t2) = C_xy(t1,t2) / (σ_x(t1) σ_y(t2)), with |ρ_xy(t1,t2)| ≤ 1
Statistical Characterization of Random Processes:
• Random processes are characterized by the joint distribution (or density) of their samples:
F_x(x1, x2, …, xk; t1, …, tk) = Pr[x(t1) ≤ x1, …, x(tk) ≤ xk]
• F(.) is highly complex to compute and difficult or impossible to obtain in practice.
Independent, Identically Distributed (I.I.D.) Random Process:
A random process is said to be:
• An independent process (i.e., independent of itself at earlier and/or later times) if for any ti:
F_x(x1, x2, …, xk; t1, …, tk) = F_1(x1; t1) ⋯ F_k(xk; tk)
• An I.I.D. process if, in addition, all RVs have the same pdf f_x(x).

Note: I.I.D. processes have no memory (no future value depends on past values); they can be viewed as building blocks for more realistic random processes.

• Mean of an I.I.D. process: m_x(t) = E{x(t)} = m_x, a constant independent of t (all samples share the same pdf).
Independent, Identically Distributed (I.I.D.) RP, cont'd

Autocovariance of an IID RP:
C_x(t1,t2) = E{(x(t1) − m_x(t1))(x(t2) − m_x(t2))*}
= E{x(t1) − m_x(t1)} E{(x(t2) − m_x(t2))*} = 0 for t1 ≠ t2 (by independence)
= E{|x(t1) − m_x(t1)|²} = σ_x² for t1 = t2
so that C_x(t1,t2) = σ_x² δ(t1 − t2).

Autocorrelation of an IID RP:
R_x(t1,t2) = C_x(t1,t2) + m_x m_x* = σ_x² δ(t1 − t2) + |m_x|²
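A quick numerical sanity check of the result above, sketched in Python (the course codes are in MATLAB): for an IID N(0,1) sequence the sample autocovariance should be close to σ² = 1 at zero lag and close to 0 at any nonzero lag.

```python
import random

def sample_autocov(x, lag):
    """Sample autocovariance of the sequence x at the given lag."""
    n = len(x) - lag
    m = sum(x) / len(x)
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n)) / n

rng = random.Random(3)
x = [rng.gauss(0.0, 1.0) for _ in range(100_000)]  # IID N(0,1) samples

c0 = sample_autocov(x, 0)  # should approach sigma_x^2 = 1
c5 = sample_autocov(x, 5)  # distinct times: should approach 0
```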
Consider x(t,ξ) = 0.05t + ξ, where ξ ~ N(0,1).
Is this an I.I.D. process? Compute E[x(t,ξ)].
Stationarity Concept:

Definition: a RP is said to be stationary if any joint density or distribution function depends only on the spacing between time instants, not on where on the timeline the time instants occur:
f_x(x1, …, xN; t1, …, tN) = f_x(x1, …, xN; t1+T, …, tN+T) for any ti, T, and any joint pdf.

• If x(t) is stationary for all orders N = 1, 2, …, x(t) is said to be strict-sense stationary.
• If x(t) is stationary for order N = 1: f_x(x;t) = f_x(x;t+T), i.e., the pdf is identical for all time instants t.
• Stationary up to order 2 → wide-sense stationary (WSS).
Stationarity of order N=1 - Physical interpretation for a discrete process

The experiment is performed P times, which leads to P time sequences x(n,ξ1), …, x(n,ξP).

[Figure: the P realizations x(n,ξ1), …, x(n,ξP) plotted versus n, with the threshold level x1 marked at time n1]

How to compute F_x(x1; n1) = Pr[x(n1) ≤ x1] (the probability that the functions x(n,ξ) do not exceed x1 at time n1):
• Select values for x1 and n1.
• Count the number of trials K for which x(n1) ≤ x1.
• F_x(x1; n1) = Pr[x(n1) ≤ x1] = K/P. [11]
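The K/P recipe above can be written directly in code. The following Python sketch (the helper name `empirical_cdf` is invented here; the slides use MATLAB) runs P trials of an Example-1-style process and counts the trials that stay below the threshold x1 at time n1:

```python
import math
import random

def empirical_cdf(trials, n1, x1):
    """F_x(x1; n1) ~= K/P: the fraction of the P recorded trials whose
    value at time n1 does not exceed the threshold x1."""
    K = sum(1 for trial in trials if trial[n1] <= x1)
    return K / len(trials)

# P trials of the Example-1-style process x(n, xi) = xi*cos(pi*n/10), xi ~ U[0,1]
rng = random.Random(4)
P = 50_000
trials = [[rng.random() * math.cos(math.pi * n / 10) for n in range(20)]
          for _ in range(P)]

# At n1 = 0, x(0) = xi ~ U[0,1], so F_x(0.5; 0) should be close to 0.5
F = empirical_cdf(trials, n1=0, x1=0.5)
```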
Stationarity of order N=2 - Physical interpretation for a discrete process

The experiment is performed P times, which leads to P time sequences.

[Figure: the P realizations x(n,ξ1), …, x(n,ξP) plotted versus n, with thresholds x1 at time n1 and x2 at time n2 marked]

How to compute F_x(x1, x2; n1, n2) = Pr[x(n1) ≤ x1, x(n2) ≤ x2] (the probability that the functions x(n,ξ) do not exceed x1 at time n1 and x2 at time n2):
• Select values for x1, x2, n1, n2.
• Count the number of trials K for which x(n1) ≤ x1 and x(n2) ≤ x2.
• F_x(x1, x2; n1, n2) = K/P. [11]
Wide-Sense Stationarity Concept:

Definition: a RP x(t) is called wide-sense stationary (WSS) if
(1) the mean is a constant independent of the time t: E{x(t)} = m_x(t) = m_x
(2) the autocorrelation depends only on the time lag τ = t1 − t2:
R_x(t1,t2) = R_x(t1 − t2), i.e., R_x(τ) = E{x(t) x*(t − τ)}

Consequence: the variance is a constant independent of t:
σ_x²(t) = E{(x(t) − m_x(t))²} = E{(x(t) − m_x)²} = R_x(0) − m_x² = σ_x²
Correlation Function Properties for WSS x(t)
(1) Conjugate symmetry: R_x(τ) = R_x*(−τ)
(2) R_x(τ) is maximum at τ = 0 and R_x(0) > 0 (can we have R_x(0) = 0?)
(3) R_x(τ) measures the "predictability" of the RP (or the memory present in the process).
Example 4 - A RP consists of 4 possible sample functions occurring with equal likelihood:
x1(t) = 1, x2(t) = 4, x3(t) = cos(t), x4(t) = sin(t)
1) Find the mean and correlation function.
2) Is the RP WSS?
Wide-Sense Stationarity, cont'd

Definition: x(t) and y(t) are said to be jointly wide-sense stationary if:
1) x(t) and y(t) are each WSS
2) R_xy(t1,t2) = R_xy(t1 − t2)

Consequence: when x(t) and y(t) are jointly WSS (with τ = t1 − t0):
R_xy(t1,t0) = R_xy(t1 − t0) = R_xy(τ) = E{x(t1) y*(t0)}
C_xy(t1,t0) = C_xy(t1 − t0) = C_xy(τ)
C_xy(τ) = E{(x(t) − m_x)(y(t − τ) − m_y)*} = R_xy(τ) − m_x m_y*
Wide-Sense Stationarity, cont'd

• Cross-correlation/covariance function properties (counterparts of the conjugate-symmetry property seen earlier):
R_xy(τ) = R_yx*(−τ)
C_xy(τ) = C_yx*(−τ)

• Coherence (normalized covariance, also called normalized correlation coefficient) function, defined as
ρ_x(τ) = C_x(τ)/σ_x², with |ρ_x(τ)| ≤ 1,
measures the predictability of the RP (easier to judge than by using R_x(τ)) since it is a bounded quantity.
Wide-Sense Stationarity, cont'd

• Normalized cross-correlation function, defined as
ρ_xy(τ) = C_xy(τ)/(σ_x σ_y), with |ρ_xy(τ)| ≤ 1,
measures the amount of common information between 2 RPs delayed relative to each other by time lag τ.
This concept is used as the basis for radar detection schemes.
Example 5 - x(t,ξ) = exp[j(πt/10 + ξ)], where ξ ~ U[0,2π];
y(t,ξ′) = exp[j(πt/10 + ξ′)], where ξ′ ~ U[0,π].
1) Compute R_xy(t1,t2), assuming ξ and ξ′ are independent.
2) Are the processes jointly WSS?
Random Process (time) Average and Ergodicity:

• In many applications only one realization of a RP is available.
• In general, one single member doesn't provide information about the statistics of the process,
except when the process is stationary and ergodic: then statistical information CAN be derived from one realization of the RP, i.e., from time averages.
• Def: a RP is called ergodic if all ensemble averages equal all corresponding time averages.
• Def: a RP is said to be ergodic in the mean if:
E[x(t,ξ)] = <x(t,ξ)>, where <x(t,ξ)> = lim(t0→∞) 1/(2t0) ∫ from −t0 to t0 of x(t,ξ) dt
• Def: a WSS RP is said to be ergodic in correlation at lag τ if:
R_x(τ) = lim(t0→∞) 1/(2t0) ∫ from −t0 to t0 of x(t,ξ) x*(t − τ,ξ) dt
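A small Python illustration of the distinction (assumptions: white Gaussian noise stands in for a stationary ergodic process, and the DC process previews the setup of Example 6):

```python
import random

rng = random.Random(5)

# One long realization of zero-mean white (IID) noise: stationary AND ergodic,
# so the time average of a single realization approaches the ensemble mean 0.
N = 200_000
x = [rng.gauss(0.0, 1.0) for _ in range(N)]
time_avg = sum(x) / N

# A "DC voltage" process x(t, xi) = xi, xi ~ U[0, 10]: each realization is a
# constant waveform, so its time average equals the drawn value xi, not the
# ensemble mean E{xi} = 5. Stationary, but NOT ergodic in the mean.
xi = rng.uniform(0.0, 10.0)
dc_time_avg = xi  # time average of the constant realization
```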
Ergodicity, cont'd
A process can be stationary and NOT ergodic.
Example 6 - Assume a RP x(t,ξ) which is a DC voltage waveform, where the pdf of the voltage is U[0,10].
1) Plot several possible trials of the RP.
2) Is the process WSS?
3) Is the process ergodic in the mean?
Specific Random Processes

Example 7 - Binary signal
Assume the RP x(t) takes values ±1 with equal probability. Assume:
- the average number of switches per unit time is α;
- the probability of exactly k switches in an interval of length t is Poisson distributed:
P(Z = k) = e^(−αt) (αt)^k / k!
- x(0) = 1, to get started.
1) Plot a possible trial of the RP.
2) Compute the mean and correlation function.
See derivations in Appendix A.
White Noise RP
Definition: A RP w(t) is called a white noise process with mean 0 and variance σ_w² iff
E{w(t)} = 0 and R_w(τ) = σ_w² δ(τ).
Notes:
1) All frequencies contribute the same amount (as in the case of white light, hence the name "white noise").
2) There is no constraint on the pdf. If the pdf of w(t) is Gaussian, the process is called "white Gaussian noise".
x=randn(1200,1);
[rx,lags]=xcorr(x,50,'biased');
figure
subplot(211),plot(x),title('White Gaussian noise')
xlabel('Sample number')
subplot(212),plot(lags,rx)
Colored Noise RP
Definition: A non-periodic random noise sequence w(t) is called a colored noise process if R_w(τ) is not zero for τ ≠ 0 and
lim(τ→∞) R_w(τ) = 0.
Notes:
1) All frequencies do NOT contribute the same amount (as was the case for white noise).
2) There is no constraint on the pdf. If the pdf of w(t) is Gaussian, the process is called "colored Gaussian noise".
3) Colored noise can easily be generated by passing white noise through a filter.
x=randn(1200,1);
h=(1/30.)*ones(30,1);
y=filter(h,1,x); % basic averaging filter
[ry,lags]=xcorr(y,50,'biased');
subplot(211),plot(y),title('Colored Gaussian noise')
xlabel('Sample number')
subplot(212),plot(lags,ry)
title('Correlation sequence'),xlabel('Lag number')

How do we justify the shape of the correlation plot?
Example 8 - Consider the RP x(t) shown below. Check whether the process is ergodic in the mean and/or ergodic in correlation.
x1(t) = K, with probability P1 = 1/2
x2(t) = −K, with probability P2 = 1/2
Bernoulli Random Process: a binary sequence with independent samples

x(n) = +1 with probability P
x(n) = −1 with probability (1 − P)
For P = 1/2 the process is called binary white noise.

• Probabilistic description (by independence):
Pr(x(0) = 1, x(1) = 1, x(2) = −1) = P · P · (1 − P) = P²(1 − P)

• Mean: m_x = P − (1 − P) = 2P − 1
• Variance: σ_x² = 1 − (2P − 1)² = 4P(1 − P)
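A Python sketch of the process (the function name `bernoulli_process` is made up for this illustration; the course codes are in MATLAB) that also checks the mean and variance expressions quoted above:

```python
import random

def bernoulli_process(n, P, seed=6):
    """Generate n samples of the +/-1 Bernoulli process: +1 w.p. P, -1 w.p. 1-P."""
    rng = random.Random(seed)
    return [1 if rng.random() < P else -1 for _ in range(n)]

P = 0.7
x = bernoulli_process(200_000, P)
mean = sum(x) / len(x)                           # theory: 2P - 1 = 0.4
var = sum((v - mean) ** 2 for v in x) / len(x)   # theory: 4P(1 - P) = 0.84
```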
Random Walk Random Process

• Consider a sequence of I.I.D. RVs {Xi}.
• Define
S(n) = Σ (k=1 to n) X(k), n = 1, 2, …
S(n) = S(n−1) + X(n) ← sum process
M(n) = (1/n) S(n) ← mean process
• The process S(n) is called a simple random walk when Xi = ±1 (Bernoulli RVs).
• When P = 1/2 and Xi = ±1 (i.e., for a Bernoulli process): discrete Wiener process. It turns out that
E[x(n)] = 0 and var(x(n)) = n.
• Is this a WSS process?
RP Example - Random Walk, cont'd
Sequence of I.I.D. RVs {Xi} and S(n) = X1 + X2 + … + Xn, n = 1, 2, …

• Property: S(n) has independent increments in non-overlapping time intervals. For n1 < n2 < n3:
S(n2) − S(n1) = (X1 + … + X_n2) − (X1 + … + X_n1) = X_{n1+1} + … + X_n2
S(n3) − S(n2) = X_{n2+1} + … + X_n3
The two increments involve disjoint subsets of the independent Xi's.
Random Walk, cont'd - General Character
• Tends to have long runs of positive and negative values.
• The length of the runs increases with increasing time; the local behavior remains the same.

s = rand(1,10000);
r = cumsum(((s > 0.5)*2) - 1);

Random walk applications are found in economics (to model share prices), physics (to model the random movement of molecules in liquids and gases), vision science (to describe eye movements), and psychology (to explain the relation between the time needed to make a decision and the probability that a certain decision will be made).
Example 9 - You are given the simple random walk process S(n). Compute P[S(n) = −1] after n = 3 steps.
RP Example – Counting Discrete Random Process

• Consider a sequence of I.I.D. RVs {Xi}.
• Define
S(n) = Σ (k=0 to n) X(k), n = 0, 1, …
where X(k) is a Bernoulli RP with X(k) = 1 with probability p and X(k) = 0 with probability 1 − p.
• S(n) counts the occurrences of events, or number of events, that occur up to time n.
• Property: S(n) has independent increments in non-overlapping time intervals.
RP Example – Counting Discrete Random Process, cont'd
Example: Define N(t) as the number of packets arriving at a server up to time t; then {N(t)} is a counting process in which the "event" parameter is defined as the number of packets arriving at the server up to time t.
Note that N(t) defined as the number of packets arriving at the server AT time t is NOT a counting process.
RP Example – Counting Random Process, cont’, examples
RP Example – Counting Random Process
• Useful for modeling events occurring in time.
• Counts the number of events N(t) in the time interval [0,t].
• Widely used in physics (distribution of radioactive counts), computer networking (server requests), etc.
[Figure: staircase plot of N(t) versus t with arrival times t1, …, t6 and inter-arrival times zi = ti − ti−1 marked]
RP Example – Counting Random Process, cont'd

Process properties: N(t) is defined as the number of events in [0,t], with:
• N(t) ≥ 0, N(0) = 0
• N(t) takes only integer values and is monotonically increasing
• N(t) − N(s) represents the number of events in the interval [s,t]
• N(t) has independent increments in non-overlapping time intervals (i.e., the numbers of events in any 2 non-overlapping time intervals are independent RVs)
RP Example – Poisson (Counting) Random Process

• The Poisson RP can be viewed as a special case of the counting random process in which the number of events in any interval of length T follows a Poisson pdf with mean λT, where λ is defined as the arrival rate (events or arrivals per unit time):
P[N(t1) − N(t0) = k] = e^(−λ(t1−t0)) [λ(t1−t0)]^k / k!, k = 0, 1, 2, …

Process properties: recall that N(t) is defined as the number of events in [0,t], and
• N(t) ≥ 0, N(0) = 0
• N(t) takes only integer values and is monotonically increasing
• N(t) has independent increments in non-overlapping time intervals
• plus: the number of events (i.e., arrivals) in any interval of length T is a Poisson RV with mean λT.

Caution: the specified "unit time" is important.
RP Example – Poisson (Counting) Random Process, cont'd

• Inter-arrival times in a Poisson process:
The inter-arrival times zi in a Poisson process form an I.I.D. sequence with exponential pdf of rate λ, i.e.,
f_z(z) = λ e^(−λz) u(z) ⇒ E[z] = 1/λ, σ_z² = 1/λ²
[Figure: staircase plot of N(t) with inter-arrival times z2, z3, z4 and arrival time x4 marked]
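This property suggests a simple way to simulate the process: accumulate Exp(λ) inter-arrival times until the horizon T is passed. A Python sketch (not from the slides) checking E[N(T)] = Var[N(T)] = λT:

```python
import random

def poisson_count(lam, T, rng):
    """Number of arrivals in [0, T] when inter-arrival times are Exp(lam)."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)  # inter-arrival time z_i, mean 1/lam
        if t > T:
            return n
        n += 1

rng = random.Random(7)
lam, T = 2.0, 5.0
counts = [poisson_count(lam, T, rng) for _ in range(20_000)]
mean_N = sum(counts) / len(counts)                            # theory: lam*T = 10
var_N = sum((c - mean_N) ** 2 for c in counts) / len(counts)  # theory: lam*T = 10
```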
RP Example – Poisson (Counting) Random Process, cont'd

• Arrival times in a Poisson process:
The arrival time Xn (i.e., the waiting time of the nth event) is defined as the sum of the first n inter-arrival times Zi, each with mean 1/λ:
Xn = Z1 + … + Zn
Xn has a Gamma(n,λ) pdf equal to
f(xn) = λ (λ xn)^(n−1) e^(−λ xn) / (n−1)!, xn ≥ 0
E[xn] = n/λ, var(xn) = n/λ²

Note: using CLT results, the Gamma pdf tends towards a Normal pdf when n is large.
RP Example – Poisson (Counting) Random Process, cont'd

Using the Poisson process pdf properties leads to:
• E[N(t)] = λt
• Var[N(t)] = λt
• R_x(t,s) = λ min(t,s) + λ² ts
• C_x(t,s) = λ min(t,s)

• N(t) has independent increments in non-overlapping time intervals, which gives, for k0 ≤ k1:
P[N(t1) = k1 | N(t0) = k0] = e^(−λ(t1−t0)) [λ(t1−t0)]^(k1−k0) / (k1−k0)!
where λ is the rate of the process.
RP Example – Poisson (Counting) Random Process properties

• The joint probability of k1 and k0 occurrences at times t1 and t0, respectively (k0 ≤ k1), is defined as:
P[N(t1) = k1, N(t0) = k0] = P[N(t1) = k1 | N(t0) = k0] × P[N(t0) = k0]
= e^(−λ(t1−t0)) [λ(t1−t0)]^(k1−k0) / (k1−k0)! × e^(−λt0) (λt0)^(k0) / k0!
RP Example – Poisson (Counting) Random Process properties, cont'd

• Theorem: Let N1(t) and N2(t) be 2 independent Poisson processes with rates λ1 and λ2. The process N(t) = N1(t) + N2(t) is a Poisson process with rate λ = λ1 + λ2.
Given an arrival of the merged process N(t), the conditional probability that the arrival comes from N1(t) is λ1/(λ1 + λ2).
RP Example – Poisson (Counting) Random Process properties, cont'd

• Theorem (Bernoulli decomposition of a Poisson process: breaking down a process into 2 separate counting processes):
Let N(t) be a Poisson process with rate λ. Assume that whenever the Poisson process N(t) has an arrival, we decide it is either a type 1 arrival with probability p or a type 2 arrival with probability (1 − p).
This results in 2 independent Poisson processes with rates λp and λ(1 − p). (See proof in Appendix A)
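The decomposition theorem can be sanity-checked by simulation. A Python sketch (the helper name `split_poisson` is invented here) thins a rate-λ stream with probability p and compares the average type-1/type-2 counts against λpT and λ(1−p)T:

```python
import random

def split_poisson(lam, p, T, num_trials=20_000, seed=8):
    """Simulate a rate-lam Poisson stream on [0, T]; mark each arrival type 1
    with probability p, type 2 otherwise. Return the mean counts per trial."""
    rng = random.Random(seed)
    tot1 = tot2 = 0
    for _ in range(num_trials):
        t = 0.0
        while True:
            t += rng.expovariate(lam)   # exponential inter-arrival gap
            if t > T:
                break
            if rng.random() < p:        # Bernoulli decision per arrival
                tot1 += 1
            else:
                tot2 += 1
    return tot1 / num_trials, tot2 / num_trials

lam, p, T = 4.0, 0.25, 2.0
m1, m2 = split_poisson(lam, p, T)
# Theory: E[N1(T)] = lam*p*T = 2 and E[N2(T)] = lam*(1-p)*T = 6
```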
Example 10
A web server records webpage requests as a Poisson process of 10 hits/sec. Each request is either for
- an internal webpage, with probability 0.7, or
- an external webpage, with probability 0.3.
1. Compute the joint PMF of internal and external requests over a 1-minute interval.
2. Compute the expected number of internal and external requests over a 1-minute interval.
3. Compute the probability of observing more than 1 hit in the first 1 millisecond.
Example 11
Network packets arrive at a server according to a Poisson process at a rate equal to 20 packets per second. For some reason, packets do not get processed until at least three packets have arrived at the server.
1) Identify the arrival rate λ.
2) Compute the expected waiting time until the first packet is processed.
3) Compute the probability that no packet is processed in a 1-minute interval.
Example 12
Specific rare independent signal anomalies you are interested in tracking are detected at a Poisson rate of 1.2/day from a specific receiver.
1. Compute the expected waiting time until the 100th detected anomaly.
2. Compute the probability that the elapsed time between the 100th detected anomaly and the next detected one exceeds 2 days.
3. What is the probability that the 100th anomaly will be detected after 75 days? (Note: using the CLT may simplify computations.)
RP Example – Moving Average (MA) Random Process

x(n) = Σ (p=0 to N) a_p s(n − p), where s(n) is zero-mean white noise.

• Compute m_x(n) and R_x(n0,n1).
• Is x(n) WSS?
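Before working the exercise analytically, it can help to see the process numerically. A Python sketch (the MA coefficients a_p below are hypothetical, chosen only for illustration; the course codes are in MATLAB) generates x(n) and estimates R_x at a few lags; the estimates should vanish for lags beyond the filter memory N:

```python
import random

rng = random.Random(9)
a = [1.0, 0.5, 0.25]  # hypothetical MA coefficients a_p (illustration only)
s = [rng.gauss(0.0, 1.0) for _ in range(100_000)]  # zero-mean white noise s(n)

# x(n) = sum_p a_p * s(n - p)
N = len(a) - 1
x = [sum(a[p] * s[n - p] for p in range(len(a))) for n in range(N, len(s))]

def sample_autocorr(x, lag):
    """Sample estimate of R_x(lag) for a zero-mean sequence."""
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

r0 = sample_autocorr(x, 0)  # theory: sum_p a_p^2 = 1.3125
r1 = sample_autocorr(x, 1)  # theory: a0*a1 + a1*a2 = 0.625
r5 = sample_autocorr(x, 5)  # lag beyond the filter memory N: ~ 0
```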
Periodic Random Process Properties

• If x(t) is periodic: x(t) = x(t + T).
• Mean: m_x(t) = E[x(t)] = E[x(t + kT)] = m_x(t + kT)
• Correlation/covariance for a WSS RP: R_x(t1,t2) = R_x(t1 − t2), with
R_x(t) = R_x(t + kT)
C_x(t) = C_x(t + kT)
• Example 13 - x(t) = A exp(j(ωt + θ)), θ ~ U[0,2π], A real.
Compute R_x(τ) and m_x(t).
Example 14 - y(t) = s(t) + w(t), where s(t) = A exp(j(ωt + θ)), θ ~ U[0,2π], w(t) is zero-mean white WSS noise, and w(t) & s(t) are independent.
Compute R_y(τ) and m_y(t).
Example 15 - y(t) = A cos(ωt + θ), θ ~ U[0,2π]. Take advantage of the results shown earlier to compute R_y(τ) and m_y(t).
Uncorrelated Random Process

- A RP is said to be uncorrelated if
C_x(t1,t2) = E{(x(t1) − m_x(t1))(x(t2) − m_x(t2))*}
= E{|x(t1) − m_x(t1)|²} = σ_x²(t1) for t1 = t2, and 0 for t1 ≠ t2,
i.e., C_x(t1,t2) = σ_x²(t1) δ(t1 − t2),
or equivalently
R_x(t1,t2) = C_x(t1,t2) + m_x(t1) m_x*(t2) = σ_x²(t1) δ(t1 − t2) + m_x(t1) m_x*(t2)
Cyclostationary Process

- A RP is said to be wide-sense (w.s.) cyclostationary if ∃ T such that
m_x(t) = m_x(t + T), ∀ t
R_x(t1,t2) = R_x(t1 + T, t2 + T)

The signal statistics vary periodically with time, which leads to correlation between areas of the signal spectrum. Note: the signal itself is NOT necessarily periodic.

Examples of w.s. cyclostationary processes:
* DSB-AM signal: x(t) = A(t) cos(ω0 t), where A(t) is a stationary RP and ω0 is a constant.
* OFDM signals.
Cyclostationary properties, cont'd

• The cyclostationarity property is taken advantage of in cognitive radio (CR) detection applications:
- pilot symbols used in OFDM applications exhibit periodic behavior, resulting in cyclostationary signal behavior;
- noise usually doesn't exhibit periodic behavior;
- the difference between signal and noise behavior is taken advantage of to extract OFDM signal characteristics.
Multiple Random Processes Joint Properties

• Two RPs x(t) and y(t) are said to be statistically independent (of each other) if for all values t1 and t2:
f_xy(x, y; t1, t2) = f_x(x; t1) f_y(y; t2)
or equivalently, for each choice of t1 and t2, the RVs x(t1) and y(t2) are independent.

• Two RPs x(t) and y(t) are said to be uncorrelated if for all values t1 and t2:
R_xy(t1,t2) = E{x(t1) y*(t2)} = E{x(t1)} E{y*(t2)}
which is equivalent to:
C_xy(t1,t2) = E{(x(t1) − m_x(t1))(y(t2) − m_y(t2))*} = 0

• 2 RPs x(t) and y(t) independent of each other ⇒ uncorrelated.
Multiple Random Processes Joint Properties, cont'd

• Two RPs x(t) and y(t) are said to be jointly Gaussian RPs if, for any choice of ti and si, the random vectors [x(t1), …, x(tn)] and [y(s1), …, y(sn)] are jointly Gaussian.
• If x(t) and y(t) are jointly Gaussian and uncorrelated RPs ⇒ they are independent.
Example 16 - Let x(t) and y(t) be 2 RPs generated as:
x(t) = αt & y(t) = α²t, where α ~ N(0,1).
a. Find the means m_x(t) and m_y(t).
b. Are x(t) & y(t) WSS?
c. Are x(t) & y(t) uncorrelated? Independent?
Discrete case: Data Analysis Application - How can we assess whether data is WSS?

1) Consider the environment that produced the data.
2) Check whether basic properties of the signal change with time or not: compute and track changes in m_x(t) & var_x(t).

Investigate changes in mean or variance:
- If changes occur, the process is not WSS.
How do we decide there is a change? (visually or via statistical tests)
- Visually.
- Statistical tests: two-sample tests for equal means and equal variances over small non-overlapping data blocks (independence between samples is required for the tests).
[Figure: a WSS data example vs. a non-WSS data example]
- The tests can be implemented over short-time windows in MATLAB using:
• ttest2.m (uses the t-distribution)
• vartest2.m (uses the F-distribution)
- They require the selection of a level of significance α (usually picked around 5 to 10%), which models how likely the user is to decide that two successive data blocks have different estimated means (for ttest2.m) or variances (for vartest2.m) when they are actually equal.
- The tests are sensitive to block lengths (useful only when the data set is large enough…).
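The idea behind ttest2.m can be sketched without MATLAB. The following Python code computes the Welch two-sample t statistic (a simplified stand-in for illustration, not MATLAB's implementation, which also returns a p-value from the t-distribution): blocks with equal means give a small |t|, while a mean shift gives a large |t|.

```python
import math
import random

def welch_t(a, b):
    """Welch two-sample t statistic for comparing two block means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((v - ma) ** 2 for v in a) / (na - 1)  # unbiased block variances
    vb = sum((v - mb) ** 2 for v in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

rng = random.Random(10)
block1 = [rng.gauss(0.0, 1.0) for _ in range(2000)]
block2 = [rng.gauss(0.0, 1.0) for _ in range(2000)]  # same mean: |t| stays small
block3 = [rng.gauss(1.0, 1.0) for _ in range(2000)]  # shifted mean: |t| is large

t_same = welch_t(block1, block2)
t_diff = welch_t(block1, block3)
```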
Example 17: You collected data from 2 thermal sensors X and Y. The data collected for each is contained in the matrix DATA = [X, Y]. Can each data set be considered WSS? (Pack2Data1.mat)

MATLAB hint: the MATLAB function Pack2Example16Template.m provides you with a shell code to compute short-term statistics defined over overlapping data segments. Use it if you find it useful.
Data Analysis Application - How do we know if the I.I.D. assumption is valid?

1) Inspect the autocorrelation plot.

EXAMPLE [Ref 6, Ex. 2.18]: CPU DATA. Execution times for n = 7632 consecutive requests are measured and displayed in the upper left panel. Initial testing indicates the data appears stationary and roughly normal, so the autocorrelation function can be used to test independence.
• Use the normalized correlation ρ_x(k). The plot in the lower panel shows a strong correlation → the data is not independent.
• Assume you are interested in extracting an IID sequence out of this data. How would you do so? Try sub-sampling (see later slides).
How do we know the IID assumption is valid? cont'd

2) Inspect the "lag plot".
Def: plot x(n) versus x(n + lag) for different values of "lag".
The lag plot checks whether a data set or time series is independent or not. Random data does not exhibit any identifiable structure in the lag plot; non-random structure in the lag plot indicates that the underlying data may be correlated (i.e., not independent). [Ref 8]
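The structure a lag plot reveals can be quantified by correlating the plotted point cloud. A Python sketch (helper names `lag_pairs` and `pair_corr` are made up here) compares an IID sequence against a random walk:

```python
import math
import random

def lag_pairs(x, lag):
    """The (x(n), x(n+lag)) pairs that a lag plot would display as points."""
    return [(x[n], x[n + lag]) for n in range(len(x) - lag)]

def pair_corr(pairs):
    """Correlation coefficient of the lag-plot point cloud."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
    sx = math.sqrt(sum((p[0] - mx) ** 2 for p in pairs))
    sy = math.sqrt(sum((p[1] - my) ** 2 for p in pairs))
    return sxy / (sx * sy)

rng = random.Random(11)
iid = [rng.gauss(0.0, 1.0) for _ in range(20_000)]
walk, s = [], 0.0
for _ in range(20_000):            # random walk: cumulative sum of +/-1 steps
    s += rng.choice((-1.0, 1.0))
    walk.append(s)

c_iid = pair_corr(lag_pairs(iid, 1))    # ~0: no structure in the lag plot
c_walk = pair_corr(lag_pairs(walk, 1))  # ~1: strong diagonal structure
```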
Example 18: Evaluate the "lag plot" obtained for a random walk sequence. Is this an IID process?
Data Analysis Application - How do we extract an IID sequence from a non-IID sequence?

Try sub-sampling…

EXAMPLE [Ref 6, Ex. 2.18]: CPU DATA (same data as before). The normalized correlation plot in the lower left panel shows a strong correlation, so the data is not independent. To extract an IID sequence out of this data, try sub-sampling.
• Random sub-sampling example [Ref. 6]: after sub-sampling, all the ρ_x(k)'s are "small" enough!
How do we extract an I.I.D. sequence? cont’
How is sub-sampling implemented?
- Basic N-level sub-sampling may be implemented by picking every Nth sample. However, this may result in aliasing in some cases (why is that? Hint: think about what decimating does to the signal in the frequency domain; see plots on the next page).
- A better approach introduces randomness in the picking task. The sub-sampled data is obtained with the following random sub-sampling scheme: for every index i = 1, ..., n, decide with probability p = 1/2 whether the point is kept. This gives the second plot on the figure. The process is then repeated, giving sub-sampled data for p = 1/2 down to 1/2^7 = 1/128.
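The scheme above can be sketched as follows (Python/numpy; the AR(1) data stands in for the CPU measurements, and all parameters are illustrative):

```python
import numpy as np

def random_subsample(x, rng, p=0.5):
    """One pass of random sub-sampling: keep each point independently w.p. p."""
    keep = rng.random(len(x)) < p
    return np.asarray(x)[keep]

def normalized_acf(x, maxlag):
    """rho_x(k), k = 0..maxlag (biased estimate, mean removed)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(maxlag + 1)])
    return r / r[0]

rng = np.random.default_rng(1)
# Illustrative correlated data: AR(1), x(n) = 0.9 x(n-1) + v(n)
v = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(1, 5000):
    x[n] = 0.9 * x[n - 1] + v[n]

sub = x
for _ in range(7):            # p = 1/2 down to 1/2**7 = 1/128
    sub = random_subsample(sub, rng)

print(normalized_acf(x, 1)[1])    # strong lag-1 correlation (close to 0.9)
print(normalized_acf(sub, 1)[1])  # much smaller after random sub-sampling
```

Because the surviving points sit at large, random distances in the original index, the correlation between consecutive kept samples is driven toward zero.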
• Comparisons between deterministic/random sub-sampling
• Comparisons between deterministic/random sub-sampling, cont’
Is sub-sampling always the solution to removing correlation? Maybe not! [Ex 2.19, Ref 6] shows the number of bytes transferred over an Ethernet LAN (360,000 points) and illustrates long-range dependent data: the correlation coefficients never get small enough.
The concept of confidence interval is used to evaluate coefficient sizes; see Appendix D for details.
Example 19: You are given measurements collected by sensors x and y (Pack2Data3.mat).
1) Using the correlation function xcorr.m, evaluate whether the measurements obtained for each sensor are correlated or not. Hints: a) use [xcor,lags]=xcorr(x,maxlag,'coeff'); this will ensure you can plot the correlation coefficients over the proper range of correlation lags; b) use a relatively small number of lags, around 50 or less, to start, so that you can see what happens around lag 0.
2) Using the MATLAB function lagplot.m, plot lag plots at various lags for x & y, and evaluate whether the measurements obtained for x are correlated or not. Repeat for the measurements contained in y.
3) Can you estimate the maximum lag at which the data is correlated for each sensor?
4) Assume you want to generate an IID sequence out of y. Explain a) how you can generate such a sequence, and b) how you check that the data extracted out of y is IID; c) implement.
MATLAB hint: the function Pack2Example19Template.m provides you with a shell code to compute a randomly sub-sampled sequence by a factor of 2. Use it if you find it useful.
Application – Radar Target Detection (cross-correlation application)

Target: assume y(t) = x(t − t0)
$$R_{yx}(\tau) = E\{y(t)\,x^*(t-\tau)\} = \;?$$
Now assume y(t) = x(t − T):
$$R_{xy}(\tau) = E\{x(t)\,y^*(t-\tau)\} = \;?$$
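A minimal numeric sketch of the idea (Python/numpy; the delay t0 = 40 samples and the noise level are made-up values): the cross-correlation of the received against the transmitted signal peaks at the round-trip delay.

```python
import numpy as np

rng = np.random.default_rng(2)
t0 = 40                                  # hypothetical round-trip delay (samples)
x = rng.standard_normal(1000)            # transmitted signal (white, for a sharp peak)
y = np.roll(x, t0)                       # received echo: y(n) = x(n - t0)
y[:t0] = 0.0                             # nothing received before the echo arrives
y += 0.5 * rng.standard_normal(1000)     # additive receiver noise

# Cross-correlation estimate of R_yx(k) ~ E{y(n) x(n-k)}: peaks at k = t0
lags = np.arange(0, 200)
ryx = np.array([np.dot(y[k:], x[:len(x) - k]) for k in lags]) / len(x)

print(lags[np.argmax(ryx)])  # estimated delay (recovers t0 = 40)
```

The peak location directly gives the target range in samples; multiplying by the sample period and the propagation speed (and dividing by 2 for the round trip) converts it to distance.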
Brief introduction to the Sliding Window FT (spectrogram)
The window is usually incremented by a fraction of its length (25 to 75% overlap).

[Figure: a window w[m] slides along the signal; an FT of each windowed segment forms one column of the time (x-axis) vs. frequency (y-axis) plane]
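The sliding-window FT is a few lines to prototype directly (Python/numpy sketch; window length, hop, and the test chirp are illustrative choices, and the course exercises use MATLAB's spectrogram.m instead):

```python
import numpy as np

def spectrogram(x, win_len=32, hop=16):
    """Sliding-window FT: FFT of successive overlapping windowed
    segments; rows are frequency bins, columns are time frames."""
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T ** 2   # power, freq x time

# Linear chirp: instantaneous frequency sweeps 0 -> 0.4 cycles/sample (fs = 1)
n = np.arange(4096)
x = np.cos(2 * np.pi * (0.2 * n / 4096) * n)
S = spectrogram(x)

# For a chirp, the ridge (argmax over frequency) rises with time
ridge = S.argmax(axis=0)
print(ridge[5], ridge[-5])   # early frame low, late frame high
```

The 50% hop used here corresponds to the "window incremented by a fraction" remark above; a larger overlap gives a smoother time axis at higher cost.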
[Figure: linear chirp – time domain (left) and its spectrogram (right); normalized frequency axis, fs = 2]
Spectrogram, cont’
[Figure: chirp spectrograms at a low noise level and at a high noise level]
Example 20: Assume you send the chirp signal x(t). You turn your receiver on at the time you send x(t) and leave it on until you receive y(t). Assume the sampling frequency equals 1 Hz.
You have two scenarios to investigate: high and low SNR received signals obtained by sending x(t). The received signal in the high-SNR case is yhigh(t); the received signal in the low-SNR case is ylow(t).
- Plot the spectrograms for x(t), yhigh(t), and ylow(t) using the MATLAB function spectrogram.m (please read the help on spectrogram.m before you use it).
  • A good set of starting values for the spectrogram: window length 32, overlap 16, nfft = 2048.
  • Use Fs = 1 and the 'yaxis' option so that the spectrogram is plotted with time on the x-axis.
- Compute and plot the cross-correlation sequences between sent and received signals.
- Estimate the target distance in number of samples for both cases (x, ylow & yhigh in Pack2Data2.mat).
Application: Gas furnace reaction time – cross-covariance/cross-correlation function application
Example: x1(t) represents the input gas feed rate for a gas furnace; x2(t) represents the output CO2 concentration.
Goal: evaluate how fast the furnace responds to changes in the gas feed rate.
[Box-Jenkins data]
Compute:
$$\rho_{x_2 x_1}(k) = \frac{C_{x_2 x_1}(k)}{\sigma_{x_2}\,\sigma_{x_1}}$$
Minimum at lag = 5.
Question: what is the significance of the minimum?
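The computation can be sketched on synthetic data (Python/numpy; the real exercise uses the Box-Jenkins set, so the 5-sample delay below is planted by construction to mirror the observed minimum):

```python
import numpy as np

def norm_crosscov(x1, x2, maxlag):
    """rho_{x2x1}(k) = C_{x2x1}(k) / (sigma_{x2} sigma_{x1}), k = 0..maxlag."""
    x1 = np.asarray(x1, float) - np.mean(x1)
    x2 = np.asarray(x2, float) - np.mean(x2)
    n = len(x1)
    c = np.array([np.sum(x2[k:] * x1[:n - k]) / n for k in range(maxlag + 1)])
    return c / (np.std(x1) * np.std(x2))

# Synthetic stand-in for the furnace data: the output responds to the
# input with a 5-sample delay and a sign flip (more gas -> less CO2)
rng = np.random.default_rng(3)
x1 = rng.standard_normal(3000)                          # input feed rate (surrogate)
x2 = -0.8 * np.roll(x1, 5) + 0.2 * rng.standard_normal(3000)

rho = norm_crosscov(x1, x2, 20)
print(np.argmin(rho))   # extremum at lag = 5: the reaction time
```

The lag of the cross-covariance extremum is the reaction time in samples, which is exactly what the minimum at lag 5 means for the furnace data.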
Application: Detection of signal periodicity in noisy environments

Property: if the process x(t) is aperiodic and zero-mean, then
$$\lim_{\tau\to\infty} R_x(\tau) = 0$$

Example 21: Assume we have a sinusoidal signal x[n] with uniform random phase φ embedded in wss zero-mean white noise w[n] with variance σ² (signal and noise uncorrelated). The correlation sequence may be used to get information on the properties of the periodic signal. Period N = ?
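A quick numeric sketch of the detection idea (Python/numpy; the period, noise level, and minimum-lag threshold in the search are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
N0 = 25                                   # hypothetical period in samples
n = np.arange(8000)
phi = rng.uniform(0, 2 * np.pi)           # uniform random phase
x = np.cos(2 * np.pi * n / N0 + phi) + 0.5 * rng.standard_normal(len(n))

# Biased autocorrelation estimate
xc = x - x.mean()
R = np.array([np.dot(xc[:len(xc) - k], xc[k:]) / len(xc) for k in range(41)])

# White noise only contributes at lag 0; the periodic part survives at all
# lags, so R(k) is itself periodic. Search for its peak past the small lags
# (assumes the period is larger than 10 samples).
peak = 10 + int(np.argmax(R[10:41]))
print(peak)   # estimated period, close to N0
```

Even though the sinusoid may be invisible in the noisy waveform, the noise collapses onto lag 0 of R(k), leaving the periodic structure exposed at the other lags.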
Example 22: You are given measurements collected from 2 underwater sensors y1 and y2 (Pack2Data4.mat). Assume the sampling frequency equals 1 Hz.
1) Evaluate whether you can extract periodicity information on y1 and y2 using correlation information.
2) Compute the frequency information from y1 and y2 by computing the spectral estimates on a dB scale, i.e., 10·log10(|Y1(f)|²) and 10·log10(|Y2(f)|²), and derive periodicity information on y1 and y2.
3) Compare the information obtained with both approaches. List advantages/limitations of both approaches.
How to compute correlation estimates

• For discrete data: $\mathbf{x} = [x(0), \ldots, x(N-1)]^T$
• Assume $x(t)$ known from $t = 0$ to $t = T_0$, & ergodic (why?)
• Lag: $k$ dimensionless, $kT_s$ in seconds

Unbiased estimator:
$$\hat R_x(k) = \hat R_x(kT_s) = \frac{1}{N-k}\sum_{i=0}^{N-k-1} x(i)\,x(i+k)$$

• Quality of estimate? → find mean and variance of $\hat R_x(k)$

(1) $\displaystyle E\big[\hat R_x(k)\big] = \frac{1}{N-k}\sum_{i=0}^{N-k-1} E\big[x(i)\,x(i+k)\big]$
How to compute correlation estimates, cont'

(2) $\displaystyle \mathrm{Var}\big[\hat R_x(k)\big] \cong \frac{N}{(N-k)^2}\sum_{i=-\infty}^{\infty}\Big[R_x^2(i) + R_x(i+k)\,R_x(i-k)\Big], \quad\text{when } N \gg k$
How to compute correlation estimates, cont'

Alternate estimator: biased estimator
$$\bar R_x(k) = \bar R_x(kT_s) = \frac{1}{N}\sum_{i=0}^{N-k-1} x(i)\,x(i+k)$$

Quality of estimate:
(1) $E\big[\bar R_x(k)\big] = $
(2) $\displaystyle \mathrm{Var}\big[\bar R_x(k)\big] \cong \frac{1}{N}\sum_{i=-\infty}^{\infty}\Big[R_x^2(i) + R_x(i+k)\,R_x(i-k)\Big], \quad k > 0$
Biased/unbiased discrete correlation estimator summary

Unbiased estimator:
$$\hat R_x(k) = \frac{1}{N-k}\sum_{i=0}^{N-k-1} x(i)\,x(i+k)$$
• $E[\hat R_x(k)] = R_x(k)$ ⇒ unbiased
• $\mathrm{Var}[\hat R_x(k)] \to \infty$ when $k \to N$ (the variance blows up at large lags)

Biased estimator:
$$\bar R_x(k) = \frac{1}{N}\sum_{i=0}^{N-k-1} x(i)\,x(i+k)$$
• $E[\bar R_x(k)] = \frac{N-k}{N}\,R_x(k)$ ⇒ biased; the bias of $E[\bar R_x(k)]$ vanishes as $N \to \infty$
• $\mathrm{Var}[\bar R_x(k)] \to 0$ when $N \to \infty$
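The bias difference is easy to check by simulation (Python/numpy sketch; the MA(1) process, which has true R_x(1) = 1, and all parameters are illustrative):

```python
import numpy as np

def acorr_unbiased(x, k):
    """hatR_x(k) = 1/(N-k) sum_{i=0}^{N-k-1} x(i) x(i+k)"""
    N = len(x)
    return np.dot(x[:N - k], x[k:]) / (N - k)

def acorr_biased(x, k):
    """barR_x(k) = 1/N sum_{i=0}^{N-k-1} x(i) x(i+k)"""
    N = len(x)
    return np.dot(x[:N - k], x[k:]) / N

# MA(1) process x(n) = v(n) + v(n-1), v white ~N(0,1): true R_x(1) = 1
rng = np.random.default_rng(5)
N, k, T = 8, 1, 20000
ub = np.empty(T); b = np.empty(T)
for t in range(T):
    v = rng.standard_normal(N + 1)
    x = v[1:] + v[:-1]
    ub[t] = acorr_unbiased(x, k)
    b[t] = acorr_biased(x, k)

print(ub.mean())   # ~1.0: unbiased
print(b.mean())    # ~(N-k)/N = 0.875: biased by the factor (N-k)/N
```

With a deliberately tiny record (N = 8) the (N − k)/N shrinkage of the biased estimator is plainly visible; for large N the two estimators nearly coincide, and the biased one is often preferred for its lower variance.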
Example 23: Comparing theoretical and estimated correlation sequences
Assume you are given a wss ergodic RP generated as: s(n) = 0.3 s(n−1) + v(n), where v(n) is Gaussian zero-mean white noise ~ N(0,1).
1. Compute the theoretical correlation expression Rs(k).
2. Assume you have N = 100 data points available for s(n), n = 0, ..., 99. Compute the estimated correlation values R̂s(k), k = 0, ..., 10, and compare with the theoretical values.
3. Repeat 2), assuming you have N = 10000 data points available.
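For reference, a numerical sketch of parts 1–3 (Python/numpy; it uses the standard AR(1) result R_s(k) = a^k/(1 − a²) for a = 0.3, which part 1 asks you to derive):

```python
import numpy as np

a, N = 0.3, 10000
rng = np.random.default_rng(6)
v = rng.standard_normal(N + 500)
s = np.zeros(N + 500)
for n in range(1, N + 500):
    s[n] = a * s[n - 1] + v[n]
s = s[500:]                    # drop the transient: steady-state process

# Theoretical AR(1) correlation: R_s(k) = a^k / (1 - a^2)
R_theory = np.array([a**k / (1 - a**2) for k in range(11)])

# Unbiased estimate from the data
R_hat = np.array([np.dot(s[:N - k], s[k:]) / (N - k) for k in range(11)])

print(R_theory[0], R_hat[0])   # both close to 1/0.91
```

Rerunning with N = 100 instead of 10000 shows visibly larger estimation errors, which is the point of comparing parts 2 and 3.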
Frequency Domain Description of wss Processes
• Power spectral density (PSD)
• Continuous valued RP:
$$S_x(f) = F_T[R_x(\tau)] = \int_{-\infty}^{+\infty} R_x(\tau)\,e^{-j2\pi f\tau}\,d\tau$$
$$R_x(\tau) = IFT[S_x(f)] = \int_{-\infty}^{+\infty} S_x(f)\,e^{j2\pi f\tau}\,df$$

Wiener-Khinchin relations: $S_x(f) = F_T[R_x(\tau)]$, $R_x(\tau) = IFT[S_x(f)]$
Frequency Domain Description of Stationary Processes, cont’
• Discrete valued RP:
$$S_x(\phi) = F_T[R_x(k)] = \sum_{k=-\infty}^{+\infty} R_x(k)\,e^{-j2\pi\phi k}$$
$$R_x(k) = IFT[S_x(\phi)] = \int_{-1/2}^{1/2} S_x(\phi)\,e^{j2\pi\phi k}\,d\phi$$

"Normalized" digital frequency $\phi$, defined for $-1/2 \le \phi \le 1/2$ (any range of length 1 works: $[0,1]$, $[-1/2,1/2]$, etc.)
The PSD has three key properties:
• The PSD Sx(f) is real and ≥ 0
• If x(t) is real, then Sx(f) is even
• The area under Sx(f) equals the expected average power of x(t): ∫ Sx(f) df = Rx(0)
• For discrete valued RPs:
  - The same properties hold
  - In addition, the PSD Sx(φ) for the discrete RP is periodic with period equal to 1 (see proof in Appendix E)
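The area/power property can be verified numerically (Python/numpy sketch; the periodogram is used as a crude PSD estimate, and for that estimator the match is exact by Parseval's theorem):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4096
x = rng.standard_normal(N)       # white noise, sigma^2 = 1

# Periodogram as a crude PSD estimate: S(phi_m) = |X(phi_m)|^2 / N,
# sampled on one full period phi_m = m/N, m = 0..N-1
X = np.fft.fft(x)
S = (np.abs(X) ** 2) / N

power_freq = S.mean()            # ~ integral of S over one period (width 1)
power_time = np.mean(x ** 2)     # ~ R_x(0) = E[|x|^2]

print(power_freq, power_time)    # equal (Parseval), both near sigma^2 = 1
```

The estimate is also nonnegative everywhere and flat on average, consistent with the PSD properties above and with the white-noise example that follows.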
Example 24: White noise. Find the PSD of white noise x(t) with
$$R_x(\tau) = \sigma^2\,\delta(\tau)$$
Example 25: Bandlimited white noise. Find the correlation function of bandlimited lowpass white noise x(t) with
$$S_x(f) = \begin{cases}\sigma^2, & |f| < f_0\\ 0, & \text{otherwise}\end{cases}$$
Example 26: Find the PSD of zero-mean w.s.s. x(t) with
$$R_x(\tau) = \begin{cases}A\,(1 - |\tau|/T), & -T \le \tau \le T\\ 0, & \text{otherwise}\end{cases}$$
Example 27: Narrowband random process. Find the PSD of
$$x(t) = s(t)\cos(\omega_0 t + \theta)$$
Assume s(t) is zero-mean w.s.s., θ ~ U[0, 2π[, and s(t) and θ are independent.
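For checking against the in-class derivation, the standard narrowband result (a sketch; it relies on the independence of s(t) and θ and on the fact that the double-frequency term averages to zero over θ ~ U[0, 2π[):

```latex
\begin{aligned}
R_x(\tau) &= E\{s(t)\,s(t-\tau)\}\;E\{\cos(\omega_0 t+\theta)\cos(\omega_0(t-\tau)+\theta)\}
           = \tfrac{1}{2}\,R_s(\tau)\cos(\omega_0\tau),\\
S_x(f) &= F_T[R_x(\tau)] = \tfrac{1}{4}\bigl[S_s(f-f_0)+S_s(f+f_0)\bigr],
\qquad f_0 = \omega_0/(2\pi).
\end{aligned}
```

Modulation by the random-phase carrier thus splits the baseband PSD into two halves centered at ±f0, which is why the process is called narrowband when s(t) is lowpass.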
Cross Spectral Density of Stationary Processes

• Cross power spectral density – continuous valued RP:
$$S_{xy}(f) = F_T[R_{xy}(\tau)] = \int_{-\infty}^{+\infty} R_{xy}(\tau)\,e^{-j2\pi f\tau}\,d\tau$$
$$R_{xy}(\tau) = IFT[S_{xy}(f)] = \int_{-\infty}^{+\infty} S_{xy}(f)\,e^{j2\pi f\tau}\,df$$

Note: the cross-PSD may be complex.
Cross Spectral Density of Stationary Processes, cont'

• Discrete valued RP:
$$S_{xy}(\phi) = F_T[R_{xy}(k)] = \sum_{k=-\infty}^{+\infty} R_{xy}(k)\,e^{-j2\pi\phi k}$$
$$R_{xy}(k) = IFT[S_{xy}(\phi)] = \int_{-1/2}^{1/2} S_{xy}(\phi)\,e^{j2\pi\phi k}\,d\phi \quad\text{(any $\phi$ range of length 1)}$$

Note: the cross-PSD may be complex.
Cross-PSD properties (for continuous wss RPs):
• $S_{xy}(f) = S_{yx}(-f) = S_{yx}^*(f)$
• $\mathrm{Re}[S_{xy}(f)]$ is an even function
• $\mathrm{Im}[S_{xy}(f)]$ is an odd function
• If $x(t)$ and $y(t)$ are uncorrelated RPs with constant mean values, then
$$S_{xy}(f) = S_{yx}(f) = m_x\,m_y\,\delta(f)$$
• For discrete valued wss RPs: the same properties hold; in addition, the cross-PSD $S_{xy}(\phi)$ is periodic with period equal to 1.
Power Spectral Density for wss Processes

For a deterministic signal $x(t)$:
• signal energy: $E_x = \int_{-\infty}^{+\infty} |x(t)|^2\,dt$
• average power: $P_x = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} |x(t)|^2\,dt$
Power Spectral Density for wss Processes, cont'

For a random signal $x(t)$:
• $\int_{-\infty}^{+\infty} |x(t)|^2\,dt$
• $\lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} |x(t)|^2\,dt$
are random quantities!! (unless the process is ergodic)
⇒ Need another definition to represent energy / power.
Power Spectral Density for wss Processes, cont'

Consider average values for a random signal $x(t)$:
• expected energy: $E_x = E\!\left[\int_{-\infty}^{+\infty} |x(t)|^2\,dt\right]$
• expected average power: $P_x = \lim_{T\to\infty}\frac{1}{2T}\,E\!\left[\int_{-T}^{T} |x(t)|^2\,dt\right]$
For a wss random signal $x(t)$, the expected average power is
$$P_x = \lim_{T\to\infty}\frac{1}{2T}\,E\!\left[\int_{-T}^{T}|x(t)|^2\,dt\right] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} E\big[|x(t)|^2\big]\,dt$$
Thus, for a wss random signal $x(t)$:
$$P_x = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} E\big[|x(t)|^2\big]\,dt = E\big[|x(t)|^2\big] = R_x(0)$$
⇒ expected average power = expected instantaneous power
• Note: the power can also be computed using the PSD:
$$P_x = E\{|x(t)|^2\} = R_x(0) = IFT[S_x(f)]\Big|_{\tau=0} = \int_{-\infty}^{+\infty} S_x(f)\,e^{j2\pi f\tau}\,df\;\Big|_{\tau=0}$$
$$\Rightarrow\; P_x = \int_{-\infty}^{+\infty} S_x(f)\,df$$
For a wss random signal $x(t)$, the expected energy
$$E_x = E\!\left[\int_{-\infty}^{+\infty}|x(t)|^2\,dt\right]$$
is infinite.
Proof:
Summary of Properties for Stationary x(t) (same properties hold for x(n))

Mean: $m_x = E\{x(t)\}$
Correlation: $R_x(\tau) = E\{x(t)\,x^*(t-\tau)\}$
Covariance: $C_x(\tau) = E\{(x(t)-m_x)(x(t-\tau)-m_x)^*\}$
Cross-correlation: $R_{xy}(\tau) = E\{x(t)\,y^*(t-\tau)\}$
Cross-covariance: $C_{xy}(\tau) = E\{(x(t)-m_x)(y(t-\tau)-m_y)^*\}$
PSD / cross-PSD: $S_x(f) = FT[R_x(\tau)]$, $S_{xy}(f) = FT[R_{xy}(\tau)]$
Inter-relations: $C_x(\tau) = R_x(\tau) - m_x^2$, $C_{xy}(\tau) = R_{xy}(\tau) - m_x\,m_y^*$

Properties:
Autocorrelation: $R_x(\tau) = R_x^*(-\tau)$; $R_x(0) \ge |R_x(\tau)|\;\forall\tau$
PSD: $S_x(f) \ge 0$; $S_x(f) = S_x(-f)$ for real valued $x(t)$; for discrete signals, $S_x(\phi)$ is periodic with period 1
Properties, cont'

Cross-correlation:
$R_{xy}(\tau) = R_{yx}^*(-\tau)$
$|R_{xy}(\tau)|^2 \le R_x(0)\,R_y(0)$
$\rho_{xy}(\tau) = \dfrac{C_{xy}(\tau)}{\sigma_x\,\sigma_y}, \qquad |\rho_{xy}(\tau)| \le 1$

Cross-PSD:
$S_{xy}(f) = FT[R_{xy}(\tau)]$
Section II – Random Signals
Appendices
Section II – Random Signals Appendix A
Binary Signal Example
Section II – Random Signals Appendix B
Theorem Proof – Poisson processes
• Theorem (Bernoulli decomposition of a Poisson process: breaking a process into 2 counting processes): The counting processes $N_1(t)$ & $N_2(t)$ derived from a Bernoulli decomposition of the Poisson process $N(t)$ are independent Poisson processes with rates $\lambda p$ and $\lambda(1-p)$.

$$P[N_1(t)=n_1,\,N_2(t)=n_2] = P[N_1(t)=n_1,\,N_2(t)=n_2 \mid N(t)=n_1+n_2]\times P[N(t)=n_1+n_2]$$

The first term can be viewed as $N(t)=n_1+n_2$ Bernoulli trials with probability of success $p$ and failure $(1-p)$:
$$\Rightarrow\;\binom{n_1+n_2}{n_1}\,p^{n_1}(1-p)^{n_2}$$
and the second term is
$$e^{-\lambda T}\,\frac{(\lambda T)^{n_1+n_2}}{(n_1+n_2)!}$$
Combining the two terms:
$$P[N_1(t)=n_1,\,N_2(t)=n_2] = \binom{n_1+n_2}{n_1}\,p^{n_1}(1-p)^{n_2}\; e^{-\lambda T}\,\frac{(\lambda T)^{n_1+n_2}}{(n_1+n_2)!}$$
$$= \frac{(n_1+n_2)!}{n_1!\,n_2!}\,p^{n_1}(1-p)^{n_2}\,e^{-\lambda T}\,\frac{(\lambda T)^{n_1}\,(\lambda T)^{n_2}}{(n_1+n_2)!}$$
$$= \left[e^{-\lambda Tp}\,\frac{(\lambda Tp)^{n_1}}{n_1!}\right]\left[e^{-\lambda T(1-p)}\,\frac{(\lambda T(1-p))^{n_2}}{n_2!}\right]$$
(using $e^{-\lambda T} = e^{-\lambda Tp}\,e^{-\lambda T(1-p)}$).
Which leads to the marginal distribution:
$$P[N_1(t)=n_1] = \sum_{n_2=0}^{\infty}\left[e^{-\lambda Tp}\,\frac{(\lambda Tp)^{n_1}}{n_1!}\right]\left[e^{-\lambda T(1-p)}\,\frac{(\lambda T(1-p))^{n_2}}{n_2!}\right] = e^{-\lambda Tp}\,\frac{(\lambda Tp)^{n_1}}{n_1!}$$

Following the same type of derivation for $N_2(t)$ leads to
$$P[N_2(t)=n_2] = e^{-\lambda(1-p)T}\,\frac{(\lambda(1-p)T)^{n_2}}{n_2!}$$

which shows that the Poisson rates for $N_1(t)$ and $N_2(t)$ are $\lambda p$ and $\lambda(1-p)$, respectively.
Section II – Random Signals Appendix C
PDF of inter-arrival times for a Poisson Process
[Figure from Wikipedia: pdf of the inter-arrival times for a Poisson process]
Section II – Random Signals Appendix D
Concept of Confidence Interval applied to the evaluation of an IID sequence
• In most analyses, it is useful to characterize signal/process properties via a small number of parameters.
• Moments (mean, variance, etc.) are routinely estimated when dealing with data.
• In most simulations, we obtain data (parameters) with some variability due to randomness in the measurements.
• Estimated parameters are usually not identical to true parameters.
Introduction to Confidence Intervals
Questions:
(1) How do we decide how meaningful the estimated parameters are?
(2) How do we quantify the uncertainty about a parameter due to randomness in the measurements?
(3) How can we evaluate the confidence we have in the estimates?
These issues can be addressed via "confidence intervals".
What are Confidence Intervals (CI’s) ?
• …A CI quantifies the uncertainty about a data parameter (mean, median, st. dev., etc.) due to the randomness of the measurements… [5]
• A CI gives a framework to establish a range in which the estimate is allowed to vary from its "true" value, before accepting/rejecting a claim that a parameter has a given true value, based on the estimated value computed using a finite number of samples or experiments.
• CIs are useful when comparing different estimated values. For example, the mean of an estimated parameter alone is usually not sufficient to characterize how good a result is without knowing the possible level of variation around it. CIs are used to indicate estimate reliability, i.e., give a picture of how sure one can be of the findings.
What are Confidence Intervals (CI's)?, cont'
If you repeat an experiment 100 times, the result obtained for the true parameter of interest is expected to fall within the 95% CI 95% of the time, i.e., 95 times (for a 95% CI). Here 0.95 is defined as 1 − α, where α is called the level of significance.

[Figure: 100 experiments computing the sample mean of 20 iid points with pdf ~ N(50,10); the legend distinguishes 95% and 99% confidence intervals that do or do not include the true mean value 50]
[Ref: http://www.ruf.rice.edu/~lane/stat_sim/conf_interval]
• CIs allow the user to quantify the accuracy of an estimated parameter. For example:
  o Based on 100 independent trials, the averaged mean for an experiment is found to be 1.2, with a 95% CI equal to [0.5, 1.9].
  o Based on 100 independent trials, the averaged mean for an experiment is found to be 1.2, with a 95% CI equal to [0.9, 1.5].
Does one option give us more reliable information? One is likely to prefer tighter CI bounds, as this reflects less variation around the estimated mean value.
• CIs are useful in assessing whether results obtained from different experiments are comparable or not. For example:
  o Based on 1000 independent trials, classifier 1 leads to an average classification rate equal to 85% with a 95% CI = [80%, 90%].
  o Based on 1000 independent trials, classifier 2 leads to an average classification rate equal to 87% with a 95% CI = [77%, 97%].
Can one classifier be considered "better" than the other? This brings up the concept of "significantly different". Some performances may not be different enough to be considered significantly different, for example when the examined performance CI ranges clearly overlap.
For the example above, one is likely to prefer the classifier with the tighter bound: even though the average for classifier 2 is higher than that for classifier 1, the lower CI bound for classifier 1 is higher than that obtained for classifier 2.
When to include CIs? CIs should always be included when producing estimated averages, proportions, incident rates, etc.
• Using ρx(k) instead of Rx(k) to evaluate whether a sequence is uncorrelated makes a lot of sense, as the magnitude of ρx(k) is bounded in the known range 0 to 1, which makes it easier to evaluate what a "small" or "negligible" value really means magnitude-wise.
• Question: when can ρx(k) be considered equal to 0? → use the concept of confidence interval.
• When a sequence has a normal pdf, evaluating that it is IID only requires evaluating whether it is correlated or not.
• Recall a sequence is said to be uncorrelated if Rx(k) = 0 for k ≠ 0.
It turns out that when the sequence $x(n)$ of length $N$ is iid with pdf $\sim N(0,1)$, then $\hat\rho_x(k) \sim N(0, 1/N)$ for $k \ne 0$
⇓
$\rho_x(k)$ is considered equal to 0 (at the 95% confidence level) when
$$|\hat\rho_x(k)| < 1.96/\sqrt{N}$$
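The 95% bound above turns directly into a whiteness check (Python/numpy sketch; the 0.8 acceptance fraction is a deliberately loose illustrative choice, since a few excursions beyond the band are expected even for truly IID data):

```python
import numpy as np

def rho_hat(x, maxlag):
    """Normalized autocorrelation estimate rho_x(k), k = 0..maxlag."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(maxlag + 1)])
    return r / r[0]

def looks_uncorrelated(x, maxlag=20, frac=0.8):
    """Declare x uncorrelated if most rho(k), k > 0, fall inside
    the +/- 1.96/sqrt(N) band."""
    bound = 1.96 / np.sqrt(len(x))
    rho = rho_hat(x, maxlag)[1:]
    return bool(np.mean(np.abs(rho) < bound) >= frac)

rng = np.random.default_rng(8)
white = rng.standard_normal(5000)
v = rng.standard_normal(5000)
ar = np.zeros(5000)
for n in range(1, 5000):
    ar[n] = 0.8 * ar[n - 1] + v[n]

print(looks_uncorrelated(white))
print(looks_uncorrelated(ar))
```

For the white sequence nearly every coefficient stays inside the band, while for the AR(1) sequence most of the first lags exceed it, so the check rejects it.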
Specific details on confidence intervals may be found in:
• Class notes for EC3410 – Discrete Time Random Signals, Section I, M. P. Fargues, 2011.
• Probability and Statistics for Engineers, 6th ed., Johnson, Prentice-Hall.
• Applied Statistics for Engineers and Scientists, Petrucelli et al., Prentice-Hall.
Section II – Random Signals Appendix E
Proof that the PSD obtained for a discrete signal is periodic with a period equal to 1
Property: Let x(n) be a stationary RP; then $S_x(\phi)$ is periodic with period equal to 1.

Proof – simplest case:
$$R_x(k) = e^{j2\pi f_0 k} \;\leftrightarrow\; S_x(\phi) = \sum_{l}\delta(\phi - f_0 - l)$$

We can prove $S_x(\phi) = F_T[R_x(k)]$ by proving $R_x(k) = IFT[S_x(\phi)]$.
We prove $S_x(\phi) = F_T[R_x(k)]$ by proving $R_x(k) = IFT[S_x(\phi)]$:
$$IFT[S_x(\phi)] = \int_0^1 S_x(\phi)\,e^{j2\pi\phi k}\,d\phi = \sum_l \int_0^1 \delta(\phi - f_0 - l)\,e^{j2\pi\phi k}\,d\phi$$
$$= \int_0^1 \delta(\phi - f_0)\,e^{j2\pi\phi k}\,d\phi \qquad\text{(for } 0 \le f_0 < 1\text{, only the } l = 0 \text{ term falls inside } [0,1])$$
$$= e^{j2\pi f_0 k} = R_x(k)$$
More generally, if $R_x(k) = \sum_i a_i\,e^{j2\pi f_i k}$, then
$$S_x(\phi) = \sum_i a_i\left\{\sum_l \delta(\phi - f_i - l)\right\}$$
Section II – Random Signals: References

[1] Probability and Stochastic Processes, R. Yates & D. Goodman, 2nd ed., Wiley, 2005.
[2] Probability, Random Variables and Random Signal Principles, 4th ed., P. Peebles, McGraw-Hill, 2001.
[3] Probability and Random Processes, V. Krishnan, Wiley Interscience, 2006.
[4] Probability and Random Processes for Electrical and Computer Engineers, J. Gubner, Cambridge Press, 2008.
[4] Discrete Random Signals and Statistical Signal Processing, C. Therrien, Prentice Hall, 1992.
[5] Statistical and Adaptive Signal Processing, D. Manolakis, V. Ingle & S. Kogon, Artech House, 2005.
[6] Performance Evaluation of Computer and Communication Systems, J-Y. Le Boudec, EPFL, http://perfevalepfl.ch/lectureNotes.htm
[7] Foundation Course on Probability, Random Variable and Random Processes, W. Cham, course notes.