
ECE 340: Probabilistic Methods in Engineering
M/W 3-4:15
Prof. Vince Calhoun

Lecture 19: Sums of RV's, Sample Mean, Laws of Large Numbers

Lecture 20: Central Limit Theorem

Lecture 21: Hypothesis Testing 1

Confidence Interval

Confidence interval on the mean, variance known. Given a random sample of size n: X1, …, Xn, with Xi ~ N(μ, σ²), the sample mean satisfies X̄ ~ N(μ, σ²/n).

Then the test statistic

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$$

(exactly for normal Xi, and approximately by the CLT otherwise).

With a CI, we want some range on μ:

$$P[-z_{\alpha/2} \le Z \le z_{\alpha/2}] = 1 - \alpha$$

$$P\left[-z_{\alpha/2} \le \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \le z_{\alpha/2}\right] = 1 - \alpha$$

→ the probability that the test statistic lies between the two points is 1 − α.

Confidence Interval

$$P\left[-z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \le \bar{X} - \mu \le z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right] = 1 - \alpha \quad \text{(we want a range on } \mu)$$

$$P\left[-\bar{X} - z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \le -\mu \le -\bar{X} + z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right] = 1 - \alpha$$

$$P\left[\bar{X} + z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \ge \mu \ge \bar{X} - z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right] = 1 - \alpha$$

$$P\left[\bar{X} - z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right] = 1 - \alpha$$

∴ A 100(1 − α)% two-sided CI on μ is:

$$\bar{X} - z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}, \quad\text{or}\quad \bar{X} \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}$$

∴ A CI is: statistic ± (table value) × standard error.
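For example, a minimal numerical sketch in Python (the sample mean, σ, and n below are made-up values for illustration):

```python
import math
from scipy.stats import norm

# Hypothetical numbers: sample mean 10.3 from n = 16 points, known sigma = 2.0
xbar, sigma, n = 10.3, 2.0, 16
alpha = 0.05

z = norm.ppf(1 - alpha / 2)             # z_{alpha/2} ~= 1.96 for alpha = 0.05
half_width = z * sigma / math.sqrt(n)   # (table value) x standard error

print(f"95% CI: {xbar - half_width:.3f} <= mu <= {xbar + half_width:.3f}")
```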

Confidence Interval

CI on the mean, variance unknown. Up to now we have known σ, but typically we do not. So what do we do?

1. If n ≥ 30, we can replace σ in the CI for the mean with the sample SD,

$$S = \sqrt{\frac{1}{n-1}\sum_{j=1}^{n}\left(X_j - \bar{X}\right)^2}.$$

2. If n < 30, then if X1, …, Xn ~ N(μ, σ²), the test statistic

$$t = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$

follows a t-distribution with (n − 1) degrees of freedom.

Confidence Interval

Get a CI for μ:

$$P\left[-t_{\alpha/2,\,n-1} \le t \le t_{\alpha/2,\,n-1}\right] = 1 - \alpha$$

$$P\left[-t_{\alpha/2,\,n-1} \le \frac{\bar{X} - \mu}{S/\sqrt{n}} \le t_{\alpha/2,\,n-1}\right] = 1 - \alpha$$

∴ The 100(1 − α)% CI on μ is:

$$\bar{X} - t_{\alpha/2,\,n-1}\,\frac{S}{\sqrt{n}} \le \mu \le \bar{X} + t_{\alpha/2,\,n-1}\,\frac{S}{\sqrt{n}}, \quad\text{or}\quad \bar{X} \pm t_{\alpha/2,\,n-1}\,\frac{S}{\sqrt{n}}$$
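A matching sketch for the small-sample case (the data array is hypothetical; scipy supplies the t table value):

```python
import numpy as np
from scipy.stats import t

x = np.array([9.8, 10.2, 10.5, 9.9, 10.1, 10.4, 10.0, 9.7])  # hypothetical sample
n, alpha = len(x), 0.05

xbar = x.mean()
s = x.std(ddof=1)                        # sample SD with the 1/(n-1) factor
t_tab = t.ppf(1 - alpha / 2, df=n - 1)   # t_{alpha/2, n-1}
half_width = t_tab * s / np.sqrt(n)

print(f"95% CI: {xbar - half_width:.3f} <= mu <= {xbar + half_width:.3f}")
```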

Lecture 22: Hypothesis Testing II

Hypothesis Tests

• If the population variance is unknown, use the sample standard deviation s to approximate σ; for n ≥ 30, s is a good enough estimate of σ that the z statistic still applies. Thus solve the problem as before, using s.

• With smaller sample sizes we have a different problem, but it is solved in the same manner: instead of using the z distribution, we use the t distribution.

Hypothesis Tests

• Use the t distribution when:
  • the sample is small (n < 30)
  • the parent population is essentially normal
  • the population variance σ² is unknown

• As n decreases, variation within the sample increases, so the t distribution becomes flatter (heavier-tailed).

Tests of Hypotheses

• Two types of hypotheses: null (H0) and alternative (H1)

[Figure: sampling distribution centered at μ with critical points CP_1 and CP_2]

Sample Size Determination

Student’s t-distribution

• Suppose that X1, X2, …, Xn are n random samples from a normal distribution with mean μ and standard deviation σ. Then the PDF of

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$

is given by

$$f(t) = \frac{\Gamma[(k+1)/2]}{\Gamma(k/2)\,\sqrt{k\pi}}\,\left[(t^2/k) + 1\right]^{-(k+1)/2}, \quad -\infty < t < \infty,$$

where k = n − 1 is the degrees of freedom and

$$\Gamma(k) = \int_0^{\infty} x^{k-1} e^{-x}\,dx \quad \text{for any positive number } k.$$
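As a sanity check, this Gamma-function form of the density can be evaluated and compared against scipy's built-in t pdf (a quick sketch):

```python
import math
from scipy.stats import t

def t_pdf(x, k):
    """Student's t PDF with k degrees of freedom, from the formula above."""
    c = math.gamma((k + 1) / 2) / (math.gamma(k / 2) * math.sqrt(k * math.pi))
    return c * (x**2 / k + 1) ** (-(k + 1) / 2)

k = 5
for x in (-2.0, 0.0, 1.5):
    print(x, t_pdf(x, k), t.pdf(x, k))   # the two values should agree
```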

Lecture 23: Two-sample Tests

Inference for a difference in the means

Under the assumptions listed earlier, we have the following result.

Test of hypotheses

Confidence Interval

• Verify that the calculated value of the statistic is 6.05.
• Since this value is greater than the table value at the 5% level, we reject the null hypothesis and conclude, at the 5% level, that there is sufficient evidence to say that the Karlsruhe method produces greater strength on average than the Lehigh method.
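The slide's underlying measurements are not reproduced here, but a pooled two-sample t test of this kind can be sketched as follows; the karlsruhe and lehigh arrays are placeholder data, not the original values:

```python
from scipy import stats

# Placeholder strength data (NOT the original measurements from the slide)
karlsruhe = [1.19, 1.15, 1.32, 1.34, 1.20, 1.40, 1.37, 1.54, 1.56]
lehigh    = [1.06, 0.99, 1.06, 1.06, 1.07, 1.18, 1.04, 1.09]

# One-sided test of H0: mu1 = mu2 vs H1: mu1 > mu2, assuming equal variances
t_stat, p_value = stats.ttest_ind(karlsruhe, lehigh,
                                  equal_var=True, alternative='greater')
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # reject H0 at 5% if p < 0.05
```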

Lecture 1_9: Review

Review

• Section 2.1
• Examples of experiments
• Sample Space & Examples
  • Discrete
  • Continuous
• Events & Examples
  • Certain event
  • Null event
• Set Operations
  • Union
  • Intersection
  • Complement

Properties of Probability

a. Many times,

$$P(A) = \frac{\#\ \text{of ways } A \text{ can occur}}{\#\ \text{of ways } S \text{ can occur}} = \frac{N(A)}{N(S)}$$

(relative frequency or percentage).

b. Properties
i. For an event A, P(A) = 1 − P(A′), i.e. P(A′) = 1 − P(A)
ii. If 2 events A & B are mutually exclusive, then P(A ∩ B) = 0.
iii. For 2 events A & B, P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
iv. For 3 events A, B, C:
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C)
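Property (iv) is easy to verify by brute-force enumeration, e.g. with three overlapping events on one fair die roll (an illustrative sketch):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                 # fair die
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

def P(E):
    return Fraction(len(E), len(S))    # equally likely outcomes

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))
print(lhs, rhs, lhs == rhs)            # 5/6 5/6 True
```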

Probability Theory

Some Probability Laws

Commutative Laws: A ∪ B = B ∪ A; A ∩ B = B ∩ A
Associative Laws: (A ∪ B) ∪ C = A ∪ (B ∪ C); (A ∩ B) ∩ C = A ∩ (B ∩ C)
Distributive Laws: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C); A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
De Morgan's Laws: (A ∪ B)′ = A′ ∩ B′; (A ∩ B)′ = A′ ∪ B′

Review

• Section 2.3
• Counting
  • Sampling w/ & w/o replacement
  • Permutations of n objects

Enumeration or Counting Techniques

Summary

                            With Replacement              Without Replacement
Order relevant (ab ≠ ba)    n1 × n2 × … × nr              nPr = n!/(n − r)!
Order irrelevant (ab = ba)  (n + r − 1)!/[(n − 1)! r!]    nCr = n!/[r!(n − r)!]

Example (n = 2, r = 2), order relevant: with replacement: (1,1), (1,2), (2,1), (2,2); without replacement: (1,2), (2,1).
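All four counts in the table are one-liners in Python's math module (a sketch with n = 5, r = 3):

```python
import math

n, r = 5, 3

ordered_with      = n ** r                   # with replacement, order relevant
ordered_without   = math.perm(n, r)          # nPr = n!/(n-r)!
unordered_with    = math.comb(n + r - 1, r)  # (n+r-1)!/[(n-1)! r!]
unordered_without = math.comb(n, r)          # nCr = n!/[r!(n-r)!]

print(ordered_with, ordered_without, unordered_with, unordered_without)
# 125 60 35 10
```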

Review

• Section 2.4
• Conditional probability

• Bayes’ Rule

Review

• Section 2.5
• Definition of independence

Independence of events

• If the occurrence of an event A doesn't alter the probability of occurrence of another event B, then event A is independent of event B:

$$P[B] = P[B \mid A] = \frac{P[A \cap B]}{P[A]}$$

$$P[A \cap B] = P[A]\,P[B]$$
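A concrete check of the product rule on one fair die roll (a minimal sketch):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}          # "even"
B = {1, 2, 3, 4}       # "at most 4"

P = lambda E: Fraction(len(E), len(S))

print(P(A & B) / P(A), P(B))        # P[B|A] = 2/3 = P[B]
print(P(A & B) == P(A) * P(B))      # True: A and B are independent
```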


Review

• Section 2.6
• Sequences of independent experiments
• Binomial probability law
• Multinomial probability law
• Geometric probability law
• Sequences of dependent events

Example 1: Sequences of independent experiments

• Flip a coin 3 times.
• Each sub-experiment is independent; the probability of heads is p.
• Sample space: {H, T} × {H, T} × {H, T}

Binomial vs. Multinomial

Binomial:
• Independent trials
• M = 2 outcomes (Success/Failure)
• P[S] = p; P[F] = 1 − p
• Repeat n times

$$p_N(k) = \binom{n}{k}\,p^k (1 - p)^{n - k} \quad \text{for } k = 0, 1, \ldots, n, \qquad \binom{n}{k} = \frac{n!}{k!\,(n - k)!}$$

Multinomial:
• Independent trials
• M = m outcomes (B1, B2, …, Bm)
• P[Bi] = pi, with Σ pi = 1
• Repeat n times

$$P[(k_1, \ldots, k_m)] = \frac{n!}{k_1!\,k_2! \cdots k_m!}\,p_1^{k_1} p_2^{k_2} \cdots p_m^{k_m}$$
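Both pmfs are available in scipy (a quick sketch: 2 heads in 3 fair flips, and a 3-outcome multinomial):

```python
from scipy.stats import binom, multinomial

# Binomial: P[k = 2 successes] in n = 3 trials with p = 0.5
print(binom.pmf(2, n=3, p=0.5))                      # 0.375

# Multinomial: counts (2, 3, 5) in n = 10 trials, p = (0.2, 0.3, 0.5)
print(multinomial.pmf([2, 3, 5], n=10, p=[0.2, 0.3, 0.5]))
```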

Sequences of dependent experiments

• The outcome of a given experiment determines which sub-experiment is performed next, or more generally influences the outcome of the next experiment.

• Probability of a certain sequence of outcomes:
P[{S0} ∩ {S1} ∩ {S2}] = P[{S0}] · P[{S1} | {S0}] · P[{S2} | {S0} ∩ {S1}]

If only the most recent outcome determines the next sub-experiment:
= P[{S0}] · P[{S1} | {S0}] · P[{S2} | {S1}]

Markov Chains
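Under the Markov assumption, a sequence probability is just an initial probability times one-step transition probabilities (a sketch with made-up numbers):

```python
import numpy as np

p0 = np.array([0.6, 0.4])        # hypothetical initial distribution over states {0, 1}
P = np.array([[0.9, 0.1],        # P[next state | current state]
              [0.3, 0.7]])

seq = [0, 0, 1]                  # P[S0=0, S1=0, S2=1]
prob = p0[seq[0]]
for s, s_next in zip(seq, seq[1:]):
    prob *= P[s, s_next]
print(prob)                      # 0.6 * 0.9 * 0.1 = 0.054
```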

Review

• Section 3.1
• Measurement
• Sample space

• Section 3.2
• Discrete RV & PMF
• Examples

Probability mass function

(i) $p_X(x) \ge 0$ for all x

(ii) $\sum_{x \in S_X} p_X(x) = 1$

(iii) $P[X \text{ in } B] = \sum_{x \in B} p_X(x)$

Review

• Section 3.3
• Expected value and moments
• Expected value of functions of RVs
• Variance of an RV

Expected value (mean)

• Definition:

$$E[X] = \sum_{x \in S_X} x \cdot p_X(x) = \sum_k x_k \cdot p_X(x_k)$$

• Interpretation:
  • Center of gravity of the pmf
  • Average in a large number of repetitions (in the infinite limit)
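For a fair six-sided die, for example, the weighted sum gives E[X] = 3.5 (a one-line check):

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}   # fair six-sided die
print(sum(x * p for x, p in pmf.items()))        # 7/2
```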

Review

• Section 3.4
• Conditional PMF
• Conditional expected value

Conditional probability mass function

• Let X be a discrete random variable with a pmf pX(x). Let C be an event with P[C] > 0. The conditional probability mass function of X is:

$$p_X(x \mid C) = P[X = x \mid C] = \frac{P[\{X = x\} \cap C]}{P[C]}$$

Conditional expected value

$$E[X \mid B] = \sum_{x \in S_X} x \cdot p_X(x \mid B)$$

Review

• Section 3.5
• Important discrete RV's
  • Bernoulli
  • Binomial
  • Geometric
  • Negative binomial
  • Poisson
• Generation of discrete RV's

Common Families of Discrete Probability Distributions

Bernoulli B(1, p)

$$p(x) = f(x) = p^x (1 - p)^{1 - x}, \quad x = 0, 1$$
$$M(t) = 1 - p + p e^t, \qquad \mu = p, \qquad \sigma^2 = p(1 - p) = pq$$

Binomial B(n, p)

$$p(x) = f(x) = \frac{n!}{x!\,(n - x)!}\,p^x (1 - p)^{n - x}, \quad x = 0, 1, \ldots, n$$
$$M(t) = (1 - p + p e^t)^n, \qquad \mu = np, \qquad \sigma^2 = np(1 - p) = npq$$

Geometric

$$p(x) = f(x) = (1 - p)^{x - 1} p, \quad x = 1, 2, \ldots$$
$$M(t) = \frac{p e^t}{1 - (1 - p) e^t}, \quad t < -\ln(1 - p), \qquad \mu = \frac{1}{p}, \qquad \sigma^2 = \frac{1 - p}{p^2} = \frac{q}{p^2}$$

Poisson

$$p(x) = f(x) = \frac{\lambda^x e^{-\lambda}}{x!}, \quad x = 0, 1, \ldots$$
$$M(t) = e^{\lambda(e^t - 1)}, \qquad \mu = \lambda, \qquad \sigma^2 = \lambda$$
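These means and variances agree with scipy's implementations (a quick sketch; note scipy's geom has support starting at x = 1, matching the pmf above):

```python
from scipy.stats import bernoulli, binom, geom, poisson

p, n, lam = 0.3, 10, 4.0

print(bernoulli.mean(p), bernoulli.var(p))  # p, p(1-p)       -> 0.3, 0.21
print(binom.mean(n, p), binom.var(n, p))    # np, np(1-p)     -> 3.0, 2.1
print(geom.mean(p), geom.var(p))            # 1/p, q/p^2      -> 3.33.., 7.77..
print(poisson.mean(lam), poisson.var(lam))  # lambda, lambda  -> 4.0, 4.0
```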

Review

• Section 4.1-4.3
• CDF
• PDF
• Expected value of X

Continuous Random Variables

i. Let X be a continuous r.v. The probability distribution or probability density function (p.d.f.) of X is a function f(x) such that for any 2 numbers a and b with a < b,

$$P(a \le X \le b) = \int_a^b f(x)\,dx$$

that is, the probability that X takes on a value in the interval [a, b] is the area under the graph of the density function. In order that f(x) be a legitimate p.d.f., it must satisfy 2 conditions:

1. f(x) ≥ 0, for all x

2. $\int_{-\infty}^{\infty} f(x)\,dx = 1$

Expected Value for a Continuous R.V.

a. For the discrete r.v. X, E[X] was obtained by summing xp(x) over possible x values. Here we replace summation by integration and the p.m.f by pdf to get a continuous weighted average.

i. Def: The expected value or mean value of a continuous r.v. X with pdf f(x) is:

$$\mu_X = E[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx$$
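The integral can be checked numerically, e.g. for an exponential pdf it returns 1/λ (a sketch using scipy.integrate.quad):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0
f = lambda x: lam * np.exp(-lam * x)         # exponential pdf on [0, inf)

EX, _ = quad(lambda x: x * f(x), 0, np.inf)
print(EX)                                    # 0.5 = 1/lambda
```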

Lecture 18: Review

Topics covered

• Section 4.4-4.5
• Important continuous RV's
  • Uniform
  • Exponential
  • Gaussian
  • Gamma
  • Rayleigh
  • Cauchy
  • Laplacian
• Functions of RV's

• Section 4.6-4.7
• Markov inequality
• Chebyshev inequality
• Transform methods
  • Characteristic function

• Section 5.1-5.2
• Two RV's
• Pairs of discrete RV's

• Section 5.3-5.4
• Joint CDF of X and Y
  • Marginal CDF
• Joint PDF of two continuous RVs

• Section 5.5-5.7
• Independence
• Joint moments and expected values of functions
• Conditional probability and conditional expectation

• Section 5.8-5.9
• Function of two RV's
• Pairs of jointly Gaussian RV's

• Section 6.1-6.2
• Vector RV's
• Functions of several RV's

• Section 6.3-6.4
• Expected value of vector RV's
• Jointly Gaussian RV's

Lecture 10: Continuous RV Families

The Uniform Distribution

For X ~ Uniform(a, b), f(x) = 1/(b − a) for a ≤ x ≤ b. The cumulative distribution is

$$F_X(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt = \frac{x - a}{b - a}, \quad a \le x \le b$$

$$E[X] = \int_{a}^{b} x\,f(x)\,dx = \int_{a}^{b} \frac{x}{b - a}\,dx = \frac{a + b}{2}$$

$$V(X) = \frac{(b - a)^2}{12}$$
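A Monte Carlo check of the Uniform(a, b) mean and variance (a small sketch):

```python
import numpy as np

a, b = 2.0, 5.0
rng = np.random.default_rng(0)
x = rng.uniform(a, b, size=1_000_000)

print(x.mean(), (a + b) / 2)          # both ~ 3.5
print(x.var(), (b - a) ** 2 / 12)     # both ~ 0.75
```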

Exponential Distribution

i. pdf:

$$f(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0,\ \lambda > 0 \\ 0, & \text{otherwise} \end{cases}$$

where λ = rate at which events occur.

ii. Correspondingly,

$$F(x) = P(X \le x) = \int_0^x \lambda e^{-\lambda t}\,dt = 1 - e^{-\lambda x}, \quad x \ge 0$$

$$E[X] = \frac{1}{\lambda}, \qquad V(X) = \frac{1}{\lambda^2}$$

iii. An important application of the exponential distribution is to model the distribution of component lifetimes. A reason for its popularity is the "memoryless" property of the exponential distribution.
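The memoryless property P[X > s + t | X > s] = P[X > t] shows up directly in simulation (a sketch):

```python
import numpy as np

lam, s, t = 0.5, 2.0, 3.0
rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / lam, size=1_000_000)

lhs = np.mean(x[x > s] > s + t)       # P[X > s+t | X > s], estimated
rhs = np.mean(x > t)                  # P[X > t], estimated
print(lhs, rhs, np.exp(-lam * t))     # all approximately 0.223
```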

Normal Distribution

The normal distribution with parameter values μ = 0 and σ² = 1 is called the standard normal distribution. A r.v. that has a standard normal distribution is called a standard normal random variable (denoted by Z). The pdf of Z is:

$$f(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}, \quad -\infty < z < \infty$$

Lecture 11: Chebyshev, Markov, Transform

Markov inequality

• If X is a nonnegative random variable and a > 0, then

$$P[X \ge a] \le \frac{E[X]}{a}$$

Chebyshev inequality

• The Chebyshev inequality states that in any data sample or probability distribution, nearly all the values are close to the mean value, and it provides a quantitative description of "nearly all" and "close to".

• In particular:
  • No more than 1/4 of the values are more than 2 standard deviations away from the mean;
  • No more than 1/9 are more than 3 standard deviations away;
  • No more than 1/25 are more than 5 standard deviations away;
  • and so on. In general:
  • No more than 1/k² of the values are more than k standard deviations away from the mean.

$$\Pr\{|X - \mu| \ge k\sigma\} \le \frac{1}{k^2}$$
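Comparing the bound with the actual tails of a standard normal shows how conservative it is (a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)     # mu = 0, sigma = 1

for k in (2, 3, 5):
    empirical = np.mean(np.abs(x) > k)
    print(k, empirical, 1 / k**2)      # actual tail is well below the 1/k^2 bound
```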

Lecture 12: Two RV's

Lecture 13: Joint CDF/PDF

Lecture 14: Independence/Cond

If E[XY] = 0, then X and Y are orthogonal.

Lecture 15: Function of 2 RV's

Lecture 16: Vector RVs, Functions of several RVs

The Jacobian of the inverse transformation is given by

$$J(v, w) = \det\begin{bmatrix} \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w} \\[4pt] \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w} \end{bmatrix}.$$

It can be shown that

$$J(v, w) = \frac{1}{J(x, y)}.$$

We therefore conclude that the joint pdf of V and W can be found using either of the following expressions:

$$f_{V,W}(v, w) = \frac{f_{X,Y}(h_1(v, w),\, h_2(v, w))}{|J(x, y)|} \qquad (4.61a)$$

$$f_{V,W}(v, w) = f_{X,Y}(h_1(v, w),\, h_2(v, w))\,|J(v, w)| \qquad (4.61b)$$
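As a worked instance of (4.61b), not from the slides: take X, Y independent standard normals and V = √(X² + Y²), W = atan2(Y, X), so the inverse map is x = v cos w, y = v sin w. sympy reproduces the familiar Rayleigh-in-v joint density (a sketch):

```python
import sympy as sp

v, w = sp.symbols('v w', positive=True)
x = v * sp.cos(w)                      # h1(v, w), inverse transformation
y = v * sp.sin(w)                      # h2(v, w)

# Jacobian of the inverse transformation, J(v, w)
J = sp.simplify(sp.Matrix([[sp.diff(x, v), sp.diff(x, w)],
                           [sp.diff(y, v), sp.diff(y, w)]]).det())   # -> v

f_xy = sp.exp(-(x**2 + y**2) / 2) / (2 * sp.pi)    # joint pdf of X, Y
f_vw = sp.simplify(f_xy * sp.Abs(J))               # equation (4.61b)
print(J, f_vw)                         # v, v*exp(-v**2/2)/(2*pi)
```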

Lecture 17: Vector RV's, Jointly Gaussian RV's
