
Page 1

Section 10.6

Recall from calculus:

$$\lim_{x\to\infty}\left(1+\frac{1}{x}\right)^x = e, \qquad \lim_{x\to\infty}\left(1+\frac{1}{x}\right)^{kx} = \lim_{y\to\infty}\left(1+\frac{k}{y}\right)^y = e^k .$$

(Let $y = kx$ in the previous limit.)

If derivatives of $f(x)$ up to order $k$ are all continuous on an interval about 0 (zero), then for all $x$ on this interval, we have

$$f(x) = f(0) + (x-0)\,f^{[1]}(0) + \frac{(x-0)^2\,f^{[2]}(0)}{2!} + \frac{(x-0)^3\,f^{[3]}(0)}{3!} + \cdots + \frac{(x-0)^k\,f^{[k]}(h)}{k!}$$

for some $h$ with $0 < h < x$.
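As a quick numerical sketch of the limit recalled above (the choice $k = 3$ and the values of $y$ are arbitrary), one can watch $(1 + k/y)^y$ approach $e^k$ in Python:

```python
import math

# Check that (1 + k/y)^y -> e^k as y grows (k = 3 is an arbitrary choice).
k = 3.0
for y in [10, 100, 10_000, 1_000_000]:
    print(f"y = {y:>9}: (1 + k/y)^y = {(1 + k / y) ** y:.6f}")
print(f"e^k = {math.exp(k):.6f}")
```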

Page 2

1.

(a) Let $X_1, X_2, \ldots, X_n$ be a random sample from a Bernoulli distribution with success probability $p$. The following random variables are defined:

$$V = \sum_{i=1}^{n} X_i, \qquad \overline{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad W = \frac{\sum_{i=1}^{n} X_i - np}{\sqrt{np(1-p)}} .$$

Find the m.g.f. for each of $V$ and $\overline{X}$.

From Corollary 5.4-1, we have that

(1) the m.g.f. of the random variable $V = \sum_{i=1}^{n} X_i$ is

$$M_V(t) = \prod_{i=1}^{n}\left(1 - p + pe^t\right) = \left(1 - p + pe^t\right)^n .$$

(We recognize that $V$ has a $b(n, p)$ distribution.)

(2) the m.g.f. of the random variable $\overline{X} = \dfrac{V}{n}$ is

$$M_{\overline{X}}(t) = M_V\!\left(\frac{t}{n}\right) = \left(1 - p + pe^{t/n}\right)^n .$$
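A short simulation sketch can confirm the closed form for $M_V(t)$: compare $(1 - p + pe^t)^n$ with a Monte Carlo estimate of $E(e^{tV})$ (the values $n = 10$, $p = 0.3$, $t = 0.5$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, t = 10, 0.3, 0.5

closed_form = (1 - p + np.exp(t) * p) ** n              # M_V(t) = (1 - p + p e^t)^n
V = rng.binomial(1, p, size=(200_000, n)).sum(axis=1)   # 200,000 draws of V = sum of X_i
monte_carlo = np.exp(t * V).mean()                      # empirical E(e^{tV})

print(closed_form, monte_carlo)                         # the two values should be close
```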

Page 3

(b) Find the limiting distribution for $V$ with $np$ equal to a given constant $\lambda$ as $n$ tends to infinity, forcing $p$ to go to 0 (zero).

Since $np = \lambda$ is fixed, then

$$\lim_{n\to\infty} M_V(t) = \lim_{n\to\infty}\left(1 - p + pe^t\right)^n = \lim_{n\to\infty}\left(1 - \frac{np}{n} + \frac{np\,e^t}{n}\right)^n = \lim_{n\to\infty}\left(1 - \frac{\lambda}{n} + \frac{\lambda e^t}{n}\right)^n = \lim_{n\to\infty}\left(1 + \frac{\lambda(e^t - 1)}{n}\right)^n = e^{\lambda(e^t - 1)} .$$

The limiting distribution of $V$ is a Poisson($\lambda$) distribution.

Consequently, for small (or large!) values of p, a binomial distribution can be approximated by a Poisson distribution with mean λ = np. This should not be surprising, since the Poisson distribution was derived as the limit of a sequence of binomial distributions in which p tended to zero.
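A short Python sketch of this approximation, comparing $b(n, p)$ probabilities with Poisson($\lambda$) probabilities (the values $n = 1000$, $p = 0.002$, so $\lambda = 2$, are arbitrary choices):

```python
from scipy.stats import binom, poisson

n, p = 1000, 0.002      # small p; lambda = n p = 2 stays moderate
lam = n * p
for k in range(6):
    print(f"k = {k}: binomial {binom.pmf(k, n, p):.5f}, Poisson {poisson.pmf(k, mu=lam):.5f}")
```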

Page 4

1. - continued

(c) Find the limiting distribution for $V$ as $n$ tends to infinity, with $p$ a fixed constant.

$$\lim_{n\to\infty} M_V(t) = \lim_{n\to\infty}\left(1 - p + pe^t\right)^n = \infty \quad\text{for each } t > 0,$$

since $1 - p + pe^t > 1$ whenever $t > 0$. We cannot find a limiting distribution for $V$.
Page 5

(d) Find the limiting distribution for $\overline{X}$ as $n$ tends to infinity, with $p$ a fixed constant.

$$\lim_{n\to\infty} M_{\overline{X}}(t) = \lim_{n\to\infty}\left(1 - p + pe^{t/n}\right)^n = \lim_{n\to\infty}\left(1 - p + p\left[1 + \frac{t}{n} + \frac{(t/n)^2}{2!} + \frac{(t/n)^3}{3!} + \cdots\right]\right)^n =$$

$$\lim_{n\to\infty}\left(1 + p\left[\frac{t}{n} + \frac{(t/n)^2}{2!} + \frac{(t/n)^3}{3!} + \cdots\right]\right)^n = \lim_{n\to\infty}\left(1 + \frac{pt + pt^2/(2!\,n) + pt^3/(3!\,n^2) + \cdots}{n}\right)^n .$$

It is intuitively obvious that all terms in the numerator except the first go to 0 as $n \to \infty$, and (from advanced calculus) the terms going to 0 can be ignored. Therefore

$$\lim_{n\to\infty} M_{\overline{X}}(t) = \lim_{n\to\infty}\left(1 + \frac{pt}{n}\right)^n = e^{pt} .$$

This is the moment generating function corresponding to a distribution where the value $p$ has probability 1 (one).
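A quick simulation sketch of this degenerate limit (the choice $p = 0.3$ is arbitrary): the sample mean of Bernoulli($p$) draws piles up at $p$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
for n in [10, 100, 10_000]:
    xbar = rng.binomial(1, p, size=n).mean()   # sample mean of n Bernoulli(p) draws
    print(n, xbar)                             # xbar -> p = 0.3 as n grows
```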

Page 6

Suppose $X_1, X_2, \ldots, X_n$ is a random sample from any distribution with finite mean $\mu$ and finite variance $\sigma^2$. Let $M(t)$ be the common moment generating function of the $X_i$, that is, for each $i = 1, 2, \ldots, n$, we have

$$M(t) = E\!\left(e^{tX_i}\right) .$$

From Corollary 5.4-1(b), we have that the moment generating function of the random variable $\overline{X} = \dfrac{1}{n}\sum_{i=1}^{n} X_i$ is

$$M_{\overline{X}}(t) = \prod_{i=1}^{n} M\!\left(\frac{t}{n}\right) = \left[M\!\left(\frac{t}{n}\right)\right]^n .$$

With $M(t)$ and $M'(t)$ both continuous on an interval about 0 (zero), we have that for all $t$ on this interval,

$$M(t) = M(0) + t\,M'(h) = 1 + t\,M'(h) \quad\text{for some } 0 < h < t .$$

Page 7

Consequently, we have that for all $t$ on this interval,

$$M_{\overline{X}}(t) = \left[M\!\left(\frac{t}{n}\right)\right]^n = \left[1 + \frac{t}{n}\,M'(h)\right]^n = \left[1 + \frac{\mu t + [M'(h) - M'(0)]\,t}{n}\right]^n \quad\text{for } 0 < h < \frac{t}{n} ,$$

since $M'(0) = \mu$. To investigate the limiting distribution of $\overline{X}$ as $n \to \infty$, we consider

$$\lim_{n\to\infty} M_{\overline{X}}(t) = \lim_{n\to\infty}\left[1 + \frac{\mu t + [M'(h) - M'(0)]\,t}{n}\right]^n .$$

It is intuitively obvious that the second term in the numerator goes to 0 as $n \to \infty$ (since $0 < h < t/n$ forces $h \to 0$, and $M'$ is continuous at 0), and (from advanced calculus) this term can be ignored. Therefore

$$\lim_{n\to\infty} M_{\overline{X}}(t) = \lim_{n\to\infty}\left(1 + \frac{\mu t}{n}\right)^n = e^{\mu t} .$$

This is the moment generating function corresponding to a distribution where the value $\mu$ has probability 1 (one).
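As a numerical check of this limit one can pick a concrete distribution; the sketch below uses an exponential distribution with mean $\theta$, whose m.g.f. is $M(t) = 1/(1 - \theta t)$ for $t < 1/\theta$, so that $\mu = \theta$ (the values $\theta = 2$, $t = 0.4$ are arbitrary choices):

```python
import math

theta, t = 2.0, 0.4
for n in [10, 100, 10_000]:
    M_xbar = (1.0 / (1.0 - theta * t / n)) ** n   # [M(t/n)]^n for the exponential m.g.f.
    print(n, M_xbar)
print("e^{mu t} =", math.exp(theta * t))          # the limit e^{mu t} with mu = theta
```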

Page 8

For $i = 1, 2, \ldots, n$, suppose $Y_i = \dfrac{X_i - \mu}{\sigma}$, and let

$$W = \frac{\sum_{i=1}^{n} Y_i}{\sqrt{n}} = \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}} = \frac{\overline{X} - \mu}{\sigma/\sqrt{n}} .$$

Let $m(t)$ be the common m.g.f. for each $Y_i$. Then for each $i = 1, 2, \ldots, n$, we have $E(Y_i) = m'(0) = 0$, and $\mathrm{Var}(Y_i) = E(Y_i^2) = m''(0) = 1$.

From Theorem 5.4-1, we have that the moment generating function of the random variable $W$ is

$$M_W(t) = \prod_{i=1}^{n} m\!\left(\frac{t}{\sqrt{n}}\right) = \left[m\!\left(\frac{t}{\sqrt{n}}\right)\right]^n .$$

With $m(t)$ and $m''(t)$ both continuous on an interval about 0 (zero), we have that for all $t$ on this interval,

$$m(t) = m(0) + t\,m'(0) + \frac{1}{2}t^2\,m''(h) = 1 + \frac{1}{2}t^2\,m''(h) \quad\text{for some } 0 < h < t .$$
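As a concrete check of these moment facts, the sketch below verifies $m(0) = 1$, $m'(0) = 0$, $m''(0) = 1$ symbolically for a standardized Bernoulli variable, with the arbitrary value $p = 3/10$:

```python
import sympy as sp

t = sp.symbols('t')
p = sp.Rational(3, 10)                 # arbitrary concrete value of p
sigma = sp.sqrt(p * (1 - p))
# m.g.f. of Y = (X - p)/sigma for X ~ Bernoulli(p)
m = (1 - p) * sp.exp(-t * p / sigma) + p * sp.exp(t * (1 - p) / sigma)

print(sp.simplify(m.subs(t, 0)))                    # m(0)   = 1
print(sp.simplify(sp.diff(m, t).subs(t, 0)))        # m'(0)  = 0
print(sp.simplify(sp.diff(m, t, 2).subs(t, 0)))     # m''(0) = 1
```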

Page 9

Consequently, we have that for all $t$ on this interval,

$$M_W(t) = \left[m\!\left(\frac{t}{\sqrt{n}}\right)\right]^n = \left[1 + \frac{t^2}{2n}\,m''(h)\right]^n = \left[1 + \frac{\frac{t^2}{2}(1) + [m''(h) - m''(0)]\,\frac{t^2}{2}}{n}\right]^n \quad\text{for } 0 < h < \frac{t}{\sqrt{n}} ,$$

since $m''(0) = 1$. To investigate the limiting distribution of $W$ as $n \to \infty$, we consider

$$\lim_{n\to\infty} M_W(t) = \lim_{n\to\infty}\left[1 + \frac{t^2/2 + [m''(h) - m''(0)]\,t^2/2}{n}\right]^n .$$

It is intuitively obvious that the second term in the numerator goes to 0 as $n \to \infty$ (since $0 < h < t/\sqrt{n}$ forces $h \to 0$, and $m''$ is continuous at 0), and (from advanced calculus) this term can be ignored. Therefore

$$\lim_{n\to\infty} M_W(t) = \lim_{n\to\infty}\left(1 + \frac{t^2/2}{n}\right)^n = e^{t^2/2} .$$

This is the moment generating function corresponding to a standard normal ($N(0,1)$) distribution.
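A numerical sketch of this limit, using the standardized Bernoulli variables $Y_i = (X_i - p)/\sigma$ from Exercise 1 as a concrete example (the values $p = 0.3$ and $t = 1$ are arbitrary choices):

```python
import math

p = 0.3
sigma = math.sqrt(p * (1 - p))

def m(t):
    # m.g.f. of Y = (X - p)/sigma for X ~ Bernoulli(p)
    return (1 - p) * math.exp(-t * p / sigma) + p * math.exp(t * (1 - p) / sigma)

t = 1.0
for n in [10, 100, 10_000]:
    print(n, m(t / math.sqrt(n)) ** n)        # [m(t/sqrt(n))]^n
print("e^{t^2/2} =", math.exp(t * t / 2))     # the limiting m.g.f. value
```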

Page 10

This proves the following important theorem in the text:

Theorem 5.6-1 (Central Limit Theorem). If $\overline{X}$ is the mean of a random sample $X_1, X_2, \ldots, X_n$ from a distribution with finite mean $\mu$ and finite positive variance $\sigma^2$, then the distribution of

$$W = \frac{\overline{X} - \mu}{\sigma/\sqrt{n}} = \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}}$$

approaches the $N(0,1)$ distribution as $n \to \infty$.

Page 11

1. - continued

(e) Find the limiting distribution for $W$ as $n$ tends to infinity, with $p$ a fixed constant.

For each $i$, $\mu = E(X_i) = p$, and $\sigma^2 = \mathrm{Var}(X_i) = p(1 - p)$.

From the Central Limit Theorem, we have that the limiting distribution for

$$W = \frac{\sum_{i=1}^{n} X_i - np}{\sqrt{np(1-p)}} = \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}} = \frac{\overline{X} - p}{\sqrt{p(1-p)/n}}$$

is a $N(0,1)$ (standard normal) distribution.
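A short simulation sketch of this result (the values $n = 500$, $p = 0.3$ are arbitrary choices): the simulated distribution of $W$ is compared with the $N(0,1)$ c.d.f. at one point.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, p = 500, 0.3
V = rng.binomial(n, p, size=100_000)              # each draw is sum of n Bernoulli(p)'s
W = (V - n * p) / np.sqrt(n * p * (1 - p))        # standardized as in part (e)
print((W <= 1.0).mean(), norm.cdf(1.0))           # empirical P(W <= 1) vs. Phi(1)
```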

Page 12

2. Suppose $Y$ has a $b(400, p)$ distribution, and we want to approximate $P(Y \ge 3)$.

(a) If $p = 0.001$, explain why a Poisson distribution can be expected to give a good approximation of $P(Y \ge 3)$, and use the Poisson approximation to find this probability.

In Class Exercise #1(b), we found that the limiting distribution of a sequence of $b(n, p)$ distributions as $n$ tends to infinity is Poisson when $np$ remains fixed, which forces $p$ to tend to 0 (zero). This suggests that the Poisson approximation to a binomial distribution is better when $p$ is close to zero (or one).

With $\lambda = np = (400)(0.001) = 0.4$, the Poisson approximation gives

$$P(Y \ge 3) = 1 - P(Y \le 2) \approx 1 - 0.992 = 0.008 .$$

(b) What other distribution may potentially be used to approximate a binomial probability when $p$ is not sufficiently close to zero (or one)?

The Central Limit Theorem tells us that with a sufficiently large sample size $n$, the normal distribution can be used.
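As a check of the arithmetic in 2(a), the sketch below compares the exact binomial tail with the Poisson(0.4) approximation; both round to 0.008:

```python
from scipy.stats import binom, poisson

n, p = 400, 0.001
exact = binom.sf(2, n, p)             # exact P(Y >= 3) = 1 - P(Y <= 2)
approx = poisson.sf(2, mu=n * p)      # Poisson approximation with lambda = 0.4
print(exact, approx)                  # both approximately 0.008
```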