7/25/2019 So wghats ur name
http://slidepdf.com/reader/full/so-wghats-ur-name 1/12
University of New South Wales
MATH 2901 Assignment
Question 1
a, Given $n$ random variables, the probability $P(X \le x) = P(\min(X_1, \dots, X_n) \le x)$ says that at least one $X_i$ is smaller than $x$. Note that the probability that at least one $X_i$ is smaller than $x$ is equivalent to $1 - P(\text{all } X_i \ge x)$. Since the $X_i$ are independent and identically distributed, the probability that all $X_i$ are greater than $x$ is simply $(1 - F_X(x))^n$. Hence the probability that at least one $X_i$ is smaller than $x$ is
\[ F_{\min}(x) = 1 - (1 - F_X(x))^n. \]
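As a quick numerical sanity check (not part of the assignment), the identity $F_{\min}(x) = 1 - (1-F_X(x))^n$ can be tested by simulation; the choice of $X_i \sim \mathrm{Exp}(1)$ below is arbitrary and purely illustrative:

```python
import math
import random

random.seed(0)

# Monte Carlo check of F_min(x) = 1 - (1 - F_X(x))^n,
# using Exp(1) variables as an arbitrary test distribution.
n, trials, x = 5, 100_000, 0.3
hits = 0
for _ in range(trials):
    sample_min = min(random.expovariate(1.0) for _ in range(n))
    hits += sample_min <= x
empirical = hits / trials
# For Exp(1), F_X(x) = 1 - e^{-x}, so the formula gives 1 - e^{-nx}.
theoretical = 1 - (1 - (1 - math.exp(-x))) ** n
print(round(empirical, 3), round(theoretical, 3))
```

With 100,000 trials the empirical proportion matches the closed form to a few decimal places.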
b,
\[ F_X(x) = \int_0^x f_X(t)\,dt. \]
Substituting the given density function above yields:
\[ F_X(x) = \int_0^x f_X(t)\,dt = \frac{1}{\beta}\int_0^x \frac{t}{\beta}\, e^{-t/\beta}\,dt. \]
Using integration by parts (with $u = t$ and $dv = e^{-t/\beta}\,dt$, so $v = -\beta e^{-t/\beta}$), we get
\[ = \frac{1}{\beta^2}\left( \left[-\beta t\, e^{-t/\beta}\right]_0^x + \beta\int_0^x e^{-t/\beta}\,dt \right) = \frac{1}{\beta^2}\left( -\beta x\, e^{-x/\beta} + \beta\left[-\beta e^{-t/\beta}\right]_0^x \right) \]
\[ = \frac{1}{\beta^2}\left( -\beta x\, e^{-x/\beta} - \beta^2 e^{-x/\beta} + \beta^2 \right) = 1 - e^{-x/\beta}\left( \frac{x}{\beta} + 1 \right). \]
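The closed form $F_X(x) = 1 - e^{-x/\beta}(x/\beta + 1)$ can be double-checked against a direct numerical integration of the density; the values $\beta = 2$, $x = 3$ below are arbitrary test values:

```python
import math

# Midpoint-rule integration of f_X(t) = (t / beta^2) e^{-t/beta}
# from 0 to x, compared with 1 - e^{-x/beta}(x/beta + 1).
beta, x, steps = 2.0, 3.0, 200_000
h = x / steps
riemann = 0.0
for i in range(steps):
    t = (i + 0.5) * h
    riemann += t / beta**2 * math.exp(-t / beta) * h
closed_form = 1 - math.exp(-x / beta) * (x / beta + 1)
print(round(riemann, 6), round(closed_form, 6))
```

The two values agree to well within the discretisation error of the midpoint rule.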
c,
As the light bulbs are connected in series, the circuit fails as soon as any one bulb fails. So to find the expected lifetime of the circuit, we just need the time at which the first light bulb fails. Thus, using what we found in part (a), we let $T = \min(X_1, \dots, X_n)$. Then
\[ F_T(x) = 1 - \left( 1 - \left( 1 - e^{-x/\beta}\left(1 + \frac{x}{\beta}\right) \right) \right)^n = 1 - \left( e^{-x/\beta}\left( \frac{x}{\beta} + 1 \right) \right)^n. \]
d, Let $Y = \sqrt{n}\,T$. Therefore:
\[ F_Y(y) = P(Y \le y) = P(\sqrt{n}\,T \le y) = P\!\left( T \le \frac{y}{\sqrt{n}} \right) \quad \text{(using the substitution)} \]
\[ \therefore\ F_Y(y) = 1 - \left( e^{-y/(\sqrt{n}\beta)}\left( 1 + \frac{y}{\sqrt{n}\beta} \right) \right)^n. \]
So as we take $n \to \infty$:
\[ \lim_{n\to\infty} F_Y(y) = \lim_{n\to\infty}\left[ 1 - \left( e^{-y/(\sqrt{n}\beta)}\left( 1 + \frac{y}{\sqrt{n}\beta} \right) \right)^n \right] = 1 - \lim_{n\to\infty} \exp\!\left( \ln\!\left[ \left( e^{-y/(\sqrt{n}\beta)}\left( 1 + \frac{y}{\sqrt{n}\beta} \right) \right)^n \right] \right). \]
Evaluating the logarithm inside the limit:
\[ \ln\!\left[ \left( e^{-y/(\sqrt{n}\beta)}\left( 1 + \frac{y}{\sqrt{n}\beta} \right) \right)^n \right] = n\left( \frac{-y}{\sqrt{n}\beta} + \ln\!\left( 1 + \frac{y}{\sqrt{n}\beta} \right) \right). \]
From here, after discussing with a fellow peer S. Zhu, he pointed out that I should consider the Taylor expansion of $\ln(1+x)$:
\[ \ln\!\left( 1 + \frac{y}{\sqrt{n}\beta} \right) = \frac{y}{\sqrt{n}\beta} - \frac{1}{2}\left( \frac{y}{\sqrt{n}\beta} \right)^2 + \frac{1}{3}\left( \frac{y}{\sqrt{n}\beta} \right)^3 - \dots \]
Hence
\[ \lim_{n\to\infty} n\left( \frac{-y}{\sqrt{n}\beta} + \frac{y}{\sqrt{n}\beta} - \frac{1}{2}\left( \frac{y}{\sqrt{n}\beta} \right)^2 + \frac{1}{3}\left( \frac{y}{\sqrt{n}\beta} \right)^3 - \dots \right) = \lim_{n\to\infty}\left( -\frac{y^2}{2\beta^2} + \frac{y^3}{3\beta^3\sqrt{n}} - \dots \right) = -\frac{y^2}{2\beta^2}. \]
So
\[ \lim_{n\to\infty}\left[ 1 - \left( e^{-y/(\sqrt{n}\beta)}\left( 1 + \frac{y}{\sqrt{n}\beta} \right) \right)^n \right] = 1 - e^{-y^2/(2\beta^2)}. \]
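The limit can be checked numerically by evaluating the finite-$n$ expression for increasing $n$; the values $\beta = 2$, $y = 1.5$ are arbitrary:

```python
import math

# Check that (e^{-y/(sqrt(n) beta)} (1 + y/(sqrt(n) beta)))^n
# approaches e^{-y^2/(2 beta^2)} as n grows.
beta, y = 2.0, 1.5

def finite_n(n):
    a = y / (math.sqrt(n) * beta)
    return (math.exp(-a) * (1 + a)) ** n

limit = math.exp(-y ** 2 / (2 * beta ** 2))
for n in (10, 1_000, 100_000):
    print(n, round(finite_n(n), 6), round(limit, 6))
```

The gap shrinks roughly like $1/\sqrt{n}$, consistent with the leading $y^3/(3\beta^3\sqrt{n})$ correction term in the expansion above.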
e,
From part (d), we have shown that $F_Y(y) = 1 - e^{-y^2/(2\beta^2)}$. To compute the expected value, we need the density function,
\[ f_Y(y) = \frac{y}{\beta^2}\, e^{-y^2/(2\beta^2)}. \]
The density function resembles the normal distribution, so:
\[ E(Y) = \int_0^\infty \frac{t^2}{\beta^2}\, e^{-t^2/(2\beta^2)}\,dt. \]
Since the integrand is an even function,
\[ E(Y) = \frac{\sqrt{2\pi}}{2\beta} \int_{-\infty}^\infty t^2 \cdot \frac{1}{\sqrt{2\pi}\,\beta}\, e^{-t^2/(2\beta^2)}\,dt. \]
Also, $\int_{-\infty}^\infty t^2 \cdot \frac{1}{\sqrt{2\pi}\,\beta}\, e^{-t^2/(2\beta^2)}\,dt$ is simply the second moment of the $N(0, \beta^2)$ distribution. According to https://mazeofamazement.wordpress.com/2010/07/03/little-bit-more-gaussian/, the second moment can be calculated as $E[X^2] = \sigma^2 + \mu^2$. Therefore
\[ E(Y) = \frac{\sqrt{2\pi}}{2\beta}\left( 0^2 + \beta^2 \right) = \frac{\sqrt{2\pi}}{2}\,\beta. \]
Hence $E(T) = \dfrac{E(Y)}{\sqrt{n}} = 250\sqrt{2\pi}$. The answer is much lower when $n = 100$, as we expect the first light bulb in the circuit to fail far sooner than a single light bulb would last on its own.
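The asymptotic formula $E(T) \approx \sqrt{2\pi}\,\beta/(2\sqrt{n})$ can be checked by simulation: a $\mathrm{Gamma}(2, \beta)$ lifetime is the sum of two $\mathrm{Exp}(\beta)$ draws, and $T$ is the minimum of $n$ such lifetimes. Since the formula is a large-$n$ limit, the agreement at $n = 100$ is only approximate; $\beta = 1$ is an arbitrary test value:

```python
import math
import random

random.seed(1)

# Monte Carlo estimate of E(T) for T = min of n Gamma(2, beta)
# lifetimes, versus the asymptotic value sqrt(2*pi)*beta/(2*sqrt(n)).
beta, n, trials = 1.0, 100, 10_000
total = 0.0
for _ in range(trials):
    total += min(random.expovariate(1 / beta) + random.expovariate(1 / beta)
                 for _ in range(n))
mc_mean = total / trials
asymptotic = math.sqrt(2 * math.pi) * beta / (2 * math.sqrt(n))
print(round(mc_mean, 4), round(asymptotic, 4))
```

The simulated mean sits slightly above the asymptotic value, reflecting the finite-$n$ correction terms dropped in part (d).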
Question 2
a, Using the fact that $\int_0^z p\,u^{p-1}\,du = z^p$:
\[ E(Z^p) = \int_0^\infty z^p f_Z(z)\,dz = \int_0^\infty \int_0^z p\,u^{p-1} f_Z(z)\,du\,dz. \]
By reversing the order of integration and taking horizontal strips,
\[ \int_0^\infty \int_u^\infty p\,u^{p-1} f_Z(z)\,dz\,du = p \int_0^\infty u^{p-1} \left[ F_Z(z) \right]_u^\infty du = p \int_0^\infty u^{p-1}\left( 1 - F_Z(u) \right) du, \]
as required.
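The tail identity $E(Z^p) = p\int_0^\infty u^{p-1}(1 - F_Z(u))\,du$ can be verified numerically for an arbitrary test case, here $Z \sim \mathrm{Exp}(1)$ with $p = 3$, where both sides equal $\Gamma(p+1) = p!$:

```python
import math

# Numerical evaluation of p * ∫_0^∞ u^{p-1} (1 - F_Z(u)) du for
# Z ~ Exp(1), where 1 - F_Z(u) = e^{-u}; should equal E(Z^p) = p!.
p, upper, steps = 3, 60.0, 600_000
h = upper / steps
rhs = 0.0
for i in range(steps):
    u = (i + 0.5) * h
    rhs += p * u ** (p - 1) * math.exp(-u) * h
print(round(rhs, 4), math.factorial(p))
```

Truncating the integral at $u = 60$ is harmless because the exponential tail beyond that point is negligible.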
b, $l(m)$ is defined as $E(|X - m|)$. Taking $p = 1$ in part (a) with $Z = |X - m|$:
\[ l(m) = \int_0^\infty \left( 1 - F_{|X-m|}(u) \right) du = \int_0^\infty 1 - P(|X - m| \le u)\,du = \int_0^\infty 1 - P(-u \le X - m \le u)\,du \]
\[ = \int_0^\infty 1 - P(m - u \le X \le m + u)\,du = \int_0^\infty 1 - F(m + u) + F(m - u)\,du. \]
Now, by differentiating both sides and using Leibniz's rule, we get the following expression:
\[ \frac{d\,l(m)}{dm} = \frac{d}{dm}\int_0^\infty 1 - F(m + u) + F(m - u)\,du. \]
We now apply Leibniz's rule here:
\[ \frac{d\,l(m)}{dm} = \int_0^\infty \frac{\partial}{\partial m}\left( 1 - F(m + u) + F(m - u) \right) du = \int_0^\infty f(m - u) - f(m + u)\,du \]
\[ = \left[ -F(m - u) - F(m + u) \right]_0^\infty = \left( -F(m - \infty) - F(m + \infty) \right) - \left( -F(m) - F(m) \right) = -1 + 2F(m). \]
Now, to find the stationary points of $\frac{d\,l(m)}{dm}$, we set it equal to zero. Therefore, we get the following:
\[ -1 + 2F(m) = 0, \qquad F(m) = \frac{1}{2}, \qquad m = F^{-1}\!\left( \frac{1}{2} \right). \]
Since $\frac{d\,l(m)}{dm} = 2F(m) - 1$ is non-decreasing in $m$, the stationary point we have found is a minimum. Hence $m^* = F^{-1}\!\left( \frac{1}{2} \right)$ occurs at the median of the distribution.
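The conclusion that the median minimises $E|X - m|$ can be checked empirically: for a sample of odd size, the sample median minimises the mean absolute deviation. The $\mathrm{Exp}(1)$ sample below is an arbitrary choice:

```python
import random
import statistics

random.seed(2)

# On a sample, the mean absolute deviation sum|x - m|/n should be
# minimised at the sample median.
xs = [random.expovariate(1.0) for _ in range(50_001)]
med = statistics.median(xs)

def mad(m):
    return sum(abs(x - m) for x in xs) / len(xs)

print(round(med, 3), round(mad(med), 3))
```

Shifting $m$ away from the median in either direction can only increase the mean absolute deviation, matching the minimum found above.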
Question 3
a, To find the maximum likelihood estimate (MLE) of $\alpha$, we take the natural logarithm of the likelihood function. This makes the differentiation easier:
\[ \ln L(k, \alpha \mid x) = \sum_{i=1}^n \ln\!\left( \frac{\alpha k^\alpha}{x_i^{\alpha+1}} \right) = n\ln(\alpha) + \alpha n\ln(k) - (\alpha + 1)\sum_{i=1}^n \ln(x_i). \]
Differentiating the equation above with respect to $\alpha$, we get the following:
\[ \frac{d\ln L(\alpha)}{d\alpha} = \frac{n}{\alpha} + n\ln(k) - \sum_{i=1}^n \ln(x_i). \]
Setting the derivative equal to zero and re-arranging, we get:
\[ \frac{n}{\alpha} = \sum_{i=1}^n \ln(x_i) - n\ln(k). \]
Introducing $\hat{\alpha}$ and $\hat{k}$:
\[ \frac{1}{\hat{\alpha}} = \frac{1}{n}\left( \sum_{i=1}^n \ln(x_i) - n\ln(\hat{k}) \right), \qquad \hat{\alpha} = \frac{n}{\sum_{i=1}^n \ln(x_i) - n\ln(\hat{k})} = \frac{n}{\sum_{i=1}^n \ln\!\left( \frac{x_i}{\hat{k}} \right)}. \]
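The MLE formula can be sanity-checked on simulated Pareto data: a $\mathrm{Pareto}(\alpha, k)$ draw is obtained by inverse transform as $X = k\,U^{-1/\alpha}$ with $U$ uniform. The parameter values $\alpha = 3$, $k = 2$ below are arbitrary:

```python
import math
import random

random.seed(3)

# Simulate Pareto(alpha, k) data and recover alpha via
# alpha_hat = n / sum(ln(x_i / k_hat)), with k_hat = min(x_i).
alpha, k, n = 3.0, 2.0, 200_000
# 1 - U lies in (0, 1], avoiding a zero base for the negative power.
xs = [k * (1.0 - random.random()) ** (-1 / alpha) for _ in range(n)]
k_hat = min(xs)
alpha_hat = n / sum(math.log(x / k_hat) for x in xs)
print(round(k_hat, 4), round(alpha_hat, 3))
```

With a large sample, $\hat{k}$ sits just above $k$ and $\hat{\alpha}$ is close to the true $\alpha$.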
b, To show that $\hat{k}$ follows a Pareto distribution, we can find the cumulative distribution function of $\hat{k} = \min(X_i)$. From Question 1(a),
\[ F_{\hat{k}}(x) = P(\min(X_i) \le x) = 1 - \left( 1 - F_{X_i}(x) \right)^n. \]
By integrating the density function given within the question, $F_{X_i}(x) = 1 - \frac{k^\alpha}{x^\alpha}$, so we reach:
\[ F_{\hat{k}}(x) = 1 - \left( 1 - \left( 1 - \frac{k^\alpha}{x^\alpha} \right) \right)^n = 1 - \frac{k^{\alpha n}}{x^{\alpha n}}. \]
Thus $\hat{k}$ follows a Pareto distribution with parameters $n\alpha$ and $k$.
c, The bias of $\hat{k}$ can be calculated from $E(\hat{k}) - k$. So we calculate:
\[ \text{Bias} = \int_k^\infty x \cdot \frac{n\alpha k^{n\alpha}}{x^{n\alpha+1}}\,dx - k = n\alpha k^{n\alpha} \int_k^\infty \frac{1}{x^{n\alpha}}\,dx - k = n\alpha k^{n\alpha} \left[ \frac{-1}{(n\alpha - 1)\,x^{n\alpha - 1}} \right]_k^\infty - k \]
\[ = \frac{n\alpha k^{n\alpha}}{n\alpha - 1} \cdot \frac{1}{k^{n\alpha - 1}} - k = \frac{n\alpha k}{n\alpha - 1} - k = k\left( \frac{1}{n\alpha - 1} \right). \]
To find an unbiased estimator, we simply need the following calculation:
We require $E(\hat{k}') - k = 0$, i.e. $E(\hat{k}') = k$, whereas
\[ E(\hat{k}) = \frac{n\alpha k}{n\alpha - 1} \ne k. \]
To make the expectation equal to $k$, we need to multiply $\hat{k}$ by a constant. In this case the constant is $\frac{n\alpha - 1}{n\alpha}$, since $E\!\left( \frac{n\alpha - 1}{n\alpha}\,\hat{k} \right) = k$.
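Both the bias formula $E(\hat{k}) = \frac{n\alpha k}{n\alpha - 1}$ and the correction can be confirmed by Monte Carlo; the parameter values below are arbitrary:

```python
import random

random.seed(4)

# Monte Carlo check of E(k_hat) = n*alpha*k/(n*alpha - 1) and of the
# bias-corrected estimator (n*alpha - 1)/(n*alpha) * k_hat.
alpha, k, n, trials = 2.0, 1.0, 5, 200_000
total = 0.0
for _ in range(trials):
    # Pareto(alpha, k) draws via inverse transform; min of n of them.
    total += min(k * (1.0 - random.random()) ** (-1 / alpha)
                 for _ in range(n))
mean_k_hat = total / trials
expected = n * alpha * k / (n * alpha - 1)      # 10/9 here
corrected = (n * alpha - 1) / (n * alpha) * mean_k_hat
print(round(mean_k_hat, 4), round(expected, 4), round(corrected, 4))
```

The average of $\hat{k}$ matches $\frac{n\alpha k}{n\alpha - 1}$, and multiplying by $\frac{n\alpha - 1}{n\alpha}$ pulls it back to $k$.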
d, Let $H = \min(X_1, X_2, \dots, X_n)$. By Question 1(a), it is clear that:
\[ F_H(x) = 1 - (1 - F_X(x))^n = 1 - \left( 1 - \left( 1 - \frac{k^\alpha}{x^\alpha} \right) \right)^n = 1 - \left( \frac{k^\alpha}{x^\alpha} \right)^n = 1 - \frac{k^{\alpha n}}{x^{\alpha n}}. \]
Hence $H$ follows the Pareto distribution with parameters $\alpha n$ and $k$.
Question 4
a, From watching the 19/05/2016 MATH 2901 lecture video, for any $\epsilon > 0$,
\[ \lim_{n\to\infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] = \lim_{n\to\infty} E\!\left[ I_{|X_n - X| \le \epsilon}\, \frac{|X_n - X|}{1 + |X_n - X|} \right] + \lim_{n\to\infty} E\!\left[ I_{|X_n - X| > \epsilon}\, \frac{|X_n - X|}{1 + |X_n - X|} \right]. \]
Now, on the event $\{|X_n - X| \le \epsilon\}$ we have $\frac{|X_n - X|}{1 + |X_n - X|} \le \epsilon \cdot \frac{1}{1 + |X_n - X|} \le \epsilon$, so the first term is bounded by $\epsilon$. Also, $E\!\left[ I_{|X_n - X| > \epsilon}\, \frac{|X_n - X|}{1 + |X_n - X|} \right]$ can be bounded by $E\!\left[ I_{|X_n - X| > \epsilon} \right]$, as $\frac{x}{1+x} \le 1$. So now we can write:
\[ \lim_{n\to\infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] \le \lim_{n\to\infty} \epsilon\, E\!\left[ I_{|X_n - X| \le \epsilon} \right] + \lim_{n\to\infty} P(|X_n - X| > \epsilon) \le \epsilon, \]
where the probability term vanishes because $X_n \xrightarrow{p} X$. As $\epsilon$ is an arbitrary positive real number and our limit is at most $\epsilon$, we can conclude that
\[ \lim_{n\to\infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] = 0. \]
part (b), Using the hint provided and a late-night discussion with L. Wright: the function $f(x) = \frac{x}{1+x}$ is increasing, so
\[ I_{|X_n - X| > \epsilon}\, \frac{\epsilon}{1 + \epsilon} \le \frac{|X_n - X|}{1 + |X_n - X|}. \]
Taking expectations,
\[ E\!\left[ I_{|X_n - X| > \epsilon} \right] \frac{\epsilon}{1 + \epsilon} \le E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right], \]
and taking limits,
\[ \frac{\epsilon}{1 + \epsilon}\, \lim_{n\to\infty} P(|X_n - X| > \epsilon) \le \lim_{n\to\infty} E\!\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] = 0 \quad \text{(by assumption)}. \]
As $\frac{\epsilon}{1 + \epsilon}$ is simply a positive constant, we can conclude that $\lim_{n\to\infty} P(|X_n - X| > \epsilon) = 0$. This is the definition of convergence in probability, i.e. $X_n \xrightarrow{p} X$.
Bibliography
NA. 2010. Little bit more Gaussian. [ONLINE] Available at: https://mazeofamazement.wordpress.com/2010/07/03/little-bit-more-gaussian/. [Accessed 22 May 2016].
Joseph Lee Petersen. 2012. Estimating the Parameters of a Pareto Distribution. [ONLINE] Available at: http://citeseerx.ist.psu.edu/viewdoc/dow [Accessed 22 May 2016].