LUND UNIVERSITY
PO Box 117, 221 00 Lund
+46 46-222 00 00
On the asymptotic and approximate distributions of the product of an inverse Wishart matrix and a Gaussian vector
Kotsiuba, Igor; Franko, Ivan; Mazur, Stepan
2015
Citation for published version (APA):
Kotsiuba, I., Franko, I., & Mazur, S. (2015). On the asymptotic and approximate distributions of the product of an inverse Wishart matrix and a Gaussian vector. (Working Papers in Statistics; No. 11). Department of Statistics, Lund University.
On the asymptotic and approximate distributions of the product of an inverse Wishart matrix and a Gaussian vector
IGOR KOTSIUBA, IVAN FRANKO NATIONAL UNIVERSITY OF LVIV
STEPAN MAZUR, LUND UNIVERSITY
Working Papers in Statistics
No 2015:11
Department of Statistics
School of Economics and Management
Lund University
ON THE ASYMPTOTIC AND APPROXIMATE DISTRIBUTIONS OF
THE PRODUCT OF AN INVERSE WISHART MATRIX AND A
GAUSSIAN VECTOR
I. KOTSIUBA AND S. MAZUR
Abstract. In this paper we study the distribution of the product of an inverse Wishart random matrix and a Gaussian random vector. We derive its asymptotic distribution as well as an approximate density function formula based on the Gaussian integral and a third-order Taylor expansion. Furthermore, we compare the obtained asymptotic and approximate density functions with the exact density obtained by Bodnar and Okhrin (2011). The good performance of the obtained results is documented in a numerical study.
1. Introduction
The multivariate normal distribution is one of the most important and useful distributions in multivariate statistical analysis. If we have a sample of size n from the k-variate normal distribution, then the sample unbiased estimators of the mean vector and the covariance matrix have a k-variate normal and a k-variate Wishart distribution, respectively, and they are independently distributed (see Section 3 of Mardia et al. (1979)).
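These two distributional facts are easy to verify by simulation. The sketch below is our illustration, not part of the paper; it takes Σ = I_k for simplicity and checks the first two moments of the sample mean and sample covariance matrix, together with their uncorrelatedness (consistent with the stated independence).

```python
# Monte Carlo sanity check (ours, not from the paper): for N_k(mu, Sigma) data,
# xbar ~ N_k(mu, Sigma/n) and (n-1)S ~ W_k(n-1, Sigma), independently.
# All variable names are illustrative choices; Sigma = I_k throughout.
import numpy as np

rng = np.random.default_rng(0)
k, n, reps = 3, 20, 20000
mu = np.ones(k)
X = rng.standard_normal((reps, n, k)) + mu            # reps samples of size n
xbar = X.mean(axis=1)                                 # sample mean vectors
C = X - xbar[:, None, :]
S = np.einsum('rni,rnj->rij', C, C) / (n - 1)         # sample covariance matrices

print(np.abs(xbar.mean(axis=0) - mu).max())           # ~ 0: E[xbar] = mu
print(np.abs(xbar.var(axis=0) - 1 / n).max())         # ~ 0: Var matches Sigma/n
print(np.abs(S.mean(axis=0) - np.eye(k)).max())       # ~ 0: E[S] = Sigma
print(abs(np.corrcoef(xbar[:, 0], S[:, 0, 0])[0, 1])) # ~ 0: uncorrelated
```

Zero correlation is of course only necessary for independence, but it is the cheap check one usually runs first.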
Several recent papers investigate properties of estimators for the mean vector and the covariance matrix. For instance, Stein (1956) and Jorion (1986) discuss improvement techniques for the mean estimator. Ledoit and Wolf (2004), Bodnar and Gupta (2011), Bai and Shi (2011), Cai et al. (2011), Cai and Yuan (2012), Cai and Zhou (2012), and Bodnar et al. (2014a, 2015a) improve the estimation techniques for the (inverse) covariance matrix. Papers by von Rosen (1988), Styan (1989), Díaz-García et al. (1997), Bodnar and Okhrin (2008), Drton et al. (2008), and Bodnar et al. (2015b) develop the distributional properties of Wishart matrices, inverse Wishart matrices, and related quantities.
However, functions of the product of an (inverse) Wishart random matrix and a Gaussian random vector are not comprehensively investigated in the literature. Bodnar and Okhrin (2011) and Bodnar et al. (2013, 2014b) derived the exact distributions of the product of an (inverse) Wishart random matrix and a Gaussian random vector, which have integral representations. We note that these products have direct applications in portfolio theory and in discriminant analysis. For example, the elements of the sample estimator of the discriminant function are expressed as products of an inverse Wishart random matrix and a Gaussian random vector, and the weights of the tangency portfolio are estimated by the same product. In this paper we extend the results obtained by Bodnar and Okhrin (2011) by providing the asymptotic and approximate density functions of this product.
Received 02/03/2015.
2010 Mathematics Subject Classification. 62E17, 62E20.
Key words and phrases. Wishart distribution, multivariate normal distribution, asymptotic distribution, integral approximation.
The second author appreciates the financial support of the Swedish Research Council Grant Dnr: 2013-5180 and the Riksbankens Jubileumsfond Grant Dnr: P13-1024:1.
The rest of the paper is structured as follows. The main results are presented in Section 2, where the asymptotic distribution of the product of an inverse Wishart random matrix and a Gaussian random vector is derived in Theorem 2.1. Its approximate density function, which is based on a third-order Taylor series approximation, is obtained in Theorem 2.2. The numerical performance of the obtained results is outlined in detail in Section 3, while Section 4 summarizes the paper.
2. Main Results
In this section we consider the product of an inverse Wishart random matrix and a normally distributed random vector. In particular, we derive its asymptotic and approximate density functions.
Let A ∼ Wk(n, Σ), i.e. the random matrix A has the k-dimensional Wishart distribution with n degrees of freedom and covariance matrix Σ, which is positive definite. We consider the case k < n, i.e. A is a non-singular random matrix. Also, let z ∼ Nk(µ, λΣ), meaning that the random vector z has the k-dimensional normal distribution with mean vector µ and covariance matrix λΣ, where λ > 0 is a constant. Furthermore, let L be a p × k non-zero matrix of constants with rank(L) = p < k, let the symbol 0 denote a vector of zeros of suitable dimension, and let Ik stand for the k × k identity matrix. Throughout the paper it is assumed that A and z are independently distributed. Then the distribution of LA⁻¹z | (z = z*) equals the distribution of LA⁻¹z*. It is noted that
\[
LA^{-1}z^{*} \overset{d}{=} \frac{z^{*T}A^{-1}z^{*}}{z^{*T}\Sigma^{-1}z^{*}}\cdot \frac{z^{*T}\Sigma^{-1}z^{*}\,LA^{-1}z^{*}}{z^{*T}A^{-1}z^{*}}, \tag{1}
\]
where the symbol $\overset{d}{=}$ denotes equality in distribution.
Using Theorem 3.2.12 of Muirhead (1982) we obtain that
\[
\frac{z^{*T}\Sigma^{-1}z^{*}}{z^{*T}A^{-1}z^{*}} \sim \chi^{2}_{n-k+1}
\]
and is independently distributed of z*. Furthermore, it holds that z*ᵀA⁻¹z* is independent of LA⁻¹z*/z*ᵀA⁻¹z* for given z* (see Theorem 3 of Bodnar and Okhrin (2008)). Consequently, it is independent of z*ᵀΣ⁻¹z* LA⁻¹z*/z*ᵀA⁻¹z* and, respectively, of zᵀΣ⁻¹z LA⁻¹z/zᵀA⁻¹z. Using the proof of Theorem 1 of Bodnar and Schmid (2008) it follows that
\[
\frac{z^{*T}\Sigma^{-1}z^{*}\,LA^{-1}z^{*}}{z^{*T}A^{-1}z^{*}} \sim t_{p}\!\left(n-k+2;\; L\Sigma^{-1}z^{*},\; \frac{1}{n-k+2}\,z^{*T}\Sigma^{-1}z^{*}\,LR_{z^{*}}L^{T}\right), \tag{2}
\]
where R_{z*} = Σ⁻¹ − Σ⁻¹z*z*ᵀΣ⁻¹/z*ᵀΣ⁻¹z*. The symbol t_p(d; b, B) stands for the p-dimensional multivariate t-distribution with d degrees of freedom, location parameter b, and dispersion matrix B.
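The chi-square result from Theorem 3.2.12 of Muirhead (1982) used above can be checked with a quick Monte Carlo experiment. The following sketch is our illustration (not from the paper), with Σ = I_k and a fixed vector z*:

```python
# Simulation check (ours): for A ~ W_k(n, I_k) and fixed z*,
# z*'Sigma^{-1}z* / z*'A^{-1}z* should be chi^2 with n-k+1 degrees of freedom.
import numpy as np

rng = np.random.default_rng(1)
k, n, reps = 5, 50, 20000
z = rng.standard_normal(k)                   # any fixed z* works
X = rng.standard_normal((reps, n, k))        # rows i.i.d. N(0, I_k)
A = np.einsum('rni,rnj->rij', X, X)          # A = X'X ~ W_k(n, I_k)
ratio = (z @ z) / np.einsum('i,rij,j->r', z, np.linalg.inv(A), z)

df = n - k + 1                               # = 46 here
print(ratio.mean())                          # ~ df
print(ratio.var())                           # ~ 2*df
```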
Let ξ = z*ᵀΣ⁻¹z*/z*ᵀA⁻¹z*. Then ξ and z are independent and ξ ∼ χ²_{n−k+1}. As a result, we obtain the following stochastic representation of LA⁻¹z:
\[
LA^{-1}z \overset{d}{=} \xi^{-1}\left( L\Sigma^{-1}z + \sqrt{\frac{z^{T}\Sigma^{-1}z}{n-k+2}}\,\left(LR_{z}L^{T}\right)^{1/2} t_{0} \right), \tag{3}
\]
where R_z = Σ⁻¹ − Σ⁻¹zzᵀΣ⁻¹/zᵀΣ⁻¹z, ξ ∼ χ²_{n−k+1}, z ∼ Nk(µ, λΣ), and t₀ ∼ t_p(n−k+2, 0_p, I_p). Moreover, ξ, z, and t₀ are independently distributed.
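As a sanity check, the stochastic representation (3) can be compared with a direct simulation of LA⁻¹z. The sketch below is our illustration (not the authors' code): it takes p = 1, L = e₁ᵀ, Σ = I_k, λ = 1/n, draws from both sides, and compares the sample means with the common value LΣ⁻¹µ/(n−k−1).

```python
# Monte Carlo comparison (ours) of LA^{-1}z simulated directly against the
# stochastic representation (3), for p = 1, L = e_1', Sigma = I_k, lambda = 1/n.
import numpy as np

rng = np.random.default_rng(2)
k, n, reps = 5, 50, 20000
mu = np.ones(k)

# direct simulation: z ~ N(mu, I/n), A ~ W_k(n, I)
z = mu + rng.standard_normal((reps, k)) / np.sqrt(n)
X = rng.standard_normal((reps, n, k))
A = np.einsum('rni,rnj->rij', X, X)
direct = np.linalg.solve(A, z[..., None])[:, 0, 0]   # first entry of A^{-1} z

# representation (3): xi ~ chi^2_{n-k+1}, t0 ~ t_{n-k+2}, independent of z
z2 = mu + rng.standard_normal((reps, k)) / np.sqrt(n)
xi = rng.chisquare(n - k + 1, reps)
t0 = rng.standard_t(n - k + 2, reps)
zz = np.einsum('ri,ri->r', z2, z2)                   # z'Sigma^{-1}z for Sigma = I
LRL = 1.0 - z2[:, 0] ** 2 / zz                       # L R_z L' for L = e_1'
rep3 = (z2[:, 0] + np.sqrt(zz / (n - k + 2)) * np.sqrt(LRL) * t0) / xi

print(direct.mean(), rep3.mean())   # both ~ 1/(n-k-1) = 1/44
```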
Next, we consider the asymptotic distribution of nLA⁻¹z. It is remarkable that usually λ = 1/n (see Section 3 of Muirhead (1982)). Then
\[
\mathrm{E}\left(nLA^{-1}z\right) = \frac{n}{n-k-1}\,L\Sigma^{-1}\mu. \tag{4}
\]
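The expectation (4) follows from the independence of A and z together with the mean of the inverse Wishart matrix, E(A⁻¹) = Σ⁻¹/(n−k−1) for n−k−1 > 0 (see von Rosen (1988)); a one-line derivation:

```latex
\mathrm{E}\!\left(nLA^{-1}z\right)
  = nL\,\mathrm{E}\!\left(A^{-1}\right)\mathrm{E}(z)
  = nL\cdot\frac{\Sigma^{-1}}{n-k-1}\,\mu
  = \frac{n}{n-k-1}\,L\Sigma^{-1}\mu .
```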
Also, it is easy to see that
\[
\lim_{n\to\infty}\frac{\xi}{n} = 1 \quad\text{and}\quad \lim_{n\to\infty} z = \mu. \tag{5}
\]
Thus, from (3), (4), (5), and Theorem 6.15 of Arnold (1990), we get the asymptotic distribution of nLA⁻¹z, which is presented in the following theorem.
Theorem 2.1. Let A ∼ Wk(n, Σ) and z ∼ Nk(µ, λΣ) with Σ positive definite and λ = 1/n. Furthermore, let A and z be independent and let L be a p × k matrix of constants with rank(L) = p < k. Then the asymptotic distribution of nLA⁻¹z, i.e. as n → ∞, is given by
\[
\sqrt{n}\left(nLA^{-1}z - L\Sigma^{-1}\mu\right) \overset{d}{\to} N_{p}(0, \tilde{\Sigma}), \tag{6}
\]
where \(\tilde{\Sigma} = (1 + \mu^{T}\Sigma^{-1}\mu)\,L\Sigma^{-1}L^{T} - L\Sigma^{-1}\mu\mu^{T}\Sigma^{-1}L^{T}\).
From Theorem 2.1 we see that the random vector nLA⁻¹z is asymptotically p-variate normally distributed.
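Theorem 2.1 can be illustrated by simulation. In the sketch below (our illustration, not from the paper) we take µ = 0, Σ = I_k, and L = e₁ᵀ, for which the limiting covariance matrix in (6) reduces to LΣ⁻¹Lᵀ = 1:

```python
# Empirical check (ours) of Theorem 2.1 with mu = 0, Sigma = I_k, L = e_1',
# lambda = 1/n: sqrt(n) * n * e_1'A^{-1}z should be approximately N(0, 1).
import numpy as np

rng = np.random.default_rng(3)
k, n, reps = 4, 200, 5000
z = rng.standard_normal((reps, k)) / np.sqrt(n)   # z ~ N(0, I/n)
X = rng.standard_normal((reps, n, k))
A = np.einsum('rni,rnj->rij', X, X)               # A ~ W_k(n, I)
stat = np.sqrt(n) * n * np.linalg.solve(A, z[..., None])[:, 0, 0]

print(stat.mean())   # ~ 0
print(stat.var())    # ~ 1, the limiting variance L Sigma^{-1} L' here
```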
In many applications the sample size n is not large. For this reason, we need an adjustment such as a finite-sample approximation of the density. Bodnar and Okhrin (2011) derived an exact density function of LA⁻¹z. However, this density function is expressed as a four-dimensional integral of a generalized function. Hence, the aim of this paper is to derive a simpler approximate density function of the random vector LA⁻¹z.
Let Q = Σ^{−1/2}Lᵀ(LΣ⁻¹Lᵀ)⁻¹LΣ^{−1/2}, P = Σ^{−1/2}Lᵀ(LΣ⁻¹Lᵀ)^{−1/2}, S₁ = r₁r₁ᵀ, S₂ = r₃r₃ᵀ, r₁ = Σ^{−1/2}µ, r₂ = Pᵀr₁, r₃ = (I_k − Q)r₁, r₄ = Qr₁, r₅ = −((p−1)/a)r₁ + (1/b)r₃, a = r₁ᵀr₁, b = r₁ᵀ(I_k − Q)r₁, c = r₁ᵀQr₁. It is noted that Q is a singular k × k projection matrix with Q = PPᵀ.
In the next theorem we suggest an approximate density function of LA⁻¹z which is based on the Gaussian integral and the third-order Taylor series expansion.
Theorem 2.2. Let A ∼ Wk(n, Σ) and z ∼ Nk(µ, λΣ) with Σ positive definite. Furthermore, let A and z be independent and let L be a p × k matrix of constants with rank(L) = p < k. Then the approximate density function of LA⁻¹z is given by
\[
f_{LA^{-1}z}(y) \approx \frac{|L\Sigma^{-1}L^{T}|^{-1/2}}{a^{(p-1)/2}\,b^{1/2}\,\pi^{p/2}}\,
\frac{\Gamma\!\left(\frac{p+n-k+2}{2}\right)}{\Gamma\!\left(\frac{n-k+2}{2}\right)}
\int_{0}^{+\infty} f_{2}^{-(p+n-k+2)/2}(\tilde{y},\mathbf{0})
\left[1 + \frac{\lambda}{2}\,\mathrm{tr}[g_{2}(\tilde{y})]\right]
s^{p}\, f_{\chi^{2}_{n-k+1}}(s)\,ds, \tag{7}
\]
where \(\tilde{y} = s\,(L\Sigma^{-1}L^{T})^{-1/2}y\),
\[
f_{2}(\tilde{y},\mathbf{0}) = 1 + \frac{1}{a}\left(\tilde{y}^{T}\tilde{y} - 2\tilde{y}^{T}r_{2} + c + \frac{(c-\tilde{y}^{T}r_{2})^{2}}{b}\right),
\]
\[
\omega(\tilde{y}) = -\left(f_{2}(\tilde{y},\mathbf{0})-1\right)r_{1} + r_{4} - P\tilde{y} + \frac{c-\tilde{y}^{T}r_{2}}{b}\,(2r_{4}-P\tilde{y}) - \frac{(c-\tilde{y}^{T}r_{2})^{2}}{b^{2}}\,r_{3},
\]
\begin{align*}
g_{2}(\tilde{y}) &= \frac{p-1}{a}\left(\frac{2}{a}S_{1}-I_{k}\right) + \frac{1}{b}\left(\frac{2}{b}S_{2}-(I_{k}-Q)\right) + r_{5}\left(r_{5}^{T} + \frac{2(p+n-k+2)}{a\,f_{2}(\tilde{y},\mathbf{0})}\,\omega^{T}(\tilde{y})\right)\\
&\quad + \frac{p+n-k+2}{a^{2}\,f_{2}^{2}(\tilde{y},\mathbf{0})}\Bigg\{\frac{p+n-k+4}{a^{2}}\,\omega(\tilde{y})\,\omega^{T}(\tilde{y})\\
&\quad + f_{2}(\tilde{y},\mathbf{0})\left[a\left(f_{2}(\tilde{y},\mathbf{0})-1\right)\left(I_{k}-\frac{4}{a}S_{1}\right) - 4r_{1}\left[\omega(\tilde{y}) + \left(f_{2}(\tilde{y},\mathbf{0})-1\right)r_{1}\right]^{T}\right] - aQ\\
&\quad + \frac{a(c-\tilde{y}^{T}r_{2})}{b^{2}}\Bigg\{ b\,(2r_{4}-P\tilde{y})\left[\frac{2}{b}\,r_{3}^{T} + \frac{2r_{4}^{T}-\tilde{y}^{T}P^{T}}{c-\tilde{y}^{T}r_{2}}\right] + 2bQ\\
&\quad - 2r_{3}\left(2r_{4}^{T}-\tilde{y}^{T}P^{T}\right) - (c-\tilde{y}^{T}r_{2})(I_{k}-Q) - 4\left[2r_{4}-P\tilde{y} - \frac{c-\tilde{y}^{T}r_{2}}{b}\,r_{3}\right]r_{3}^{T}\Bigg\}\Bigg\}.
\end{align*}
The symbol f_{χ²_{n−k+1}}(·) denotes the density of a χ²-distributed random variable with (n − k + 1) degrees of freedom.
Proof. From (2) the unconditional density function of zᵀΣ⁻¹z LA⁻¹z/zᵀA⁻¹z is given by
\begin{align*}
f_{\frac{z^{T}\Sigma^{-1}z\,LA^{-1}z}{z^{T}A^{-1}z}}(y) &= \int_{\mathbb{R}^{k}} f_{\frac{z^{*T}\Sigma^{-1}z^{*}\,LA^{-1}z^{*}}{z^{*T}A^{-1}z^{*}}}(y\,|\,z=z^{*})\,f_{z}(z^{*})\,dz^{*}\\
&= \frac{\lambda^{-k/2}|\Sigma|^{-1/2}}{(2\pi)^{k/2}}\,\frac{\Gamma\!\left(\frac{p+n-k+2}{2}\right)}{\pi^{p/2}\,\Gamma\!\left(\frac{n-k+2}{2}\right)}
\int_{\mathbb{R}^{k}} \exp\!\left(-\frac{(z^{*}-\mu)^{T}\Sigma^{-1}(z^{*}-\mu)}{2\lambda}\right)
\frac{|LR_{z^{*}}L^{T}|^{-1/2}}{(z^{*T}\Sigma^{-1}z^{*})^{p/2}}\\
&\quad\times \left(1 + \frac{(y-L\Sigma^{-1}z^{*})^{T}(LR_{z^{*}}L^{T})^{-1}(y-L\Sigma^{-1}z^{*})}{z^{*T}\Sigma^{-1}z^{*}}\right)^{-(p+n-k+2)/2} dz^{*}.
\end{align*}
From Theorem 18.2.8 of Harville (1997) it follows that
\[
\left(LR_{z^{*}}L^{T}\right)^{-1} = \left(L\Sigma^{-1}L^{T}\right)^{-1} + \frac{\left(L\Sigma^{-1}L^{T}\right)^{-1}L\Sigma^{-1}z^{*}z^{*T}\Sigma^{-1}L^{T}\left(L\Sigma^{-1}L^{T}\right)^{-1}}{z^{*T}\Sigma^{-1}z^{*} - z^{*T}\Sigma^{-1}L^{T}\left(L\Sigma^{-1}L^{T}\right)^{-1}L\Sigma^{-1}z^{*}},
\]
\[
\left|LR_{z^{*}}L^{T}\right| = \left|L\Sigma^{-1}L^{T}\right|\,\frac{z^{*T}\Sigma^{-1}z^{*} - z^{*T}\Sigma^{-1}L^{T}\left(L\Sigma^{-1}L^{T}\right)^{-1}L\Sigma^{-1}z^{*}}{z^{*T}\Sigma^{-1}z^{*}}.
\]
Using the transformation t = Σ^{−1/2}(z* − µ) in the last integral, with Jacobian |Σ|^{1/2}, we get
\begin{align*}
f_{\frac{z^{T}\Sigma^{-1}z\,LA^{-1}z}{z^{T}A^{-1}z}}(y) &= \frac{\lambda^{-k/2}|L\Sigma^{-1}L^{T}|^{-1/2}}{(2\pi)^{k/2}}\,\frac{\Gamma\!\left(\frac{p+n-k+2}{2}\right)}{\pi^{p/2}\,\Gamma\!\left(\frac{n-k+2}{2}\right)}
\int_{\mathbb{R}^{k}} \exp\!\left(-\frac{t^{T}t}{2\lambda}\right)
\frac{\left[(t+r_{1})^{T}(t+r_{1})\right]^{-(p-1)/2}}{\left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]^{1/2}}\\
&\quad\times \Bigg\{1 + \frac{1}{(t+r_{1})^{T}(t+r_{1})}\Bigg[\tilde{y}^{T}\tilde{y} - 2\tilde{y}^{T}P^{T}(t+r_{1}) + (t+r_{1})^{T}Q(t+r_{1})\\
&\quad + \frac{\left((t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})\right)^{2}}{(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})}\Bigg]\Bigg\}^{-(p+n-k+2)/2} dt,
\end{align*}
where r₁ = Σ^{−1/2}µ, ỹ = (LΣ⁻¹Lᵀ)^{−1/2}y, P = Σ^{−1/2}Lᵀ(LΣ⁻¹Lᵀ)^{−1/2}, and Q = PPᵀ.

Let
\[
f_{1}(t) = \frac{\left[(t+r_{1})^{T}(t+r_{1})\right]^{-(p-1)/2}}{\left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]^{1/2}}
\quad\text{and}\quad
f_{2}(\tilde{y},t) = 1 + \frac{\tilde{f}_{2}(\tilde{y},t)}{(t+r_{1})^{T}(t+r_{1})}
\]
with
\[
\tilde{f}_{2}(\tilde{y},t) = \tilde{y}^{T}\tilde{y} - 2\tilde{y}^{T}P^{T}(t+r_{1}) + (t+r_{1})^{T}Q(t+r_{1}) + \frac{\left[(t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})\right]^{2}}{(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})}.
\]
Next, we approximate the function f₁(t)f₂^{−(p+n−k+2)/2}(ỹ, t) using a third-order Taylor series expansion at t = 0. Notice that the integrals of the summands in the Taylor series expansion which correspond to the odd derivatives at the point t = 0 are all zero, being odd-order moments of a multivariate normal distribution with zero mean vector. As a result, we get the approximation
\[
f_{1}(t)\,f_{2}^{-(p+n-k+2)/2}(\tilde{y},t) = g_{0}(\tilde{y}) + \frac{1}{2}\,t^{T}g_{2}(\tilde{y})\,t + O(\|t\|^{4}), \tag{8}
\]
where
\[
g_{0}(\tilde{y}) = f_{1}(\mathbf{0})\,f_{2}^{-(p+n-k+2)/2}(\tilde{y},\mathbf{0}),
\]
\begin{align*}
g_{2}(\tilde{y}) &= \left.\frac{\partial^{2}\left[f_{1}(t)\,f_{2}^{-(p+n-k+2)/2}(\tilde{y},t)\right]}{\partial t\,\partial t^{T}}\right|_{t=\mathbf{0}}\\
&= f_{2}^{-(p+n-k+2)/2}(\tilde{y},\mathbf{0})\left.\frac{\partial^{2}f_{1}(t)}{\partial t\,\partial t^{T}}\right|_{t=\mathbf{0}}
+ 2\left.\frac{\partial f_{1}(t)}{\partial t}\right|_{t=\mathbf{0}}\left.\frac{\partial f_{2}^{-(p+n-k+2)/2}(\tilde{y},t)}{\partial t^{T}}\right|_{t=\mathbf{0}}\\
&\quad + f_{1}(\mathbf{0})\left.\frac{\partial^{2}f_{2}^{-(p+n-k+2)/2}(\tilde{y},t)}{\partial t\,\partial t^{T}}\right|_{t=\mathbf{0}}.
\end{align*}
It is easy to see that
\[
\frac{\partial f_{1}(t)}{\partial t} = \frac{\varphi_{1}(t)}{\varphi_{2}(t)}
\]
with
\begin{align*}
\varphi_{1}(t) &= -(p-1)\left[(t+r_{1})^{T}(t+r_{1})\right]^{-1}(t+r_{1}) - \left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]^{-1}(I_{k}-Q)(t+r_{1}),\\
\varphi_{2}(t) &= \left[(t+r_{1})^{T}(t+r_{1})\right]^{(p-1)/2}\left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]^{1/2}.
\end{align*}
Then the second-order derivative ∂²f₁(t)/∂t∂tᵀ can be computed as
\[
\frac{\partial^{2}f_{1}(t)}{\partial t\,\partial t^{T}} = \frac{\varphi_{2}(t)\,\dfrac{\partial \varphi_{1}(t)}{\partial t^{T}} - \varphi_{1}(t)\,\dfrac{\partial \varphi_{2}(t)}{\partial t^{T}}}{\varphi_{2}^{2}(t)},
\]
where
\begin{align*}
\frac{\partial \varphi_{1}(t)}{\partial t^{T}} &= \frac{p-1}{(t+r_{1})^{T}(t+r_{1})}\left[2\,\frac{(t+r_{1})(t+r_{1})^{T}}{(t+r_{1})^{T}(t+r_{1})} - I_{k}\right]\\
&\quad + \frac{1}{(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})}\left[2\,\frac{(I_{k}-Q)(t+r_{1})(t+r_{1})^{T}(I_{k}-Q)}{(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})} - (I_{k}-Q)\right],\\
\frac{\partial \varphi_{2}(t)}{\partial t^{T}} &= \left[(t+r_{1})^{T}(t+r_{1})\right]^{(p-1)/2}\left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]^{1/2}\\
&\quad\times\left\{\frac{(t+r_{1})^{T}(I_{k}-Q)}{(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})} + \frac{p-1}{(t+r_{1})^{T}(t+r_{1})}\,(t+r_{1})^{T}\right\}.
\end{align*}
Similarly, the first-order partial derivative of f₂^{−(p+n−k+2)/2}(ỹ, t) is determined by the following formula:
\[
\frac{\partial f_{2}^{-(p+n-k+2)/2}(\tilde{y},t)}{\partial t} = -(p+n-k+2)\,\frac{f_{2}^{-(p+n-k+4)/2}(\tilde{y},t)}{(t+r_{1})^{T}(t+r_{1})}
\left[-\frac{\tilde{f}_{2}(\tilde{y},t)}{(t+r_{1})^{T}(t+r_{1})}\,(t+r_{1}) + \frac{1}{2}\,\frac{\partial \tilde{f}_{2}(\tilde{y},t)}{\partial t}\right], \tag{9}
\]
where
\begin{align*}
\frac{\partial \tilde{f}_{2}(\tilde{y},t)}{\partial t} &= -2P\tilde{y} + 2Q(t+r_{1}) + 2\,\frac{\psi_{1}(\tilde{y},t) - \psi_{2}(\tilde{y},t)}{\psi_{3}(t)},\\
\psi_{1}(\tilde{y},t) &= \left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]\left[(t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})\right]\left[2Q(t+r_{1}) - P\tilde{y}\right],\\
\psi_{2}(\tilde{y},t) &= \left[(t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})\right]^{2}(I_{k}-Q)(t+r_{1}),\\
\psi_{3}(t) &= \left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]^{2}.
\end{align*}
Using (9) we can calculate the second partial derivative of f₂^{−(p+n−k+2)/2}(ỹ, t), which has the following representation:
\begin{align*}
\frac{\partial^{2} f_{2}^{-(p+n-k+2)/2}(\tilde{y},t)}{\partial t\,\partial t^{T}} &= \frac{(p+n-k+2)}{\left[(t+r_{1})^{T}(t+r_{1})\right]^{2}}\,f_{2}^{-(p+n-k+6)/2}(\tilde{y},t)\,\Bigg\{(p+n-k+4)\\
&\quad\times\left[-\frac{\tilde{f}_{2}(\tilde{y},t)}{(t+r_{1})^{T}(t+r_{1})}\,(t+r_{1}) + \frac{1}{2}\,\frac{\partial \tilde{f}_{2}(\tilde{y},t)}{\partial t}\right]
\left[-\frac{\tilde{f}_{2}(\tilde{y},t)}{(t+r_{1})^{T}(t+r_{1})}\,(t+r_{1}) + \frac{1}{2}\,\frac{\partial \tilde{f}_{2}(\tilde{y},t)}{\partial t}\right]^{T}\\
&\quad + \tilde{f}_{2}(\tilde{y},t)\left[\tilde{f}_{2}(\tilde{y},t)\left(I_{k} - \frac{4(t+r_{1})(t+r_{1})^{T}}{(t+r_{1})^{T}(t+r_{1})}\right)
+ 2(t+r_{1})\,\frac{\partial \tilde{f}_{2}(\tilde{y},t)}{\partial t^{T}}
- \frac{(t+r_{1})^{T}(t+r_{1})}{2}\,\frac{\partial^{2} \tilde{f}_{2}(\tilde{y},t)}{\partial t\,\partial t^{T}}\right]\Bigg\},
\end{align*}
where
\begin{align*}
\frac{\partial^{2} \tilde{f}_{2}(\tilde{y},t)}{\partial t\,\partial t^{T}} &= 2Q + 2\,\frac{\psi_{3}(t)\left(\dfrac{\partial \psi_{1}(\tilde{y},t)}{\partial t^{T}} - \dfrac{\partial \psi_{2}(\tilde{y},t)}{\partial t^{T}}\right) - \left(\psi_{1}(\tilde{y},t) - \psi_{2}(\tilde{y},t)\right)\dfrac{\partial \psi_{3}(t)}{\partial t^{T}}}{\psi_{3}^{2}(t)},\\
\frac{\partial \psi_{1}(\tilde{y},t)}{\partial t^{T}} &= \left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right]\left[(t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})\right]\\
&\quad\times\Bigg\{2Q + \left[2Q(t+r_{1}) - P\tilde{y}\right]\Bigg[\frac{2(t+r_{1})^{T}(I_{k}-Q)}{(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})}
+ \frac{2(t+r_{1})^{T}Q - \tilde{y}^{T}P^{T}}{(t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})}\Bigg]\Bigg\},\\
\frac{\partial \psi_{2}(\tilde{y},t)}{\partial t^{T}} &= \left[(t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})\right]^{2}\\
&\quad\times\left[\frac{2(I_{k}-Q)(t+r_{1})\left[2(t+r_{1})^{T}Q - \tilde{y}^{T}P^{T}\right]}{(t+r_{1})^{T}Q(t+r_{1}) - \tilde{y}^{T}P^{T}(t+r_{1})} + (I_{k}-Q)\right],\\
\frac{\partial \psi_{3}(t)}{\partial t^{T}} &= 4\left[(t+r_{1})^{T}(I_{k}-Q)(t+r_{1})\right](t+r_{1})^{T}(I_{k}-Q).
\end{align*}
By evaluating all partial derivatives at t = 0 and inserting them into (8), we obtain the Taylor series expansion of the function f₁(t)f₂^{−(p+n−k+2)/2}(ỹ, t). Then the approximate density function of zᵀΣ⁻¹z LA⁻¹z/zᵀA⁻¹z is given by
\[
f_{\frac{z^{T}\Sigma^{-1}z\,LA^{-1}z}{z^{T}A^{-1}z}}(y) \approx \frac{\lambda^{-k/2}|L\Sigma^{-1}L^{T}|^{-1/2}}{(2\pi)^{k/2}}\,\frac{\Gamma\!\left(\frac{p+n-k+2}{2}\right)}{\pi^{p/2}\,\Gamma\!\left(\frac{n-k+2}{2}\right)}
\int_{\mathbb{R}^{k}} \left[g_{0}(\tilde{y}) + \frac{1}{2}\,t^{T}g_{2}(\tilde{y})\,t\right]\exp\!\left(-\frac{t^{T}t}{2\lambda}\right) dt.
\]
Using Harville (1997, p. 321), the last integral leads to
\[
f_{\frac{z^{T}\Sigma^{-1}z\,LA^{-1}z}{z^{T}A^{-1}z}}(y) \approx \frac{|L\Sigma^{-1}L^{T}|^{-1/2}}{\pi^{p/2}}\,\frac{\Gamma\!\left(\frac{p+n-k+2}{2}\right)}{\Gamma\!\left(\frac{n-k+2}{2}\right)}
\left[g_{0}(\tilde{y}) + \frac{\lambda}{2}\,\mathrm{tr}[g_{2}(\tilde{y})]\right].
\]
Finally, applying the facts that zᵀΣ⁻¹z LA⁻¹z/zᵀA⁻¹z and zᵀA⁻¹z/zᵀΣ⁻¹z are independently distributed, and zᵀΣ⁻¹z/zᵀA⁻¹z ∼ χ²_{n−k+1}, we obtain
\[
f_{LA^{-1}z}(y) = \int_{0}^{+\infty} f_{\frac{z^{T}\Sigma^{-1}z\,LA^{-1}z}{z^{T}A^{-1}z}}(ys)\,f_{\chi^{2}_{n-k+1}}(s)\,s^{p}\,ds.
\]
Summarizing the above, we get the statement of the theorem. �
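Integrals of the type appearing in the last display, and in (7), are one-dimensional integrals against a chi-square density and are cheap to evaluate numerically. The sketch below is our illustration (not the authors' code): it evaluates an integral of the form ∫₀^∞ g(s) s^p f_{χ²_{n−k+1}}(s) ds with the trivial choice g = 1 and p = 1, for which the value is the mean of a χ²_{n−k+1} variable.

```python
# Numerical-evaluation sketch (ours): the outer integrals in Theorem 2.2 have
# the form int_0^inf g(s) s^p f_{chi2_{n-k+1}}(s) ds. With g = 1 and p = 1 the
# value is the chi-square mean, which serves as a correctness check.
import numpy as np
from scipy import stats, integrate

n, k, p = 50, 5, 1
m = n - k + 1                                   # chi-square degrees of freedom
val, err = integrate.quad(lambda s: s**p * stats.chi2.pdf(s, m), 0, np.inf)
print(val)   # ~ m = 46
```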
3. Numerical study
In this section we compare the area under the approximate density derived in Theorem 2.2 with the target value of unity. Moreover, the asymptotic and approximate densities obtained in Section 2 are compared with the exact one derived by Bodnar and Okhrin (2011, Theorem 1). The comparison is done for λ = 1/n, µ = (1, ..., 1)ᵀ, and Σ = Ik.
In the first numerical study, we investigate the performance of the approximate density function obtained in Theorem 2.2. For this reason the area under the approximate density function is compared with one. This comparison allows us to conclude how much information is lost by deleting the moments of order higher than two, which are positive. If the deviation from 1 is very large, then we should include higher-order moments in the Taylor series expansion.
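The diagnostic itself is simple to reproduce: integrate a density approximation over a fine grid and compare the area with 1. In the sketch below (our illustration) we use, as a stand-in for (7), the asymptotic density implied by Theorem 2.1 for k = 2, n = 60, µ = (1, 1)ᵀ, Σ = I₂, l = (1, 0)ᵀ, under which LA⁻¹z is approximately N(1/n, σ²/n³) with σ² = (1 + µᵀµ)·1 − 1 = 2.

```python
# Sketch (ours) of the normalization diagnostic behind Tables 1 and 2:
# numerically integrate a candidate density and compare the area with 1.
import numpy as np
from scipy import stats

n, sigma2 = 60, 2.0
loc, scale = 1.0 / n, np.sqrt(sigma2 / n**3)          # asymptotic mean and sd
x = np.linspace(loc - 8 * scale, loc + 8 * scale, 4001)
f = stats.norm.pdf(x, loc, scale)
area = np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0])   # trapezoidal rule
print(area)   # ~ 1
```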
n \ k      2          4          6          8          10
30         0.999974   1.000000   1.000000   1.000000   1.000000
60         1.000000   0.999917   0.999943   0.999972   0.999989
90         1.000000   0.999999   0.999993   0.999946   0.999475
120        1.000000   0.999999   0.999902   0.999898   0.999494

Table 1. Area under the approximate density function given in Theorem 2.2 for n ∈ {30, 60, 90, 120}, k ∈ {2, 4, 6, 8, 10}, p = 1, l = (1, 0, . . . , 0)ᵀ.
n \ k      11         12         13         14         15
30         0.999766   0.999678   0.999753   0.999958   0.999968
60         0.999662   0.999568   0.999893   0.9997898  0.999734
90         0.999659   0.999779   0.999753   0.999582   0.9987399
120        0.999677   0.999738   0.999431   0.999578   0.998789

Table 2. Area under the approximate density function given in Theorem 2.2 for n ∈ {30, 60, 90, 120}, k ∈ {11, 12, 13, 14, 15}, p = 10, L = (I_p, 0_{p,k−p}).
The results of the first simulation study are presented in Tables 1 and 2 for several values of k with p ∈ {1, 10} and L = (I_p, 0_{p,k−p}). From Tables 1 and 2 we observe that our approximate density works quite well for different values of p.

In the second numerical study, we compare the asymptotic and the approximate density functions with the exact one. The results are given for k ∈ {2, 4}, p = 1, and lᵀ = (1, 0, ..., 0). We note that in this case the exact density is given as a two-dimensional integral (see Theorem 1 b) of Bodnar and Okhrin (2011)). It is also remarkable that all integrals can be easily evaluated using any mathematical software, e.g., Mathematica. In Figure 1, the Taylor series approximation is shown by the short-dashed line, the asymptotic density by the long-dashed line, and the exact one by the solid line. We observe that the approximate density function coincides with the exact one. Furthermore, we observe that the densities are slightly skewed to the right. Also, we see that the asymptotic density lies very close to the exact and approximate densities.
4. Summary
In this paper, the product of an inverse Wishart random matrix and a normally distributed random vector is studied. We derived its asymptotic distribution. Moreover, we obtained a formula for an approximate density of linear functions of the product, based on the Gaussian integral and the third-order Taylor expansion. In the numerical study we documented the good performance of the approximate and asymptotic densities.
[Figure 1 consists of four panels, each plotting f(x) against x for three curves: the exact density (solid), the Taylor series approximation (short-dashed), and the asymptotic density (long-dashed). Panels: a) k = 2, n = 60, l = (1, 0)ᵀ; b) k = 2, n = 120, l = (1, 0)ᵀ; c) k = 4, n = 60, l = (1, 0, 0, 0)ᵀ; d) k = 4, n = 120, l = (1, 0, 0, 0)ᵀ.]

Figure 1. The exact density and its two approximations as given in Section 2 with k ∈ {2, 4}, n ∈ {60, 120}, and l ∈ {(1, 0)ᵀ, (1, 0, 0, 0)ᵀ}.
5. Acknowledgements
The authors are thankful to Professor Yuliya Mishura and two anonymous reviewers for their careful reading of the paper and for suggestions which improved an earlier version of this paper. We also thank Khrystyna Kyrylych for her comments used in the preparation of the revised version of the paper.
References
[1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing, Dover, New York, 1972.
[2] S. F. Arnold, Mathematical Statistics, Prentice-Hall, New Jersey, 1990.
[3] J. Bai and S. Shi, Estimating High Dimensional Covariance Matrices and its Applications, Annals of Economics and Finance 12 (2011), 199–215.
[4] T. Bodnar and A. K. Gupta, Estimation of the Precision Matrix of Multivariate Elliptically Contoured Stable Distribution, Statistics 45 (2011), 131–142.
[5] T. Bodnar, A. K. Gupta and N. Parolya, On the Strong Convergence of the Optimal Linear Shrinkage Estimator for Large Dimensional Covariance Matrix, Journal of Multivariate Analysis 132 (2014a), 215–228.
[6] T. Bodnar, A. K. Gupta and N. Parolya, Direct Shrinkage Estimation of Large Dimensional Precision Matrix, forthcoming in Journal of Multivariate Analysis (2015a).
[7] T. Bodnar, S. Mazur and K. Podgorski, Singular Wishart Distribution and its Application to Portfolio Theory, forthcoming in Journal of Multivariate Analysis (2015b).
[8] T. Bodnar, S. Mazur and Y. Okhrin, On the Exact and Approximate Distributions of the Product of a Wishart Matrix with a Normal Vector, Journal of Multivariate Analysis 125 (2013), 176–189.
[9] T. Bodnar, S. Mazur and Y. Okhrin, Distribution of the Product of Singular Wishart Matrix and Normal Vector, Theory of Probability and Mathematical Statistics 91 (2014b), 1–14.
[10] T. Bodnar and Y. Okhrin, Properties of the Partitioned Singular, Inverse and Generalized Wishart Distributions, Journal of Multivariate Analysis 99 (2008), 2389–2405.
[11] T. Bodnar and Y. Okhrin, On the Product of Inverse Wishart and Normal Distributions with Applications to Discriminant Analysis and Portfolio Theory, Scandinavian Journal of Statistics 38 (2011), 311–331.
[12] T. Bodnar and W. Schmid, A Test for the Weights of the Global Minimum Variance Portfolio in an Elliptical Model, Metrika 67 (2008), 127–143.
[13] T. Cai, W. Liu and X. Luo, A Constrained l1 Minimization Approach to Sparse Precision Matrix Estimation, Journal of the American Statistical Association 106 (2011), 594–607.
[14] T. Cai and M. Yuan, Adaptive Covariance Matrix Estimation Through Block Thresholding, Annals of Statistics 40 (2012), 2014–2042.
[15] T. Cai and H. Zhou, Minimax Estimation of Large Covariance Matrices under l1 Norm, Statistica Sinica 22 (2012), 1319–1378.
[16] J. A. Díaz-García, R. Gutiérrez-Jáimez and K. V. Mardia, Wishart and Pseudo-Wishart Distributions and Some Applications to Shape Theory, Journal of Multivariate Analysis 63 (1997), 73–87.
[17] M. Drton, H. Massam and I. Olkin, Moments of Minors of Wishart Matrices, Annals of Statistics 36 (2008), 2261–2283.
[18] D. A. Harville, Matrix Algebra from a Statistician's Perspective, Springer, New York, 1997.
[19] D. von Rosen, Moments for the Inverted Wishart Distribution, Scandinavian Journal of Statistics 15 (1988), 97–109.
[20] P. Jorion, Bayes-Stein Estimation for Portfolio Analysis, Journal of Financial and Quantitative Analysis 21 (1986), 279–292.
[21] O. Ledoit and M. Wolf, A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices, Journal of Multivariate Analysis 88 (2004), 365–411.
[22] K. V. Mardia, J. T. Kent and J. M. Bibby, Multivariate Analysis, Academic Press, London, 1979.
[23] R. J. Muirhead, Aspects of Multivariate Statistical Theory, Wiley, New York, 1982.
[24] C. Stein, Inadmissibility of the Usual Estimator of the Mean of a Multivariate Normal Distribution, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability (ed. J. Neyman), University of California, Berkeley, 197–206, 1956.
[25] G. P. H. Styan, Three Useful Expressions for Expectations Involving a Wishart Matrix and its Inverse, in Statistical Data Analysis and Inference (ed. Y. Dodge), Elsevier Science, Amsterdam, 283–296, 1989.
Department of Probability Theory and Statistics, Ivan Franko National University of Lviv, Universytetska 1, 79000, Lviv, Ukraine
E-mail address: [email protected]
Department of Statistics, Lund University, PO Box 743, SE-22007 Lund, Sweden
E-mail address: [email protected]
Working Papers in Statistics 2015
LUND UNIVERSITY
SCHOOL OF ECONOMICS AND MANAGEMENT
Department of Statistics
Box 743
220 07 Lund, Sweden

http://journals.lub.lu.se/stat