DEFINITIONS

An estimator of a population parameter is a random variable that depends on the sample information and whose realizations provide approximations to this unknown parameter. A specific realization of that random variable is called an estimate.
A point estimator of a population parameter is a function of the sample information that yields a single number. The corresponding realization is called the point estimate of the parameter.
POPULATION PARAMETER        ESTIMATOR                                   ESTIMATE

Mean (μ)                    X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ                          x̄
Variance (σ²)               S² = (1/(n−1)) Σᵢ₌₁ⁿ (Xᵢ − X̄)²              s²
Standard Deviation (σ)      S = √[ (1/(n−1)) Σᵢ₌₁ⁿ (Xᵢ − X̄)² ]          s
Proportion (P)              P̂ = X/n                                     p̂
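As a concrete illustration, all four point estimates in the table can be computed directly from a sample. The data below are hypothetical, made up purely for this example:

```python
import math

# Hypothetical sample of n = 8 measurements (illustrative data only)
sample = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7]
n = len(sample)

# Point estimate of the mean: x-bar = (1/n) * sum(x_i)
x_bar = sum(sample) / n

# Point estimate of the variance: s^2 = (1/(n-1)) * sum((x_i - x_bar)^2)
s_squared = sum((x - x_bar) ** 2 for x in sample) / (n - 1)

# Point estimate of the standard deviation: s = sqrt(s^2)
s = math.sqrt(s_squared)

# Point estimate of a proportion: p-hat = X/n, where X counts "successes"
# (here, hypothetically, observations above 5.0)
p_hat = sum(1 for x in sample if x > 5.0) / n

print(x_bar, s_squared, s, p_hat)
```

Note that each of these is a single number computed from the sample, which is exactly what distinguishes a point estimate from an interval estimate.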
PROPERTIES OF GOOD POINT ESTIMATORS
A good estimator must satisfy three conditions:
Unbiased: The estimator θ̂ is said to be an unbiased estimator of the parameter θ if the mean of the sampling distribution of θ̂ is θ. In other words, the expected value of the estimator must be equal to the value of the parameter:

E(θ̂) = θ
The sample mean, variance, and proportion are unbiased estimators of the corresponding population quantities.

In general, the sample standard deviation is not an unbiased estimator of the population standard deviation.
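This can be checked by simulation. The sketch below (seed, sample size, and replication count are arbitrary choices) repeatedly draws small normal samples and averages S² and S over many replications: the average of S² lands near σ² = 1, while the average of S falls noticeably below σ = 1.

```python
import math
import random
import statistics

random.seed(0)
sigma = 1.0      # true population standard deviation
n = 5            # small sample size, where the bias of S is most visible
reps = 20000     # number of simulated samples

s2_sum, s_sum = 0.0, 0.0
for _ in range(reps):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    s2 = statistics.variance(sample)   # sample variance, divides by n - 1
    s2_sum += s2
    s_sum += math.sqrt(s2)

mean_s2 = s2_sum / reps   # close to sigma^2 = 1: S^2 is unbiased
mean_s = s_sum / reps     # systematically below sigma = 1: S is biased
print(mean_s2, mean_s)
```

The reason is that the square root is a concave function, so E(√S²) < √E(S²) = σ even though E(S²) = σ².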
UNBIASEDNESS OF SOME ESTIMATORS
Let θ̂ be an estimator of θ. The bias in θ̂ is defined as the difference between its mean and θ; that is,

Bias(θ̂) = E(θ̂) − θ

It follows that the bias of an unbiased estimator is 0.
EFFICIENCY

Let θ̂₁ and θ̂₂ be two unbiased estimators of θ, based on the same number of sample observations. Then:

(i) θ̂₁ is said to be more efficient than θ̂₂ if Var(θ̂₁) < Var(θ̂₂).

(ii) The relative efficiency of one estimator with respect to the other is the ratio of their variances; that is,

Relative efficiency = Var(θ̂₂) / Var(θ̂₁)

If this ratio is greater than 1, θ̂₁ is the more efficient estimator.
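For example, with normally distributed data both the sample mean and the sample median are unbiased estimators of μ, but the mean has the smaller variance. A quick simulation (all parameters chosen arbitrarily for illustration) estimates their relative efficiency:

```python
import random
import statistics

random.seed(1)
mu, sigma = 0.0, 1.0   # hypothetical normal population
n, reps = 25, 10000    # sample size and number of simulated samples

means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.variance(means)
var_median = statistics.variance(medians)
rel_eff = var_median / var_mean   # > 1: the mean is the more efficient estimator
print(var_mean, var_median, rel_eff)
```

For large samples from a normal population this ratio approaches π/2 ≈ 1.57; for heavy-tailed populations the ordering can reverse, which is why efficiency comparisons depend on the assumed distribution.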
If θ̂ is an unbiased estimator of θ, and no other unbiased estimator has smaller variance, then θ̂ is said to be the most efficient, or minimum variance unbiased, estimator of θ.
CHOICE OF POINT ESTIMATOR

There are estimation problems for which no unbiased estimator is very satisfactory, and for which there may be much to be gained from accepting a little bias. One measure of the expected closeness of an estimator θ̂ to a parameter θ is its mean squared error, the expectation of the squared difference between the estimator and the parameter; that is,

MSE(θ̂) = E[(θ̂ − θ)²]

It can be shown that

MSE(θ̂) = Var(θ̂) + [Bias(θ̂)]²
CONSISTENCY

Also desirable is that an estimate tend to lie nearer the population characteristic as the sample size becomes larger. This is the basis of the property of consistency.

An estimator θ̂ is a consistent estimator of a population characteristic θ if, the larger the sample size, the more likely it is that the estimate will be close to θ.
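Consistency of the sample mean can be seen in a small simulation (all numbers here are arbitrary): as n grows, a larger and larger fraction of sample means falls within a fixed distance of the true mean μ.

```python
import random

random.seed(3)
mu = 5.0       # true population mean (hypothetical)
reps = 2000    # simulated samples per sample size

# Fraction of sample means within 0.1 of mu, for increasing n
fractions = {}
for n in (10, 100, 1000):
    close = sum(
        abs(sum(random.gauss(mu, 1.0) for _ in range(n)) / n - mu) <= 0.1
        for _ in range(reps)
    )
    fractions[n] = close / reps

print(fractions)   # the fractions increase with n
```

This is the law of large numbers at work: the standard deviation of X̄ is σ/√n, which shrinks toward 0 as n grows.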
INTERVAL ESTIMATION

An interval estimator for a population parameter is a rule for determining (based on sample information) a range, or interval, in which the parameter is likely to fall. The corresponding estimate is called an interval estimate.
Let θ be an unknown parameter. Suppose that, on the basis of sample information, we can find random variables A and B such that

P(A < θ < B) = 1 − α

If the specific sample realizations of A and B are denoted by a and b, then the interval from a to b is called a 100(1 − α)% confidence interval for θ. The quantity (1 − α) is called the probability content, or level of confidence, of the interval.

If the population were repeatedly sampled a very large number of times, the parameter θ would be contained in 100(1 − α)% of the intervals calculated this way.
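This "repeated sampling" interpretation can be checked by simulation. The sketch below (all parameters hypothetical, with σ assumed known so that z₀.₀₂₅ = 1.96 applies) builds 95% intervals from many samples and counts how often they contain μ:

```python
import random
import statistics

random.seed(4)
mu, sigma = 10.0, 2.0   # hypothetical population mean and (known) std. deviation
n, reps = 30, 5000      # sample size and number of simulated samples
z = 1.96                # z value for a 95% confidence level

covered = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    x_bar = statistics.fmean(sample)
    half = z * sigma / n ** 0.5
    a, b = x_bar - half, x_bar + half   # realizations of A and B
    covered += (a <= mu <= b)

coverage = covered / reps
print(coverage)   # close to 0.95
```

Note that the randomness is in the endpoints a and b, not in μ: each interval either contains the fixed parameter or it does not, and 95% refers to the long-run fraction that do.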
ESTIMATION FOR FINITE POPULATIONS
When the sample is large relative to the population, that is, when

n/N > 0.05

use the finite population correction factor. The 100(1 − α)% confidence interval for the population mean then becomes

x̄ ± t(n−1, α/2) · (s/√n) · √((N − n)/(N − 1))
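A worked numeric example with entirely made-up numbers: a sample of n = 50 from a population of N = 400, so n/N = 0.125 > 0.05 and the correction applies. The value t ≈ 2.0096 is the upper 2.5% point of the t distribution with 49 degrees of freedom.

```python
import math

# Hypothetical data: sample of n = 50 from a population of N = 400,
# with sample mean 120 and sample standard deviation 15
N, n = 400, 50
x_bar, s = 120.0, 15.0
t = 2.0096   # t(n-1, alpha/2) for 49 degrees of freedom, 95% confidence

# n/N = 0.125 > 0.05, so apply the finite population correction
fpc = math.sqrt((N - n) / (N - 1))
half_width = t * (s / math.sqrt(n)) * fpc
lower, upper = x_bar - half_width, x_bar + half_width
print(lower, upper)
```

Because fpc < 1, the corrected interval is narrower than the uncorrected one: sampling a large fraction of a finite population leaves less uncertainty about its mean.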