Minimum-variance unbiased estimator

In statistics, a uniformly minimum-variance unbiased estimator or minimum-variance unbiased estimator (UMVUE or MVUE) is an unbiased estimator whose variance is no greater than that of any other unbiased estimator, for every possible value of the parameter.

The question of determining the UMVUE, if one exists, for a particular problem is important for practical statistics, since less-than-optimal procedures would naturally be avoided, other things being equal. This has led to substantial development of statistical theory related to the problem of optimal estimation. While the particular specification of "optimal" here — requiring unbiasedness and measuring "goodness" using the variance — may not always be what is wanted for any given practical situation, it is one where useful and generally applicable results can be found.

Definition

Consider estimation of g(θ) based on data X_1, X_2, \ldots, X_n i.i.d. from some member of a family of densities p_\theta, \theta \in \Omega, where Ω is the parameter space. An unbiased estimator \delta(X_1, X_2, \ldots, X_n) of g(θ) is UMVU if, for every \theta \in \Omega,

 \mathrm{var}_\theta(\delta(X_1, X_2, \ldots, X_n)) \leq \mathrm{var}_\theta(\tilde{\delta}(X_1, X_2, \ldots, X_n))

for any other unbiased estimator \tilde{\delta} of g(θ).
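
As a quick illustration (a minimal simulation sketch, assuming NumPy; the parameter values are chosen only for demonstration), consider two unbiased estimators of a normal mean: the sample mean and the sample median. Both are unbiased, but the sample mean, which is the MVUE in this model, has the smaller variance at every parameter value.

    import numpy as np

    # Compare two unbiased estimators of a normal mean.
    # For N(mu, sigma^2) data the sample mean and the sample median are
    # both unbiased for mu (the median by symmetry), but the sample mean
    # has the smaller variance; it is the MVUE in this model.
    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 3.0, 2.0, 25, 100_000
    samples = rng.normal(mu, sigma, size=(reps, n))
    means = samples.mean(axis=1)
    medians = np.median(samples, axis=1)
    print("mean   estimator: bias %+.4f, variance %.4f" % (means.mean() - mu, means.var()))
    print("median estimator: bias %+.4f, variance %.4f" % (medians.mean() - mu, medians.var()))
    # Expected: variance near sigma^2/n = 0.16 for the mean and roughly
    # pi*sigma^2/(2n) ~ 0.25 for the median.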

If an unbiased estimator of g(θ) exists, then one can prove there is an essentially unique MVUE. Using the Rao–Blackwell theorem one can also prove that determining the MVUE is simply a matter of finding a complete sufficient statistic for the family p_\theta, \theta \in \Omega and conditioning any unbiased estimator on it.

Further, by the Lehmann–Scheffé theorem, an unbiased estimator that is a function of a complete, sufficient statistic is the UMVU estimator.

Put formally, suppose \delta(X_1, X_2, \ldots, X_n) is unbiased for g(θ), and that T is a complete sufficient statistic for the family of densities. Then

 \eta(X_1, X_2, \ldots, X_n) = \mathrm{E}(\delta(X_1, X_2, \ldots, X_n) \mid T)

is the MVUE for g(θ).
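
To make the conditioning step concrete, here is a standard textbook illustration (a simulation sketch, assuming NumPy, with illustrative parameters; the Poisson model is not part of the statement above). For X_1, \ldots, X_n i.i.d. Poisson(λ), the indicator \delta = 1\{X_1 = 0\} is unbiased for g(λ) = e^{-\lambda}, the statistic T = \sum X_i is complete sufficient, and conditioning yields the UMVUE \eta = ((n-1)/n)^T.

    import numpy as np

    # Rao-Blackwellization in the Poisson model. For X_1,...,X_n i.i.d.
    # Poisson(lam), T = sum(X_i) is complete sufficient, and for
    # g(lam) = exp(-lam) = P(X_1 = 0):
    #   crude unbiased estimator:  delta = 1{X_1 = 0}
    #   conditioned on T:          eta   = ((n-1)/n)**T   (the UMVUE)
    rng = np.random.default_rng(1)
    lam, n, reps = 1.5, 10, 200_000
    x = rng.poisson(lam, size=(reps, n))
    delta = (x[:, 0] == 0).astype(float)
    eta = ((n - 1) / n) ** x.sum(axis=1)
    print("target e^(-lam)  :", np.exp(-lam))
    print("E[delta], var    :", delta.mean(), delta.var())
    print("E[eta],   var    :", eta.mean(), eta.var())
    # Both estimators are unbiased, but eta = E(delta | T) has a strictly
    # smaller variance, as the Rao-Blackwell theorem guarantees.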

The Bayesian analog is a Bayes estimator, particularly one with minimum mean square error (MMSE).

Estimator selection

An efficient estimator need not exist, but if it does and if it is unbiased, it is the MVUE. Since the mean squared error (MSE) of an estimator δ is

 \operatorname{MSE}(\delta) = \mathrm{var}(\delta) + [\mathrm{bias}(\delta)]^{2},

the MVUE minimizes MSE among unbiased estimators. In some cases biased estimators have lower MSE because they have a smaller variance than does any unbiased estimator; see estimator bias.
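
The decomposition of the MSE itself follows by expanding the square about \mathrm{E}\delta:

 \operatorname{MSE}(\delta) = \mathrm{E}[(\delta - g(\theta))^2] = \mathrm{E}[(\delta - \mathrm{E}\delta)^2] + (\mathrm{E}\delta - g(\theta))^2 = \mathrm{var}(\delta) + [\mathrm{bias}(\delta)]^{2},

where the cross term 2\,\mathrm{E}[\delta - \mathrm{E}\delta]\,(\mathrm{E}\delta - g(\theta)) vanishes because \mathrm{E}[\delta - \mathrm{E}\delta] = 0.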

Example

Consider the data to be a single observation X from an absolutely continuous distribution on \mathbb{R} with density

 p_\theta(x) = \frac{ \theta e^{-x} }{(1 + e^{-x})^{\theta + 1} }

and we wish to find the UMVU estimator of

 g(\theta) = \frac{1}{\theta^{2}}

First we recognize that the density can be written as

 \frac{ e^{-x} } { 1 + e^{-x} } \exp( -\theta \log(1 + e^{-x}) + \log(\theta))

which is an exponential family with sufficient statistic T = \log(1 + e^{-X}). In fact this is a full-rank exponential family, and therefore T is complete sufficient. See exponential family for a derivation which shows

 \mathrm{E}(T) = \frac{1}{\theta},\quad \mathrm{var}(T) = \frac{1}{\theta^{2}}
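
These moments can also be obtained directly (a short derivation added here for completeness): substituting t = \log(1 + e^{-x}), so that dt = -\frac{e^{-x}}{1 + e^{-x}}\,dx, transforms the density above into

 p_\theta(t) = \theta e^{-\theta t}, \quad t > 0,

so T is exponentially distributed with rate θ, which gives \mathrm{E}(T) = 1/\theta and \mathrm{var}(T) = 1/\theta^{2}.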

Therefore, since \mathrm{E}(T^{2}) = \mathrm{var}(T) + [\mathrm{E}(T)]^{2},

 \mathrm{E}(T^2) = \frac{2}{\theta^{2}}

Clearly \delta(X) = \frac{T^2}{2} is unbiased, and since it is already a function of the complete sufficient statistic T, the UMVU estimator is

 \eta(X) = \mathrm{E}(\delta(X) \mid T) = \mathrm{E}\left( \left. \frac{T^2}{2} \,\right|\, T \right) = \frac{T^{2}}{2} = \frac{\left[\log(1 + e^{-X})\right]^{2}}{2}

This example illustrates that an unbiased function of the complete sufficient statistic is UMVU, as the Lehmann–Scheffé theorem states.
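
A quick Monte Carlo sanity check of this result (a sketch, assuming NumPy; θ is set to an illustrative value) exploits the fact that T \sim \mathrm{Exponential}(\theta): one can draw T and invert T = \log(1 + e^{-X}) to simulate X.

    import numpy as np

    # Monte Carlo check that eta = T^2 / 2 is unbiased for 1/theta^2.
    # T = log(1 + exp(-X)) ~ Exponential(theta), so sample T and invert:
    # X = -log(exp(T) - 1).
    rng = np.random.default_rng(2)
    theta, reps = 2.0, 500_000
    t = rng.exponential(1.0 / theta, size=reps)  # NumPy takes scale = 1/rate
    x = -np.log(np.expm1(t))                     # invert the map T -> X
    eta = np.log(1.0 + np.exp(-x)) ** 2 / 2.0    # the UMVU estimator
    print("target g(theta) = 1/theta^2:", 1.0 / theta**2)
    print("Monte Carlo mean of eta    :", eta.mean())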

Other examples

  • For a normal distribution with unknown mean and variance, the sample mean and (unbiased) sample variance are the MVUEs for the population mean and population variance.
    However, the sample standard deviation is not unbiased for the population standard deviation – see unbiased estimation of standard deviation.
    Further, for other distributions the sample mean and sample variance are not in general MVUEs – for a uniform distribution with unknown upper and lower bounds, the mid-range is the MVUE for the population mean.
  • If k exemplars are chosen (without replacement) from a discrete uniform distribution over the set {1, 2, ..., N} with unknown upper bound N, the MVUE for N is
\frac{k+1}{k} m - 1,
where m is the sample maximum. This is a scaled and shifted (hence unbiased) transform of the sample maximum, which is a sufficient and complete statistic; see the German tank problem for details and the sketch below for a numerical check.
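
A small simulation (a sketch, assuming NumPy; the values of N and k are illustrative) confirms the unbiasedness:

    import numpy as np

    # The discrete-uniform MVUE (the "German tank" estimator):
    # draw k values without replacement from {1, ..., N}, take the
    # sample maximum m, and estimate N by (k+1)/k * m - 1.
    rng = np.random.default_rng(3)
    N, k, reps = 100, 5, 100_000
    estimates = np.empty(reps)
    for i in range(reps):
        m = rng.choice(N, size=k, replace=False).max() + 1  # values in 1..N
        estimates[i] = (k + 1) / k * m - 1
    print("true N       :", N)
    print("mean of MVUE :", estimates.mean())  # close to N (unbiased)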

See also

  • Bayesian analogs: Bayes estimator, minimum mean square error (MMSE)
