Minimum distance estimation

Minimum distance estimation (MDE) is a statistical method for fitting a model to data by minimising a measure of distance between the model's distribution function and the empirical distribution of the data.

Definition

Let X_1, \ldots, X_n be an independent and identically distributed (iid) random sample from a population with distribution F(x;\theta), where \theta \in \Theta and \Theta \subseteq \mathbb{R}^k (k \geq 1).

Let F_n(x) be the empirical distribution function based on the sample.

Let \hat{\theta} be an estimator for \theta. Then F(x;\hat{\theta}) is an estimator for F(x;\theta).

Let d[\cdot,\cdot] be a functional returning some measure of "distance" between its two arguments. The functional d is also called the criterion function.

If there exists a \hat{\theta} \in \Theta such that

d[F(x;\hat{\theta}), F_n(x)] = \inf\{ d[F(x;\theta), F_n(x)] : \theta \in \Theta \},

then \hat{\theta} is called the minimum distance estimate of \theta.
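As a concrete illustration of the definition, the following sketch numerically minimises d[F(x;\theta), F_n(x)] over \theta. It assumes NumPy and SciPy are available; the normal location-scale model, the starting values, and the simple sum-of-squares criterion are illustrative choices introduced here, not part of the definition.

    import numpy as np
    from scipy import optimize, stats

    def mde(sample, model_cdf, theta0, d):
        """Return the theta that minimises d(F(.; theta), F_n)."""
        x = np.sort(sample)
        n = len(x)
        # Empirical CDF evaluated at the order statistics: F_n(x_(i)) = i/n.
        ecdf = np.arange(1, n + 1) / n

        def objective(theta):
            return d(model_cdf(x, theta), ecdf)

        result = optimize.minimize(objective, theta0, method="Nelder-Mead")
        return result.x

    # Example: fit a normal location-scale model with a simple
    # sum-of-squares criterion (illustrative only).
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=2.0, scale=1.5, size=500)

    def normal_cdf(x, theta):
        # abs() is a crude way to keep the scale parameter positive.
        return stats.norm.cdf(x, loc=theta[0], scale=abs(theta[1]))

    def sum_of_squares(model_vals, ecdf_vals):
        return np.sum((model_vals - ecdf_vals) ** 2)

    theta_hat = mde(sample, normal_cdf, theta0=[0.0, 1.0], d=sum_of_squares)

Any of the criterion functions described in the next section could be passed in place of the sum-of-squares example.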

Statistics used in estimation

Most theoretical studies of minimum distance estimation, and most applications, make use of "distance" measures which underlie already-established goodness of fit tests: the test statistic from one of these tests is taken as the distance measure to be minimised. Below are some examples of statistical tests that have been used for minimum distance estimation.

Chi-square criterion

The chi-square test uses as its criterion the sum, over predefined groups, of the squared difference between the increase of the empirical distribution function and the increase of the estimated distribution function over each group, divided by the increase of the estimated distribution for that group.
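Partitioning the support into groups I_1, \ldots, I_m and writing p_j(\theta) for the increase of F(x;\theta) over I_j and \hat{p}_j for the corresponding increase of F_n(x) (notation introduced here for illustration), the criterion takes the form

d[F(x;\theta), F_n(x)] = \sum_{j=1}^{m} \frac{(\hat{p}_j - p_j(\theta))^2}{p_j(\theta)}.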

Cramér–von Mises criterion

The Cramér–von Mises criterion uses the integral of the squared difference between the empirical and the estimated distribution functions, taken with respect to the estimated distribution.
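In symbols (a constant factor such as the sample size n, which appears in the test statistic, does not affect the minimiser):

d[F(x;\theta), F_n(x)] = \int_{-\infty}^{\infty} \bigl( F_n(x) - F(x;\theta) \bigr)^2 \, dF(x;\theta).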

Kolmogorov–Smirnov criterion

The Kolmogorov–Smirnov test uses the supremum of the absolute difference between the empirical and the estimated distribution functions.
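In symbols:

d[F(x;\theta), F_n(x)] = \sup_{x} \bigl| F_n(x) - F(x;\theta) \bigr|.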

Anderson–Darling criterion

The Anderson–Darling test is similar to the Cramér–von Mises criterion except that the integral is of a weighted version of the squared difference, where the weighting relates to the variance of the empirical distribution function.
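In symbols, the weight at each point is inversely proportional to the variance of F_n(x) under the model, which is F(x;\theta)(1 - F(x;\theta))/n:

d[F(x;\theta), F_n(x)] = \int_{-\infty}^{\infty} \frac{\bigl( F_n(x) - F(x;\theta) \bigr)^2}{F(x;\theta)\,\bigl(1 - F(x;\theta)\bigr)} \, dF(x;\theta).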

Theoretical results

The theory of minimum distance estimation is related to that for the asymptotic distribution of the corresponding statistical goodness of fit tests. The cases of the Cramér–von Mises criterion, the Kolmogorov–Smirnov test and the Anderson–Darling test are often handled together by treating them as special cases of a more general formulation of a distance measure. Available theoretical results include the consistency of the parameter estimates and their asymptotic covariance matrices.
