Nuisance parameter

In statistics, a nuisance parameter is any parameter which is not of immediate interest but which must be accounted for in the analysis of those parameters which are of interest. The classic example of a nuisance parameter is the variance, σ², of a normal distribution, when the mean, μ, is of primary interest.
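As an illustrative sketch of this classic example (notation ours, not from the original text), the likelihood of a normal sample involves both parameters, so inference about μ cannot simply ignore σ²:

\[
L(\mu, \sigma^2 \mid x_1, \dots, x_n)
  = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\!\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right).
\]

Here μ is the parameter of interest, while σ² enters the likelihood and affects inference about μ without being of interest in itself.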

Nuisance parameters are often variances, but not always; for example in an errors-in-variables model, the unknown true location of each observation is a nuisance parameter. In general, any parameter which intrudes on the analysis of another may be considered a nuisance parameter. A parameter may also cease to be a "nuisance" if it becomes the object of study, as the variance of a distribution may be.

Theoretical statistics

The general treatment of nuisance parameters can be broadly similar between frequentist and Bayesian approaches to theoretical statistics. It relies on an attempt to partition the likelihood function into components representing information about the parameters of interest and information about the other (nuisance) parameters. This can involve ideas about sufficient statistics and ancillary statistics. When this partition can be achieved, it may be possible to complete a Bayesian analysis for the parameters of interest by determining their joint posterior distribution algebraically. The partition also allows frequentist theory to develop general estimation approaches in the presence of nuisance parameters. If the partition cannot be achieved exactly, it may still be possible to make use of an approximate partition.
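Schematically (again in our own notation, as a sketch rather than a general result), the ideal partition corresponds to a factorization of the likelihood in which a statistic T carries the information about the parameter of interest θ and a statistic S carries the information about the nuisance parameter λ:

\[
L(\theta, \lambda \mid x) \propto g\bigl(T(x) \mid \theta\bigr)\, h\bigl(S(x) \mid \lambda\bigr),
\]

so that inference about θ can be based on the first factor alone. When only an approximate factorization of this kind is available, the same reasoning is applied approximately.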

In some special cases, it is possible to formulate methods that circumvent the presence of nuisance parameters. The t-test provides a practically useful test because the test statistic does not depend on the unknown variance; it is a case where use can be made of a pivotal quantity. However, in other cases no such circumvention is known.
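A short numerical sketch of why the one-sample t-test sidesteps the unknown variance (hypothetical simulated data, using NumPy and SciPy): under the null hypothesis the statistic t = (x̄ − μ₀)/(s/√n) follows a Student t distribution with n − 1 degrees of freedom whatever the true σ is, which is what makes it pivotal with respect to σ.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0 = 0.0          # hypothesized mean under the null
n = 20

# Simulate the null distribution of the t statistic for two very different
# (in practice unknown) values of sigma; the two simulated distributions
# agree with each other and with the Student t reference distribution.
for sigma in (0.5, 5.0):
    t_stats = []
    for _ in range(10_000):
        x = rng.normal(mu0, sigma, size=n)
        t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
        t_stats.append(t)
    # Compare the simulated 97.5% point with the Student-t critical value.
    print(sigma, np.quantile(t_stats, 0.975), stats.t.ppf(0.975, df=n - 1))

Because the null distribution of t does not involve σ, critical values and p-values can be computed without ever estimating the nuisance parameter for that purpose.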

Practical statistics

Practical approaches to statistical analysis treat nuisance parameters somewhat differently in frequentist and Bayesian methodologies.

A general approach in a frequentist analysis can be based on likelihood-ratio tests. These provide both significance tests and confidence intervals for the parameters of interest which are approximately valid for moderate to large sample sizes and which take account of the presence of nuisance parameters.
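The following is a minimal sketch of the profile-likelihood idea behind such tests, for the normal mean with the variance as nuisance parameter (hypothetical simulated data, NumPy/SciPy): for each fixed value of the mean, the variance is maximized out of the likelihood, and the resulting likelihood-ratio statistic is referred to a chi-squared distribution with one degree of freedom.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=2.0, size=50)   # hypothetical sample
n = x.size

def profile_loglik(mu):
    # For fixed mu, the MLE of the nuisance variance is the mean squared
    # deviation about mu; plugging it back in gives the profile log-likelihood.
    sigma2_hat = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)

mu_hat = x.mean()                     # unrestricted MLE of the mean
mu0 = 0.0                             # null value of interest
lr = 2.0 * (profile_loglik(mu_hat) - profile_loglik(mu0))
p_value = stats.chi2.sf(lr, df=1)     # asymptotic approximation
print(lr, p_value)

An approximate confidence interval for the mean consists of all values mu0 whose likelihood-ratio statistic falls below the relevant chi-squared quantile; in this particular normal case the procedure reproduces the exact t-based interval up to the asymptotic approximation.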

In Bayesian analysis, a generally applicable approach creates random samples from the joint posterior distribution of all the parameters: see Markov chain Monte Carlo. Given these, the joint distribution of only the parameters of interest can be readily found by marginalizing over the nuisance parameters. However, this approach may not always be computationally efficient if some or all of the nuisance parameters can be eliminated on a theoretical basis.
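As a minimal sketch of marginalization by simulation (hypothetical data, NumPy only, and assuming the standard non-informative prior p(μ, σ²) ∝ 1/σ² for the normal model), a Gibbs sampler draws the nuisance parameter σ² alongside μ, and the σ² draws are then simply discarded when summarizing the marginal posterior of μ.

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=3.0, size=40)   # hypothetical sample
n, xbar = x.size, x.mean()

n_draws = 5_000
mu_draws = np.empty(n_draws)
sigma2 = x.var()                              # starting value

for i in range(n_draws):
    # mu | sigma2, x  ~  Normal(xbar, sigma2 / n)
    mu = rng.normal(xbar, np.sqrt(sigma2 / n))
    # sigma2 | mu, x  ~  Inverse-Gamma(n/2, sum((x - mu)^2) / 2)
    ss = np.sum((x - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / ss)
    mu_draws[i] = mu

# Marginal posterior summary for mu: the sigma2 draws are simply ignored,
# discarding an initial portion of the chain as burn-in.
print(mu_draws[500:].mean(), np.quantile(mu_draws[500:], [0.025, 0.975]))

In this simple conjugate example the marginal posterior of μ is available in closed form (a scaled and shifted Student t distribution), which illustrates the point in the preceding paragraph: simulation is most valuable when no such analytical elimination of the nuisance parameters is possible.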
