Studentized residual

In statistics, a studentized residual, named in honor of William Sealey Gosset, who wrote under the pseudonym "Student", is a residual adjusted by dividing it by an estimate of its standard deviation. Studentization of residuals is an important technique in the detection of outliers.

Errors versus residuals

It is very important to understand the difference between errors and residuals in statistics. Consider the simple linear regression model

:Y_i = \alpha_0 + \alpha_1 x_i + \varepsilon_i,

where the errors ε_i, "i" = 1, ..., "n", are independent and all have the same variance σ². The residuals are not the true (and unobservable) errors, but rather "estimates" of the errors based on the observable data. When the method of least squares is used to estimate α_0 and α_1, the residuals \widehat{\varepsilon}_i, unlike the errors \varepsilon_i, cannot be independent, since they satisfy the two constraints

:\sum_{i=1}^n \widehat{\varepsilon}_i = 0

and

:\sum_{i=1}^n \widehat{\varepsilon}_i x_i = 0.

(Here \varepsilon_i is the "i"th error, and \widehat{\varepsilon}_i is the "i"th residual.) Moreover, the residuals, unlike the errors, do not all have the same variance: the variance decreases as the corresponding "x"-value gets farther from the average "x"-value. "The fact that the variances of the residuals differ, even though the variances of the true errors are all equal to each other, is the principal reason for the need for studentization."
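
These two constraints are easy to verify numerically. Below is a minimal Python sketch, with the data, coefficients, and random seed invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from the model Y_i = alpha_0 + alpha_1 x_i + epsilon_i
# (coefficients 2.0 and 0.5 are invented for this example)
n = 20
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

# Least-squares fit of the simple linear model
X = np.column_stack([np.ones(n), x])          # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # estimates of alpha_0, alpha_1
resid = y - X @ beta                          # residuals

# The two constraints satisfied by the residuals
print(resid.sum())        # ~0 up to floating-point error
print((resid * x).sum())  # ~0 up to floating-point error
```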

How to studentize

For this simple model, the design matrix is

:X = \left[ \begin{matrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{matrix} \right]

and the hat matrix "H" is the matrix of the orthogonal projection onto the column space of the design matrix:

:H = X(X^T X)^{-1} X^T.

The "leverage" "h""ii" is the "i"th diagonal entry in the hat matrix. The variance of the "i"th residual is

:\operatorname{var}(\widehat{\varepsilon}_i) = \sigma^2(1 - h_{ii}).

The corresponding studentized residual is then

:\frac{\widehat{\varepsilon}_i}{\widehat{\sigma}\sqrt{1 - h_{ii}}},

where \widehat{\sigma} is an appropriate estimate of σ.
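
As a concrete illustration of the computation described above, here is a minimal self-contained Python sketch (the data are invented; the explicit matrix inverse is fine at this scale, though a QR-based solution is numerically preferable in practice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example data
n, m = 20, 2
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])       # design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
h = np.diag(H)                             # leverages h_ii

resid = y - H @ y                          # residuals (H @ y gives fitted values)
sigma_hat = np.sqrt(resid @ resid / (n - m))  # estimate of sigma (see next section)

# Studentized residuals
t = resid / (sigma_hat * np.sqrt(1.0 - h))
print(t)
```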

Internal and external studentization

The estimate of σ² is

:\widehat{\sigma}^2 = \frac{1}{n - m} \sum_{j=1}^n \widehat{\varepsilon}_j^{\,2},

where "m" is the number of parameters in the model (2 in our example).But it is desirable to exclude the "i"th observation from the process of estimating the variance when one is considering whether the "i"th case may be an outlier. Consequently one may use the estimate

:\widehat{\sigma}_{(i)}^2 = \frac{1}{n - m - 1} \sum_{\substack{j = 1 \\ j \ne i}}^n \widehat{\varepsilon}_j^{\,2},

based on all but the "i"th case. If the latter estimate is used, "excluding" the "i"th case, then the residual is said to be "externally studentized"; if the former is used, "including" the "i"th case, then it is "internally studentized".
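
The following Python sketch computes both variants, following the two formulas above literally; it uses the fact that the sum of the squared residuals over "j" ≠ "i" equals the full residual sum of squares minus the "i"th squared residual, which avoids an explicit loop. The data are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example data
n, m = 20, 2
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
resid = y - H @ y
rss = resid @ resid                      # sum of all n squared residuals

# Internal studentization: one variance estimate from all n cases
sigma2_internal = rss / (n - m)
isr = resid / np.sqrt(sigma2_internal * (1.0 - h))

# External studentization: drop the i-th squared residual for case i
sigma2_external = (rss - resid**2) / (n - m - 1)   # vector, one entry per case
esr = resid / np.sqrt(sigma2_external * (1.0 - h))

print(isr)
print(esr)
```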

If the errors are independent and normally distributed with expected value 0 and variance σ², then the probability distribution of the "i"th externally studentized residual is a Student's t-distribution with "n" − "m" − 1 degrees of freedom, and can range from −∞ to +∞.

On the other hand, the internally studentized residuals are in the range 0 ± √(r.d.f.), where r.d.f. is the number of residual degrees of freedom, namely "n" − "m". If "i.s.r." represents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then

:\mathrm{i.s.r.}^2 = \mathrm{r.d.f.} \cdot \frac{t^2}{t^2 + \mathrm{r.d.f.} - 1},

where "t" is distributed as Student's t-distribution with r.d.f. − 1 degrees of freedom. In fact, this implies that i.s.r.2/r.d.f. follows the beta distribution "B"(1/2,(r.d.f. − 1)/2). When r.d.f. = 3, the internally studentized residuals are uniformly distributed between scriptstyle-sqrt{3} and scriptstyle+sqrt{3}.

If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals does not apply. In this case, the i.s.r.'s are all either +1 or −1, with a 50% chance of each.

The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all the i.s.r.'s of a particular experiment is 1.

References

* "Residuals and Influence in Regression", R. Dennis Cook, New York : Chapman and Hall, 1982.

