# Squared deviations


In probability theory and statistics, the variance of a random variable is defined as either the expected value (for a theoretical distribution) or the average (for actual experimental data) of the squared deviations from the mean. Computations for analysis of variance involve the partitioning of a sum of squared deviations. Understanding these computations is greatly helped by a detailed study of the statistical value:

: $\operatorname{E}(X^2).$

It is well known that for a random variable $X$ with mean $\mu$ and variance $\sigma^2$:

: $\sigma^2 = \operatorname{E}(X^2) - \mu^2$ (Mood & Graybill, *An Introduction to the Theory of Statistics*, McGraw-Hill)

Therefore

: $\operatorname{E}(X^2) = \sigma^2 + \mu^2.$
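As a quick numeric check of this identity, the following sketch uses Python's `fractions` module with a fair six-sided die as an assumed example distribution (the die is an illustration, not part of the derivation):

```python
from fractions import Fraction

# Fair six-sided die: faces 1..6, each with probability 1/6.
faces = range(1, 7)
p = Fraction(1, 6)

mu = sum(p * x for x in faces)        # E(X) = 7/2
e_x2 = sum(p * x * x for x in faces)  # E(X^2) = 91/6
sigma2 = e_x2 - mu**2                 # variance = 35/12

# The identity E(X^2) = sigma^2 + mu^2 holds exactly in rational arithmetic.
assert e_x2 == sigma2 + mu**2
print(mu, e_x2, sigma2)  # 7/2 91/6 35/12
```

Exact rational arithmetic avoids any floating-point tolerance in verifying the identity.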

From the above, the following are readily derived for a sample of $n$ independent observations:

: $\operatorname{E}\left(\sum X^2\right) = n\sigma^2 + n\mu^2$

: $\operatorname{E}\left(\left(\sum X\right)^2\right) = n\sigma^2 + n^2\mu^2$

## Sample variance

The sum of squared deviations needed to calculate variance (before deciding whether to divide by $n$ or $n - 1$) is most easily calculated as

: $S = \sum x^2 - \left(\sum x\right)^2/n$

From the two derived expectations above, the expected value of this sum is

: $\operatorname{E}(S) = n\sigma^2 + n\mu^2 - (n\sigma^2 + n^2\mu^2)/n$

which implies

: $\operatorname{E}(S) = (n - 1)\sigma^2.$

This effectively proves the use of the divisor $n - 1$ in the calculation of an unbiased sample estimate of $\sigma^2$: the estimator $S/(n - 1)$ has expectation exactly $\sigma^2$.
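The shortcut formula and the unbiased estimate can be sketched in Python; the sample `data` here is an arbitrary small illustration, and the result is checked against the definitional form and the standard library's `statistics.variance`:

```python
import statistics

# Arbitrary small sample for illustration.
data = [1, 2, 3, 4, 6]
n = len(data)

# Shortcut form: S = sum(x^2) - (sum x)^2 / n
s_shortcut = sum(x * x for x in data) - sum(data) ** 2 / n

# Definitional form: S = sum((x - mean)^2)
mean = sum(data) / n
s_direct = sum((x - mean) ** 2 for x in data)

# Both forms agree (up to floating-point rounding).
assert abs(s_shortcut - s_direct) < 1e-9

# Dividing by n - 1 gives the unbiased sample variance.
sample_var = s_shortcut / (n - 1)
assert abs(sample_var - statistics.variance(data)) < 1e-9
print(round(s_shortcut, 6), round(sample_var, 6))  # 14.8 3.7
```

Note that the shortcut form can lose precision for large values with small spread; the definitional form is numerically safer in that case.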

## Partition – analysis of variance

Suppose data are available for $k$ different treatment groups of sizes $n_i$, where $i$ runs from 1 to $k$, and let $n = \sum n_i$ be the total number of observations. Assume that the expected mean of each group is

: $\operatorname{E}(\mu_i) = \mu + T_i$

and that the variance of each treatment group is unchanged from the population variance $\sigma^2$.

Under the null hypothesis that the treatments have no effect, each of the $T_i$ will be zero.

It is now possible to calculate three sums of squares:

**Individual**

:$I = \sum x^2$

:$\operatorname{E}(I) = n\sigma^2 + n\mu^2$

**Treatments**

:$T = \sum_{i=1}^k \left(\sum x\right)^2 / n_i$

where the inner sum runs over the observations in group $i$.

:$\operatorname{E}(T) = k\sigma^2 + \sum_{i=1}^k n_i(\mu + T_i)^2$

:$\operatorname{E}(T) = k\sigma^2 + n\mu^2 + 2\mu \sum_{i=1}^k n_i T_i + \sum_{i=1}^k n_i T_i^2$

Under the null hypothesis that the treatments cause no differences and all the $T_i$ are zero, the expectation simplifies to

:$\operatorname{E}(T) = k\sigma^2 + n\mu^2.$

**Combination**

:$C = \left(\sum x\right)^2/n$

:$\operatorname{E}(C) = \sigma^2 + n\mu^2$

## Sums of squared deviations

Under the null hypothesis, the difference of any pair of $I$, $T$, and $C$ does not contain any dependency on $\mu$, only on $\sigma^2$:

:$\operatorname{E}(I - C) = (n - 1)\sigma^2$ (total squared deviations)

:$\operatorname{E}(T - C) = (k - 1)\sigma^2$ (treatment squared deviations)

:$\operatorname{E}(I - T) = (n - k)\sigma^2$ (residual squared deviations)

The constants $(n - 1)$, $(k - 1)$, and $(n - k)$ are normally referred to as the numbers of degrees of freedom.

## Example

In a very simple example, five observations arise from two treatments. The first treatment gives the three values 1, 2, and 3, and the second treatment gives the two values 4 and 6.

:$I = \frac{1^2}{1} + \frac{2^2}{1} + \frac{3^2}{1} + \frac{4^2}{1} + \frac{6^2}{1} = 66$

:$T = \frac{(1 + 2 + 3)^2}{3} + \frac{(4 + 6)^2}{2} = 12 + 50 = 62$

:$C = \frac{(1 + 2 + 3 + 4 + 6)^2}{5} = 256/5 = 51.2$

Giving:

: Total squared deviations = 66 − 51.2 = 14.8 with 4 degrees of freedom.

: Treatment squared deviations = 62 − 51.2 = 10.8 with 1 degree of freedom.

: Residual squared deviations = 66 − 62 = 4 with 3 degrees of freedom.
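The three sums of squares and their differences for this example can be computed directly; the following sketch reproduces the partition above:

```python
# One-way ANOVA sums of squares for the example data.
groups = [[1, 2, 3], [4, 6]]
data = [x for g in groups for x in g]
n, k = len(data), len(groups)

I = sum(x * x for x in data)                   # individual: 66
T = sum(sum(g) ** 2 / len(g) for g in groups)  # treatments: 12 + 50 = 62
C = sum(data) ** 2 / n                         # combination: 256/5 = 51.2

total = I - C      # 14.8, with n - 1 = 4 degrees of freedom
treatment = T - C  # 10.8, with k - 1 = 1 degree of freedom
residual = I - T   # 4.0,  with n - k = 3 degrees of freedom

# The partition: total = treatment + residual.
assert abs(total - (treatment + residual)) < 1e-9
print(round(total, 6), round(treatment, 6), round(residual, 6))
```

The final assertion checks the defining property of the partition: the total sum of squared deviations splits exactly into the treatment and residual components.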

## Two-way analysis of variance

The following hypothetical example gives the yields of 15 plants subject to two different environmental variations, and three different fertilisers.

## See also

* Variance decomposition
* Errors and residuals in statistics

## References

* Mood, A. M., & Graybill, F. A. *An Introduction to the Theory of Statistics*. McGraw-Hill.
