Lack-of-fit sum of squares

In statistics, a sum of squares due to lack of fit, or more tersely a lack-of-fit sum of squares, is one of the components of a partition of the sum of squares of residuals in an analysis of variance, used in the numerator of an F-test of the null hypothesis that a proposed model fits well.

Sketch of the idea

In order to have a lack-of-fit sum of squares, one observes more than one value of the response variable for each value of the set of predictor variables. For example, consider fitting a line

: y = \alpha x + \beta ,

by the method of least squares. One takes as estimates of "α" and "β" the values that minimize the sum of squares of residuals, i.e. the sum of squares of the differences between each observed "y"-value and the corresponding fitted "y"-value. To have a lack-of-fit sum of squares, one observes more than one "y"-value for each "x"-value. One then partitions the "sum of squares due to error", i.e. the sum of squares of residuals, into two components:

: sum of squares due to error = (sum of squares due to "pure" error) + (sum of squares due to lack of fit).

The sum of squares due to "pure" error is the sum of squares of the differences between each observed "y"-value and the average of all "y"-values corresponding to the same "x"-value.

The sum of squares due to lack of fit is the "weighted" sum of squares of differences between each average of "y"-values corresponding to the same "x"-value and the corresponding fitted "y"-value, the weight in each case being simply the number of observed "y"-values for that "x"-value. [Richard J. Brook and Gregory C. Arnold, "Applied Regression Analysis and Experimental Design", CRC Press, 1985, pages 48–49] [John Neter, Michael H. Kutner, Christopher J. Nachtsheim, William Wasserman, "Applied Linear Statistical Models", Fourth Edition, Irwin, 1996, pages 121–122]

: \begin{align}
& \quad \sum (\text{observed value} - \text{fitted value})^2 && \text{(error)} \\
& = \sum (\text{observed value} - \text{local average})^2 && \text{(pure error)} \\
& \quad {} + \sum \text{weight} \times (\text{local average} - \text{fitted value})^2. && \text{(lack of fit)}
\end{align}

In order that these two sums be equal, it is necessary that the vector whose components are "pure errors" and the vector of lack-of-fit components be orthogonal to each other, and one may check that they are orthogonal by doing some algebra.
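This partition can be illustrated numerically. The following Python sketch uses made-up data (the numbers and the use of NumPy are assumptions for illustration, not part of the article): it fits a least-squares line to observations with repeated "y"-values at each "x"-value and checks that the error sum of squares equals the pure-error part plus the lack-of-fit part.

```python
import numpy as np

# Hypothetical data: three observed y-values at each of four x-values.
data = {
    1.0: [1.1, 0.9, 1.3],
    2.0: [2.2, 1.8, 2.1],
    3.0: [2.8, 3.3, 3.0],
    4.0: [4.2, 3.9, 4.4],
}

# Fit y = alpha*x + beta by ordinary least squares on all observations.
x_all = np.concatenate([[xv] * len(ys) for xv, ys in data.items()])
y_all = np.concatenate(list(data.values()))
alpha_hat, beta_hat = np.polyfit(x_all, y_all, 1)

sse = pure_error = lack_of_fit = 0.0
for xv, ys in data.items():
    ys = np.asarray(ys)
    fitted = alpha_hat * xv + beta_hat   # fitted value at this x
    local_avg = ys.mean()                # average of the y-values at this x
    sse += np.sum((ys - fitted) ** 2)
    pure_error += np.sum((ys - local_avg) ** 2)
    lack_of_fit += len(ys) * (local_avg - fitted) ** 2

# The two totals agree up to floating-point rounding.
print(sse, pure_error + lack_of_fit)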

Mathematical details

Consider fitting a line:

: Y_{ij} = \alpha x_i + \beta + \varepsilon_{ij}, \qquad i = 1, \dots, n, \quad j = 1, \dots, n_i.

Let

: \widehat\alpha, \widehat\beta

be the least squares estimates of the unobservable parameters "α" and "β" based on the observed values of x_i and Y_ij.

Let

: \widehat Y_i = \widehat\alpha x_i + \widehat\beta

be the fitted values of the response variable. Then

: \widehat\varepsilon_{ij} = Y_{ij} - \widehat Y_i

are the residuals, which are observable estimates of the unobservable values of the error term ε_ij. Because of the nature of the method of least squares, the whole vector of residuals, with

: N = \sum_{i=1}^n n_i

scalar components, necessarily satisfies the two constraints

: \sum_{i=1}^n \sum_{j=1}^{n_i} \widehat\varepsilon_{ij} = 0 ,

: \sum_{i=1}^n \left( x_i \sum_{j=1}^{n_i} \widehat\varepsilon_{ij} \right) = 0.

It is thus constrained to lie in an (N − 2)-dimensional subspace of R^N, i.e. there are N − 2 "degrees of freedom for error".
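These two constraints can also be seen numerically. The sketch below (made-up data, assuming NumPy) fits a least-squares line and prints both constraint sums, which come out as zero up to rounding.

```python
import numpy as np

# Made-up data with two observations at each x-value.
x = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
y = np.array([1.2, 0.8, 2.1, 1.9, 3.2, 2.7, 3.9, 4.3])

alpha_hat, beta_hat = np.polyfit(x, y, 1)
resid = y - (alpha_hat * x + beta_hat)

print(resid.sum())        # first constraint: the residuals sum to zero
print((x * resid).sum())  # second constraint: the x-weighted residual sum is zero
```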

Now let

: \overline{Y}_{i\bullet} = \frac{1}{n_i} \sum_{j=1}^{n_i} Y_{ij}

be the average of all "Y"-values associated with a particular "x"-value.

We partition the sum of squares due to error into two components:

: \begin{align}
& \sum_{i=1}^n \sum_{j=1}^{n_i} \widehat\varepsilon_{ij}^{\,2} = \sum_{i=1}^n \sum_{j=1}^{n_i} \left( Y_{ij} - \widehat Y_i \right)^2 \\
& = \underbrace{ \sum_{i=1}^n \sum_{j=1}^{n_i} \left( Y_{ij} - \overline Y_{i\bullet} \right)^2 }_{\text{(sum of squares due to pure error)}} + \underbrace{ \sum_{i=1}^n n_i \left( \overline Y_{i\bullet} - \widehat Y_i \right)^2 }_{\text{(sum of squares due to lack of fit)}}.
\end{align}

Probability distributions

Sums of squares

Suppose the error terms ε_ij are independent and normally distributed with expected value 0 and variance σ². We treat x_i as constant rather than random. Then the response variables Y_ij are random only because the errors ε_ij are random.

It can be shown that if the straight-line model is correct, then the sum of squares due to error divided by the error variance,

: \frac{1}{\sigma^2} \sum_{i=1}^n \sum_{j=1}^{n_i} \widehat\varepsilon_{ij}^{\,2}

has a chi-square distribution with "N" − 2 degrees of freedom.

Moreover:

* The sum of squares due to pure error, divided by the error variance σ², has a chi-square distribution with "N" − "n" degrees of freedom;
* The sum of squares due to lack of fit, divided by the error variance σ², has a chi-square distribution with "n" − 2 degrees of freedom;
* The two sums of squares are probabilistically independent.
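These distributional claims can be checked by simulation. The following sketch (the parameter values, the seed, and the use of NumPy are assumptions for illustration) repeatedly simulates data from a true straight-line model and compares the average of each scaled sum of squares with the degrees of freedom stated above, since a chi-square variable has mean equal to its degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)
x_levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # n = 5 distinct x-values
reps = 4                                          # n_i = 4 observations at each x
alpha, beta, sigma = 2.0, 1.0, 0.5
n, N = len(x_levels), len(x_levels) * reps

x = np.repeat(x_levels, reps)
pe_scaled, lof_scaled = [], []
for _ in range(5000):
    y = alpha * x + beta + rng.normal(0.0, sigma, N)
    a_hat, b_hat = np.polyfit(x, y, 1)
    fitted = a_hat * x_levels + b_hat
    group_means = y.reshape(n, reps).mean(axis=1)
    pe = np.sum((y.reshape(n, reps) - group_means[:, None]) ** 2)
    lof = np.sum(reps * (group_means - fitted) ** 2)
    pe_scaled.append(pe / sigma**2)
    lof_scaled.append(lof / sigma**2)

print(np.mean(pe_scaled), N - n)   # both close to 15 (pure error: N - n df)
print(np.mean(lof_scaled), n - 2)  # both close to 3 (lack of fit: n - 2 df)
```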

The test statistic

It then follows that the statistic

: F = \frac{\left. \sum_{i=1}^n n_i \left( \overline Y_{i\bullet} - \widehat Y_i \right)^2 \right/ (n - 2)}{\left. \sum_{i=1}^n \sum_{j=1}^{n_i} \left( Y_{ij} - \overline Y_{i\bullet} \right)^2 \right/ (N - n)}

has an F-distribution with n − 2 numerator degrees of freedom and N − n denominator degrees of freedom, provided that the straight-line model is correct. If the model is wrong, then the probability distribution of the denominator is still as stated above, and the numerator and denominator are still independent. But the numerator then has a non-central chi-square distribution, and consequently the quotient as a whole has a non-central F-distribution.

One uses this F-statistic to test the null hypothesis that the straight-line model is correct. Since the non-central F-distribution is stochastically larger than the (central) F-distribution, one rejects the null hypothesis if the F-statistic is too large. How large is too large, i.e. the critical value, depends on the significance level of the test and is a percentage point of the F-distribution.
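As a concrete sketch of the test itself, the snippet below computes the statistic for the same kind of made-up replicated data used earlier and compares it with an F critical value and p-value via SciPy (the data, the 5% level, and the use of scipy.stats.f are assumptions for illustration).

```python
import numpy as np
from scipy.stats import f as f_dist

x_levels = np.array([1.0, 2.0, 3.0, 4.0])
reps = 3
x = np.repeat(x_levels, reps)
y = np.array([1.1, 0.9, 1.3, 2.2, 1.8, 2.1, 2.8, 3.3, 3.0, 4.2, 3.9, 4.4])

n, N = len(x_levels), len(x)
a_hat, b_hat = np.polyfit(x, y, 1)
fitted = a_hat * x_levels + b_hat
group_means = y.reshape(n, reps).mean(axis=1)

lack_of_fit = np.sum(reps * (group_means - fitted) ** 2)
pure_error = np.sum((y.reshape(n, reps) - group_means[:, None]) ** 2)

F = (lack_of_fit / (n - 2)) / (pure_error / (N - n))
p_value = f_dist.sf(F, n - 2, N - n)        # upper-tail probability
critical = f_dist.ppf(0.95, n - 2, N - n)   # 5%-level critical value
print(F, p_value, F > critical)             # reject if F exceeds the critical value
```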

The assumptions of normal distribution of errors and statistical independence can be shown to entail that this lack-of-fit test is the likelihood-ratio test of this null hypothesis.

See also

* F-test
* Analysis of variance
* Linear regression
