Lack-of-fit sum of squares
In statistics, a sum of squares due to lack of fit, or more tersely a lack-of-fit sum of squares, is one of the components of a partition of the sum of squares in an analysis of variance. It is used in the numerator of an F-test of the null hypothesis that a proposed model fits well.
Sketch of the idea
In order to have a lack-of-fit sum of squares, one observes more than one value of the response variable for each value of the set of predictor variables. For example, consider fitting a line

: y = αx + β

by the method of least squares. One takes as estimates of α and β the values that minimize the sum of squares of residuals, i.e. the sum of squares of the differences between the observed y-value and the fitted y-value. To have a lack-of-fit sum of squares, one observes more than one y-value for each x-value. One then partitions the "sum of squares due to error", i.e. the sum of squares of residuals, into two components:
: sum of squares due to error = (sum of squares due to "pure" error) + (sum of squares due to lack of fit).
The sum of squares due to "pure" error is the sum of squares of the differences between each observed y-value and the average of all y-values corresponding to the same x-value.
The sum of squares due to lack of fit is the weighted sum of squares of the differences between each average of y-values corresponding to the same x-value and the corresponding fitted y-value, the weight in each case being simply the number of observed y-values for that x-value. [Richard J. Brook and Gregory C. Arnold, "Applied Regression Analysis and Experimental Design", CRC Press, 1985, pages 48–49] [John Neter, Michael H. Kutner, Christopher J. Nachtsheim, William Wasserman, "Applied Linear Statistical Models", Fourth Edition, Irwin, 1996, pages 121–122]
For this partition to hold, it is necessary that the vector whose components are the pure errors and the vector of lack-of-fit components be orthogonal to each other, and one may verify their orthogonality with some algebra.
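This partition and the orthogonality behind it can be checked numerically. The following sketch uses invented data (the x-values, replicate counts, and y-values are illustrative, not from the article): it fits a least-squares line to replicated observations, splits each residual into a pure-error part and a lack-of-fit part, and confirms that the two sums of squares add up to the sum of squares due to error.

```python
import numpy as np

# Replicated data: several y observations at each distinct x-value
# (all values invented for illustration).
x_levels = np.array([1.0, 2.0, 3.0, 4.0])
y_groups = [np.array([1.1, 0.9, 1.3]),
            np.array([2.2, 1.8]),
            np.array([2.7, 3.1, 2.9]),
            np.array([4.4, 3.8])]

# Flatten to (x_ij, Y_ij) pairs and fit the least-squares line.
x = np.concatenate([np.full(len(g), xv) for xv, g in zip(x_levels, y_groups)])
y = np.concatenate(y_groups)
slope, intercept = np.polyfit(x, y, 1)   # coefficients, highest degree first
fitted = slope * x + intercept

# Group means: average of all y-values sharing the same x-value.
means = np.concatenate([np.full(len(g), g.mean()) for g in y_groups])

pure_error = y - means         # deviation within each x-group
lack_of_fit = means - fitted   # deviation of group mean from the line

sse = np.sum((y - fitted) ** 2)
sspe = np.sum(pure_error ** 2)
sslf = np.sum(lack_of_fit ** 2)

print(np.isclose(sse, sspe + sslf))                # partition holds: True
print(np.isclose(pure_error @ lack_of_fit, 0.0))   # orthogonality: True
```

The orthogonality is what makes the Pythagorean-style decomposition exact: within each x-group the pure errors sum to zero, while the lack-of-fit component is constant, so their inner product vanishes.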
Mathematical details

Consider fitting a line with one predictor variable. Define i as an index of each of the n distinct x-values, j as an index of the response-variable observations for a given x-value, and n_i as the number of y-values associated with the i-th x-value. The value of each observation of the response variable can be written

: Y_ij = α x_i + β + ε_ij,  i = 1, …, n,  j = 1, …, n_i.

Let

: α̂ and β̂

be the least squares estimates of the unobservable parameters α and β based on the observed values of x_i and Y_ij. Let

: Ŷ_i = α̂ x_i + β̂

be the fitted values of the response variable. Then

: ε̂_ij = Y_ij − Ŷ_i

are the residuals, which are observable estimates of the unobservable values of the error term ε_ij. Because of the nature of the method of least squares, the whole vector of residuals, with

: N = Σ_{i=1}^{n} n_i

scalar components, necessarily satisfies the two constraints

: Σ_{i=1}^{n} Σ_{j=1}^{n_i} ε̂_ij = 0,
: Σ_{i=1}^{n} x_i ( Σ_{j=1}^{n_i} ε̂_ij ) = 0.

It is thus constrained to lie in an (N − 2)-dimensional subspace of R^N, i.e. there are N − 2 degrees of freedom for error.
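The two constraints are easy to verify numerically. The sketch below (the data-generating line, noise level, and seed are invented for illustration) fits a least-squares line and checks that the residual vector is orthogonal to the constant vector and to the vector of x-values:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 3)          # N = 15 observations
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)     # illustrative true line

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

print(np.isclose(residuals.sum(), 0.0))   # first constraint: True
print(np.isclose(x @ residuals, 0.0))     # second constraint: True
```

These two linear constraints are exactly what removes 2 from the N dimensions, leaving N − 2 degrees of freedom for error.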
Let

: Ȳ_i

be the average of all Y-values associated with the i-th x-value.

We partition the sum of squares due to error into two components:

: Σ_i Σ_j ε̂_ij² = Σ_i Σ_j ( Y_ij − Ȳ_i )² + Σ_i n_i ( Ȳ_i − Ŷ_i )²,

where the first term on the right is the sum of squares due to pure error and the second is the sum of squares due to lack of fit.
Sums of squares
Suppose the error terms ε_ij are independent and normally distributed with expected value 0 and variance σ². We treat x_i as constant rather than random. Then the response variables Y_ij are random only because the errors ε_ij are random.
It can be shown to follow that if the straight-line model is correct, then the sum of squares due to error divided by the error variance,

: (1/σ²) Σ_i Σ_j ε̂_ij²,

has a chi-square distribution with N − 2 degrees of freedom. Moreover:
* The sum of squares due to pure error, divided by the error variance σ², has a chi-square distribution with N − n degrees of freedom;
* The sum of squares due to lack of fit, divided by the error variance σ², has a chi-square distribution with n − 2 degrees of freedom;
* The two sums of squares are probabilistically independent.
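A Monte Carlo sketch can make these degree-of-freedom claims concrete: a chi-square variable with k degrees of freedom has mean k, so under a correctly specified straight-line model the scaled sums of squares should average roughly N − n and n − 2. The setup below (sample sizes, seed, true line, and noise level) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x_levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # n = 5 distinct x-values
reps = 4                                          # n_i = 4, so N = 20
sigma = 1.0
x = np.repeat(x_levels, reps)
n, N = x_levels.size, x.size

sspe_scaled, sslf_scaled = [], []
for _ in range(5000):
    # Simulate data from a true straight line (illustrative parameters).
    y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, N)
    slope, intercept = np.polyfit(x, y, 1)
    fitted = slope * x + intercept
    # Group means: x is repeated contiguously, so reshape groups by level.
    means = np.repeat(y.reshape(n, reps).mean(axis=1), reps)
    sspe_scaled.append(np.sum((y - means) ** 2) / sigma**2)
    sslf_scaled.append(np.sum((means - fitted) ** 2) / sigma**2)

print(np.mean(sspe_scaled))   # close to N - n = 15
print(np.mean(sslf_scaled))   # close to n - 2 = 3
```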
The test statistic
It then follows that the statistic

: F = ( sum of squares due to lack of fit / (n − 2) ) / ( sum of squares due to pure error / (N − n) )

has an F-distribution with the corresponding number of degrees of freedom in the numerator and the denominator, provided that the straight-line model is correct. If the model is wrong, then the probability distribution of the denominator is still as stated above, and the numerator and denominator are still independent. But the numerator then has a non-central chi-square distribution, and consequently the quotient as a whole has a non-central F-distribution.
One uses this F-statistic to test the null hypothesis that the straight-line model is right. Since the non-central F-distribution is stochastically larger than the (central) F-distribution, one rejects the null hypothesis if the F-statistic is too big. How big is too big (the critical value) depends on the level of the test and is a percentage point of the F-distribution.
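As a worked illustration of the whole test, the sketch below generates data from a curved relationship (all data-generating choices invented for illustration) and fits a straight line, so the lack-of-fit F-statistic should land far in the upper tail; the 5% critical value of about 3.29 for (3, 15) degrees of freedom is taken from standard F tables.

```python
import numpy as np

x_levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
reps = 4
x = np.repeat(x_levels, reps)
rng = np.random.default_rng(2)
y = x**2 + rng.normal(0.0, 0.5, x.size)   # curved truth, linear fit below

slope, intercept = np.polyfit(x, y, 1)
fitted = slope * x + intercept
n, N = x_levels.size, x.size
means = np.repeat(y.reshape(n, reps).mean(axis=1), reps)

sspe = np.sum((y - means) ** 2)            # pure-error sum of squares
sslf = np.sum((means - fitted) ** 2)       # lack-of-fit sum of squares
F = (sslf / (n - 2)) / (sspe / (N - n))    # df = (n - 2, N - n) = (3, 15)

# Compare with the upper 5% point of F(3, 15), about 3.29 per standard tables.
print(F > 3.29)   # True: the straight-line model is rejected
```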
The assumptions of normal distribution of errors and statistical independence can be shown to entail that this lack-of-fit test is the likelihood-ratio test of this null hypothesis.
See also: Analysis of variance
Wikimedia Foundation. 2010.