Normality test

In statistics, normality tests are used to determine whether a data set is well modeled by a normal distribution, or to compute how likely it is that a random variable underlying the data set is normally distributed.

More precisely, they are a form of model selection, and can be interpreted several ways, depending on one's interpretations of probability:

  • In descriptive statistics terms, one measures a goodness of fit of a normal model to the data – if the fit is poor then the data is not well modeled in that respect by a normal distribution, without making a judgment on any underlying variable.
  • In frequentist statistical hypothesis testing, the data are tested against the null hypothesis that they are normally distributed.
  • In Bayesian statistics, one does not "test normality" per se. Instead, one computes the likelihood that the data come from a normal distribution with given parameters μ,σ (for all μ,σ) and compares it with the likelihood that the data come from the other distributions under consideration. Most simply this is done with Bayes factors (which give the relative likelihood of seeing the data under the different models); more finely, one takes a prior distribution over the possible models and parameters and computes a posterior distribution from the computed likelihoods.
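
As a minimal sketch of the "most simply" case above, one can compare the likelihood of the data under a normal model with its likelihood under one alternative, here a Laplace distribution, with each model fitted by maximum likelihood. (With point estimates rather than priors this degenerates to a likelihood ratio, not a full Bayes factor; the data set is made up for illustration.)

```python
import math
from statistics import mean, median, pstdev

def normal_loglik(xs):
    # Maximum-likelihood fit: mu = sample mean, sigma = uncorrected SD
    mu, sigma = mean(xs), pstdev(xs)
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def laplace_loglik(xs):
    # Maximum-likelihood fit: location = median, scale = mean |deviation|
    loc = median(xs)
    b = sum(abs(x - loc) for x in xs) / len(xs)
    return sum(-math.log(2 * b) - abs(x - loc) / b for x in xs)

data = [2.1, 1.9, 2.4, 2.0, 1.8, 2.2, 2.3, 1.7, 2.0, 2.1]
log_ratio = normal_loglik(data) - laplace_loglik(data)
# Positive values favour the normal model over the Laplace alternative
print(f"log likelihood ratio (normal vs Laplace): {log_ratio:.3f}")
```

A proper Bayes factor would integrate each likelihood over a prior on the parameters rather than plugging in point estimates.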

Graphical methods

An informal approach to testing normality is to compare a histogram of the sample data to a normal probability curve. The empirical distribution of the data (the histogram) should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small. In this case one might proceed by regressing the data against the quantiles of a normal distribution with the same mean and variance as the sample. Lack of fit to the regression line suggests a departure from normality.

A graphical tool for assessing normality is the normal probability plot, a quantile-quantile plot (QQ plot) of the standardized data against the standard normal distribution. Here the correlation between the sample data and normal quantiles (a measure of the goodness of fit) measures how well the data is modeled by a normal distribution. For normal data the points plotted in the QQ plot should fall approximately on a straight line, indicating high positive correlation. These plots are easy to interpret and also have the benefit that outliers are easily identified.

Back-of-the-envelope test

A simple back-of-the-envelope test takes the sample maximum and minimum and computes their z-score, or more properly t-statistic (number of sample standard deviations that a sample is above or below the sample mean), and compares it to the 68–95–99.7 rule: if one has a 3σ event (properly, a 3s event) and significantly fewer than 300 samples, or a 4s event and significantly fewer than 15,000 samples, then a normal distribution significantly understates the maximum magnitude of deviations in the sample data.

This test is useful in cases where one faces kurtosis risk – where large deviations matter – and has the benefits that it is very easy to compute and to communicate: non-statisticians can easily grasp that "6σ events don’t happen in normal distributions".
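
The rule of thumb above can be sketched as follows (a hypothetical helper, with thresholds taken directly from the 68–95–99.7 rule; the data are synthetic, with one deliberate outlier appended):

```python
import random
from statistics import mean, stdev

def extreme_t_stats(xs):
    """How many sample standard deviations the sample minimum and
    maximum lie below/above the sample mean."""
    m, s = mean(xs), stdev(xs)
    return (m - min(xs)) / s, (max(xs) - m) / s

def understates_tails(xs):
    """Back-of-the-envelope 68-95-99.7 check: a 3s extreme calls for
    on the order of 300 samples, a 4s extreme on the order of 15,000."""
    t = max(extreme_t_stats(xs))
    n = len(xs)
    return (t >= 4 and n < 15_000) or (t >= 3 and n < 300)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(100)] + [10.0]
print(f"largest extreme: {max(extreme_t_stats(data)):.1f}s, n = {len(data)}")
print(understates_tails(data))
```

Note that in very small samples a single outlier also inflates the sample standard deviation itself, which bounds how large the t-statistic can get; the check is most informative for moderate-to-large samples.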

Frequentist tests

Tests of univariate normality include D'Agostino's K-squared test, the Jarque–Bera test, the Anderson–Darling test, the Cramér–von Mises criterion, the Lilliefors test for normality (itself an adaptation of the Kolmogorov–Smirnov test), the Shapiro–Wilk test, Pearson's chi-squared test, and the Shapiro–Francia test. Some published works recommend the Jarque–Bera test.[1][2]

Historically, the third and fourth standardized moments (skewness and kurtosis) were some of the earliest tests for normality. Mardia's multivariate skewness and kurtosis tests generalize the moment tests to the multivariate case.[3] Other early test statistics include the ratio of the mean absolute deviation to the standard deviation and of the range to the standard deviation.[4]
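
Since the Jarque–Bera statistic mentioned above is built directly from these two moments, it makes a compact illustration. The following is a sketch, not a library-grade implementation; under normality the statistic is asymptotically chi-squared with 2 degrees of freedom, with a 5% critical value of about 5.99:

```python
import random
from statistics import mean

def jarque_bera(xs):
    """Jarque-Bera statistic  n/6 * (S^2 + (K - 3)^2 / 4),  where S and K
    are the sample skewness and kurtosis (the third and fourth
    standardized moments)."""
    n = len(xs)
    m = mean(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

random.seed(1)
normal_ish = [random.gauss(0, 1) for _ in range(500)]
skewed = [random.expovariate(1) for _ in range(500)]
# Exponential data are strongly skewed and heavy-tailed, so their
# statistic should dwarf that of the Gaussian sample
print(jarque_bera(normal_ish), jarque_bera(skewed))
```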

More recent tests of normality include the energy test[5] (Székely and Rizzo) and tests based on the empirical characteristic function (ecf), e.g. Epps and Pulley,[6] Henze–Zirkler,[7] and the BHEP tests.[8] The energy and ecf tests are powerful tests that apply to testing univariate or multivariate normality and are statistically consistent against general alternatives.

Bayesian tests

Kullback–Leibler distances between the whole posterior distributions of the slope and variance do not indicate non-normality. However, the ratio of the expectations of these posteriors, and the expectation of the ratios, give results similar to the Shapiro–Wilk statistic, except for very small samples when non-informative priors are used.[9]

Spiegelhalter suggests using Bayes factors to compare normality with a different class of distributional alternatives.[10] This approach has been extended by Farrell and Rogers-Stewart.[11]

Applications

One application of normality tests is to the residuals from a linear regression model. If they are not normally distributed, the residuals should not be used in Z tests or in any other tests derived from the normal distribution, such as t tests, F tests and chi-squared tests. If the residuals are not normally distributed, then the dependent variable or at least one explanatory variable may have the wrong functional form, or important variables may be missing, etc. Correcting one or more of these systematic errors may produce residuals that are normally distributed.
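
A hypothetical end-to-end sketch of this application: fit a simple least-squares line on synthetic data, then inspect the standardized third and fourth moments of the residuals (roughly 0 and 3 for normal data; a full analysis would apply one of the formal tests above):

```python
import random
from statistics import mean

def ols_residuals(xs, ys):
    """Residuals of a simple least-squares fit y = a + b*x."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(xs, ys)]

def standardized_moments(xs):
    """(skewness, kurtosis); roughly (0, 3) for normal data."""
    n, m = len(xs), mean(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

random.seed(2)
xs = [i / 10 for i in range(200)]
ys = [1.5 + 0.8 * x + random.gauss(0, 0.3) for x in xs]
resid = ols_residuals(xs, ys)
skew, kurt = standardized_moments(resid)
print(f"residual skewness {skew:.2f}, kurtosis {kurt:.2f}")
```

If the noise were instead skewed or heavy-tailed (for example, exponential), the residual moments would drift away from (0, 3), flagging the misspecification.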

Notes

  1. ^ Judge, George G.; Griffiths, W. E.; Hill, R. Carter; Lütkepohl, Helmut; Lee, T. (1988) Introduction to the Theory and Practice of Econometrics, Second Edition, 890–892, Wiley. ISBN 0471082775
  2. ^ Gujarati, Damodar N. (2002) Basic Econometrics, Fourth Edition, 147–148, McGraw Hill. ISBN 0071230173
  3. ^ Mardia, K. V. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika 57, 519–530.
  4. ^ Filliben, J. J. (February 1975). "The Probability Plot Correlation Coefficient Test for Normality". Technometrics (American Society for Quality) 17 (1): 111–117. doi:10.2307/1268008. JSTOR 1268008. 
  5. ^ Székely, G. J. and Rizzo, M. L. (2005) A new test for multivariate normality, Journal of Multivariate Analysis 93, 58–80.
  6. ^ Epps, T. W., and Pulley, L. B. (1983). A test for normality based on the empirical characteristic function. Biometrika 70, 723–726.
  7. ^ Henze, N., and Zirkler, B. (1990). A class of invariant and consistent tests for multivariate normality. Communications in Statistics: Theory and Methods 19, 3595–3617.
  8. ^ Henze, N., and Wagner, T. (1997). A new approach to the BHEP tests for multivariate normality. Journal of Multivariate Analysis 62, 1–23.
  9. ^ Young, K. D. S. (1993), "Bayesian diagnostics for checking assumptions of normality". Journal of Statistical Computation and Simulation, 47 (3–4), 167–180.
  10. ^ Spiegelhalter, D.J. (1980). An omnibus test for normality for small samples. Biometrika, 67, 493–496. doi:10.1093/biomet/67.2.493
  11. ^ Farrell, P.J., Rogers-Stewart, K. (2006) "Comprehensive study of tests for normality and symmetry: extending the Spiegelhalter test". Journal of Statistical Computation and Simulation, 76(9), 803 – 816. doi:10.1080/10629360500109023

Wikimedia Foundation. 2010.
