 Wold's theorem

This article is about the theorem as used in time series analysis. For an abstract mathematical statement, see Wold decomposition.
In statistics, Wold's decomposition or the Wold representation theorem (not to be confused with the Wold theorem that is the discrete-time analog of the Wiener–Khinchine theorem), named after Herman Wold, says that every covariance-stationary time series Y_{t} can be written as an infinite moving average (MA(∞)) process of its innovation process. Such a formulation is known as a moving average representation of the time series, not to be confused with a simple running mean of the data series.
Formally,

 Y_{t} = Σ_{j=0}^{∞} b_{j} ε_{t−j} + η_{t}

where:

 Y_{t} is the time series being considered,

 ε_{t} is an uncorrelated sequence which is the innovation process of Y_{t} – that is, a white noise process that is input to the linear filter {b_{j}},

 b_{j} is the possibly infinite sequence of moving average weights (coefficients or parameters),

 η_{t} is a deterministic component, which is zero in the absence of trends in Y_{t}.
Note that the moving average coefficients have these properties:
 1. Stable, that is, absolutely summable: Σ_{j=0}^{∞} |b_{j}| < ∞
 2. Causal (i.e. there are no terms with j < 0)
 3. Minimum delay
 4. Constant (b_{j} independent of t)
 5. It is conventional to define b_{0} = 1
This theorem can be considered as an existence theorem: any stationary process has this seemingly special representation. Not only is the existence of such a simple linear and exact representation remarkable, but even more so is the special nature of the moving average model. Imagine creating a process that is a moving average but does not satisfy properties 1–4. For example, the coefficients b_{j} could define an acausal and non-minimum-delay model. Nevertheless, the theorem assures the existence of a causal minimum delay moving average that exactly represents this process. How this works out for causality and the minimum delay property is discussed in Scargle (1981), which also presents an extension of the Wold decomposition.
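As a minimal illustrative sketch (not from the article, and with parameter values chosen purely for the example): a causal MA(1) model with coefficient 2 is non-minimum-delay (noninvertible), yet it has exactly the same autocovariances as a minimum-delay MA(1) with coefficient 1/2 and a rescaled noise variance – the minimum delay representation guaranteed by the theorem.

```python
# Second-order (autocovariance) structure of an MA(1) process
# y_t = e_t + theta * e_{t-1}, with noise variance s2:
#   gamma(0) = s2 * (1 + theta**2),  gamma(1) = s2 * theta
def ma1_acov(theta, s2):
    return (s2 * (1 + theta ** 2), s2 * theta)

# Non-minimum-delay (noninvertible) model: theta = 2,   noise variance 1
# Minimum-delay (invertible) model:       theta = 1/2, noise variance 4
assert ma1_acov(2.0, 1.0) == ma1_acov(0.5, 4.0)  # identical second-order structure
```

The two models are indistinguishable from their autocovariances alone; the theorem singles out the minimum-delay one as the Wold representation.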
The usefulness of the Wold theorem is that it allows the dynamic evolution of a variable Y_{t} to be approximated by a linear model. If the innovations ε_{t} are independent, then the linear model is the only possible representation relating the observed value of Y_{t} to its past evolution. However, when ε_{t} is merely an uncorrelated but not independent sequence, then the linear model exists but it is not the only representation of the dynamic dependence of the series. In this latter case, it is possible that the linear model may not be very useful, and there would be a nonlinear model relating the observed value of Y_{t} to its past evolution. However, in practical time series analysis, it is often the case that only linear predictors are considered, partly on the grounds of simplicity, in which case the Wold decomposition is directly relevant.
The Wold representation depends on an infinite number of parameters, although in practice they usually decay rapidly. The autoregressive model is an alternative that may have only a few coefficients if the corresponding moving average has many. These two models can be combined into an autoregressive–moving-average (ARMA) model, or an autoregressive integrated moving average (ARIMA) model if non-stationarity is involved. See Scargle (1981) and references therein.
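The trade-off between the two parameterizations can be sketched numerically. In this hypothetical example (coefficient values are assumptions for illustration), an AR(1) process y_t = φ·y_{t−1} + ε_t, described by a single coefficient φ, has the Wold/MA(∞) representation y_t = Σ_j φ^j ε_{t−j}, whose infinitely many coefficients decay rapidly and can be truncated:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n, J = 0.6, 10_000, 50          # phi**J is negligibly small
eps = rng.standard_normal(n)         # white-noise innovations

# AR(1) built recursively from a zero initial condition
y_ar = np.zeros(n)
for t in range(1, n):
    y_ar[t] = phi * y_ar[t - 1] + eps[t]

# Truncated Wold representation: filter innovations with b_j = phi**j
b = phi ** np.arange(J)
y_ma = np.convolve(eps, b)[:n]

# Once the truncation error has decayed, the two constructions agree
assert np.allclose(y_ar[J:], y_ma[J:])
```

One autoregressive coefficient reproduces, to numerical precision, a moving average with fifty retained terms – the situation in which the AR (or ARMA) form is the more economical description.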
References
 Anderson, T. W. (1971) The Statistical Analysis of Time Series. Wiley.
 Wold, H. (1954) A Study in the Analysis of Stationary Time Series, Second revised edition, with an Appendix on "Recent Developments in Time Series Analysis" by Peter Whittle. Almqvist and Wiksell Book Co., Uppsala.
 Scargle, J. D. (1981) Studies in astronomical time series analysis. I – Modeling random processes in the time domain. Astrophysical Journal Supplement Series, 45, pp. 1–71.