Partial correlation

In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.

Formal definition

Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, …, Zn}, written ρXY·Z, is the correlation between the residuals RX and RY resulting from the linear regression of X on Z and of Y on Z, respectively. Equivalently, the first-order partial correlation is the difference between a correlation and the product of the removable correlations, divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation to joint variance through correlation, are discussed in Guilford and Fruchter (1973, pp. 344–345).

Computation

Using linear regression

A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems, get the residuals, and calculate the correlation between the residuals. Writing xi, yi and zi for i.i.d. samples from some joint probability distribution over X, Y and Z, solving the linear regression problems amounts to finding the n-dimensional vectors

\mathbf{w}_X^* = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^N  (x_i - \langle\mathbf{w}, \mathbf{z}_i \rangle)^2 \right\}
\mathbf{w}_Y^* = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^N  (y_i - \langle\mathbf{w}, \mathbf{z}_i \rangle)^2 \right\}

with N being the number of samples and \langle\mathbf{v},\mathbf{w} \rangle the scalar product between the vectors v and w. Note that in some implementations the regression includes a constant term, so each \mathbf{z}_i is augmented with an entry equal to 1 (equivalently, the design matrix formed by the \mathbf{z}_i gains an additional column of ones).

The residuals are then

r_{X,i} = x_i - \langle\mathbf{w}_X^*,\mathbf{z}_i \rangle
r_{Y,i} = y_i - \langle\mathbf{w}_Y^*,\mathbf{z}_i \rangle

and the sample partial correlation is

\hat{\rho}_{XY\cdot\mathbf{Z}}=\frac{N\sum_{i=1}^N r_{X,i}r_{Y,i}-\sum_{i=1}^N r_{X,i}\sum_{i=1}^N r_{Y,i}}
{\sqrt{N\sum_{i=1}^N r_{X,i}^2-\left(\sum_{i=1}^N r_{X,i}\right)^2}~\sqrt{N\sum_{i=1}^N r_{Y,i}^2-\left(\sum_{i=1}^N r_{Y,i}\right)^2}}.
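
For concreteness, here is a minimal sketch of this regression-based computation, assuming NumPy is available; the function name partial_corr_via_regression and the synthetic data are illustrative, not part of the original description:

import numpy as np

def partial_corr_via_regression(x, y, Z):
    """Sample partial correlation of x and y given the columns of Z."""
    Z = np.column_stack([np.ones(len(x)), Z])   # include a constant term
    w_x = np.linalg.lstsq(Z, x, rcond=None)[0]  # least-squares coefficients w*_X
    w_y = np.linalg.lstsq(Z, y, rcond=None)[0]  # least-squares coefficients w*_Y
    r_x = x - Z @ w_x                           # residuals r_X
    r_y = y - Z @ w_y                           # residuals r_Y
    return np.corrcoef(r_x, r_y)[0, 1]          # correlation of the residuals

# Synthetic example: x and y are related only through z, so their partial
# correlation given z should be close to zero.
rng = np.random.default_rng(0)
z = rng.normal(size=500)
x = 2.0 * z + rng.normal(size=500)
y = -1.5 * z + rng.normal(size=500)
print(partial_corr_via_regression(x, y, z.reshape(-1, 1)))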

Using recursive formula

Solving the linear regression problems can be computationally expensive. Instead, the nth-order partial correlation (i.e., with |Z| = n) can be computed from three (n − 1)th-order partial correlations. The zeroth-order partial correlation ρXY·Ø is defined to be the regular correlation coefficient ρXY.

It holds, for any Z_0 \in \mathbf{Z}:

\rho_{XY\cdot \mathbf{Z} } =
        \frac{\rho_{XY\cdot\mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}\rho_{Z_0Y\cdot\mathbf{Z}\setminus\{Z_0\}}}
             {\sqrt{1-\rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^2} \sqrt{1-\rho_{Z_0Y\cdot\mathbf{Z}\setminus\{Z_0\}}^2}}.

Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping subproblems property, such that using dynamic programming or simply caching the results of the recursive calls yields a complexity of \mathcal{O}(n^3).

Note in the case where Z is a single variable, this reduces to:

\rho_{XY\cdot Z } =
        \frac{\rho_{XY} - \rho_{XZ}\rho_{ZY}}
             {\sqrt{1-\rho_{XZ}^2} \sqrt{1-\rho_{ZY}^2}}.
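
The recursion can be memoised so that each intermediate partial correlation is computed only once. A minimal sketch, assuming the zeroth-order correlations are supplied as a matrix corr indexed by variable position; the function names are illustrative:

from functools import lru_cache
import numpy as np

def make_partial_corr(corr):
    """Return rho(i, j, controls): the partial correlation of variables i and j
    given the variables listed in the tuple `controls`."""
    @lru_cache(maxsize=None)
    def rho(i, j, controls):
        if not controls:                       # zeroth order: ordinary correlation
            return corr[i][j]
        z0, rest = controls[0], controls[1:]   # peel off one controlling variable
        r_xy = rho(i, j, rest)
        r_xz = rho(i, z0, rest)
        r_zy = rho(z0, j, rest)
        return (r_xy - r_xz * r_zy) / np.sqrt((1 - r_xz**2) * (1 - r_zy**2))
    return rho

corr = np.array([[1.0, 0.5, 0.3, 0.2],
                 [0.5, 1.0, 0.4, 0.1],
                 [0.3, 0.4, 1.0, 0.3],
                 [0.2, 0.1, 0.3, 1.0]])
rho = make_partial_corr(corr)
print(rho(0, 1, (2, 3)))   # partial correlation of variables 0 and 1 given 2 and 3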

Using matrix inversion

Another approach computes, in \mathcal{O}(n^3) time, all partial correlations between any two variables Xi and Xj of a set V of cardinality n, given all others, i.e., \mathbf{V} \setminus \{X_i,X_j\}, provided the correlation matrix (or alternatively the covariance matrix) Ω = (ωij), where ωij = ρXiXj, is invertible. If we define P = Ω−1, we have:

\rho_{X_iX_j\cdot \mathbf{V} \setminus \{X_i,X_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}p_{jj}}}.
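
A minimal sketch of this matrix-inversion approach, assuming an invertible correlation matrix; the function name partial_corr_from_precision is illustrative:

import numpy as np

def partial_corr_from_precision(omega):
    """All pairwise partial correlations, each given all remaining variables."""
    p = np.linalg.inv(omega)          # precision matrix P = Omega^{-1}
    d = np.sqrt(np.diag(p))
    rho = -p / np.outer(d, d)         # rho_ij = -p_ij / sqrt(p_ii * p_jj)
    np.fill_diagonal(rho, 1.0)        # diagonal set to 1 by convention
    return rho

omega = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
print(partial_corr_from_precision(omega))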

Interpretation

Geometrical

[Figure: geometrical interpretation of partial correlation]

Let three variables X, Y, Z (where X is the independent variable, Y the dependent variable, and Z the "control" or "extra" variable) be chosen from a joint probability distribution over n variables V. Further, let vi, 1 ≤ i ≤ N, be N n-dimensional i.i.d. samples taken from the joint probability distribution over V. We then consider the N-dimensional vectors x (formed by the successive values of X over the samples), y (formed by the values of Y) and z (formed by the values of Z).

It can be shown that the residuals RX coming from the linear regression of X using Z, if also considered as an N-dimensional vector rX, have a zero scalar product with the vector z generated by Z. This means that the residuals vector lives on a hyperplane Sz that is perpendicular to z.

The same also applies to the residuals RY generating a vector rY. The desired partial correlation is then the cosine of the angle φ between the projections rX and rY of x and y, respectively, onto the hyperplane perpendicular to z.[1]
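
Both the orthogonality and the cosine statements can be checked numerically. A small sketch, assuming the regression is performed on z alone (no constant term); the variable names and synthetic data are illustrative:

import numpy as np

rng = np.random.default_rng(1)
N = 1000
z = rng.normal(size=N)
x = 1.0 * z + rng.normal(size=N)
y = 0.5 * z + rng.normal(size=N)

# Residuals of x and y after regressing on z, i.e. projections onto the
# hyperplane perpendicular to z.
r_x = x - (x @ z) / (z @ z) * z
r_y = y - (y @ z) / (z @ z) * z

print(r_x @ z, r_y @ z)   # both are numerically ~0: residuals are orthogonal to z
cos_phi = (r_x @ r_y) / (np.linalg.norm(r_x) * np.linalg.norm(r_y))
print(cos_phi)            # cosine of the angle between the two projections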

As conditional independence test

With the assumption that all involved variables are multivariate Gaussian, the partial correlation ρXY·Z is zero if and only if X is conditionally independent from Y given Z.[2] This property does not hold in the general case.

To test if a sample partial correlation \hat{\rho}_{XY\cdot\mathbf{Z}} vanishes, Fisher's z-transform of the partial correlation can be used:

z(\hat{\rho}_{XY\cdot\mathbf{Z}}) = \frac{1}{2} \ln\left(\frac{1+\hat{\rho}_{XY\cdot\mathbf{Z}}}{1-\hat{\rho}_{XY\cdot\mathbf{Z}}}\right).

The null hypothesis is H_0: \rho_{XY\cdot\mathbf{Z}} = 0, to be tested against the two-sided alternative H_A: \rho_{XY\cdot\mathbf{Z}} \neq 0. We reject H0 at significance level α if:

\sqrt{N - |\mathbf{Z}| - 3}\cdot |z(\hat{\rho}_{XY\cdot\mathbf{Z}})| > \Phi^{-1}(1-\alpha/2),

where Φ(·) is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, and N is the sample size. Note that this z-transform is approximate and that the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient and the partial variances is available.[3]
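
A minimal sketch of this test, assuming SciPy for the Gaussian quantile; the function name reject_zero_partial_corr and its arguments are illustrative:

import numpy as np
from scipy.stats import norm

def reject_zero_partial_corr(rho_hat, N, k, alpha=0.05):
    """Two-sided test of H0: rho_XY.Z = 0, with k = |Z| controlling variables."""
    z = 0.5 * np.log((1 + rho_hat) / (1 - rho_hat))   # Fisher's z-transform
    stat = np.sqrt(N - k - 3) * abs(z)
    return stat > norm.ppf(1 - alpha / 2)             # compare with Phi^{-1}(1 - alpha/2)

# Example: a sample partial correlation of 0.15 from N = 200 samples with 2 controls.
print(reject_zero_partial_corr(0.15, N=200, k=2))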

The distribution of the sample partial correlation was described by Fisher.[4]

Semipartial correlation (part correlation)

The semipartial (or part) correlation statistic is similar to the partial correlation statistic. Both measure variance after certain factors are controlled for, but to calculate the semipartial correlation one holds the third variable constant for either X or Y, whereas for the partial correlation one holds the third variable constant for both. The semipartial correlation measures unique and joint variance, while the partial correlation measures unique variance. The semipartial (or part) correlation can be viewed as more practically relevant "because it is scaled to (i.e., relative to) the total variability in the dependent (response) variable."[5] Conversely, it is less theoretically useful because it is less precise about the unique contribution of the independent variable. Although it may seem paradoxical, the semipartial correlation of X with Y is never greater in absolute value than the partial correlation of X with Y.
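
For a single controlling variable Z, the semipartial correlation of Y with X, where Z is partialled out of X only, can be written in the same notation as the first-order partial correlation above (a standard formula, stated here for comparison rather than taken from the references):

\rho_{Y(X\cdot Z)} = \frac{\rho_{XY} - \rho_{YZ}\rho_{XZ}}{\sqrt{1-\rho_{XZ}^2}}.

Comparing this with the first-order partial correlation shows that the two differ only by the factor \sqrt{1-\rho_{YZ}^2} in the denominator, which is why the semipartial correlation cannot exceed the partial correlation in absolute value.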

Use in time series analysis

In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag h, as

\phi(h)= \rho_{X_0X_h\cdot \{X_1,\dots,X_{h-1} \}}.
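
Using the regression-based definition of the partial correlation, the partial autocorrelation at lag h can be sketched as follows, assuming NumPy; the function name pacf_at_lag and the AR(1) example are illustrative:

import numpy as np

def pacf_at_lag(series, h):
    """Partial correlation of X_t and X_{t+h} given X_{t+1}, ..., X_{t+h-1}."""
    N = len(series) - h
    x0, xh = series[:N], series[h:]
    # Controlling variables: the intermediate lags, plus a constant term.
    Z = np.column_stack([np.ones(N)] + [series[k:N + k] for k in range(1, h)])
    r0 = x0 - Z @ np.linalg.lstsq(Z, x0, rcond=None)[0]
    rh = xh - Z @ np.linalg.lstsq(Z, xh, rcond=None)[0]
    return np.corrcoef(r0, rh)[0, 1]

# AR(1) example: the partial autocorrelation is large at lag 1 and ~0 beyond it.
rng = np.random.default_rng(2)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.normal()
print(pacf_at_lag(x, 1), pacf_at_lag(x, 2))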

References

  1. Rummel, R. J. (1976). "Understanding Correlation". http://www.hawaii.edu/powerkills/UC.HTM.
  2. Baba, Kunihiro; Shibata, Ritei; Sibuya, Masaaki (2004). "Partial correlation and conditional correlation as measures of conditional independence". Australian and New Zealand Journal of Statistics 46 (4): 657–664. doi:10.1111/j.1467-842X.2004.00360.x.
  3. Kendall, M. G.; Stuart, A. (1973). The Advanced Theory of Statistics, Volume 2 (3rd edition). ISBN 0-85264-215-6, Section 27.22.
  4. Fisher, R. A. (1924). "The distribution of the partial correlation coefficient". Metron 3 (3–4): 329–332. http://digital.library.adelaide.edu.au/dspace/handle/2440/15182.
  5. StatSoft, Inc. (2010). "Semi-Partial (or Part) Correlation", Electronic Statistics Textbook. Tulsa, OK: StatSoft. Accessed January 15, 2011.

Other

  • Guilford, J. P.; Fruchter, B. (1973). Fundamental statistics in psychology and education. Tokyo: McGraw-Hill Kogakusha, Ltd.
