Cauchy distribution

Cauchy distribution (Cauchy–Lorentz)

[Figure: probability density function of the Cauchy distribution; the purple curve is the standard Cauchy distribution.]
[Figure: cumulative distribution function of the Cauchy distribution.]

parameters: x_0 location (real); \gamma > 0 scale (real)
support: x \in (-\infty, +\infty)
pdf: \frac{1}{\pi\gamma\,\left[1 + \left(\frac{x-x_0}{\gamma}\right)^2\right]}
cdf: \frac{1}{\pi} \arctan\left(\frac{x-x_0}{\gamma}\right)+\frac{1}{2}
mean: does not exist
median: x_0
mode: x_0
variance: does not exist
skewness: does not exist
excess kurtosis: does not exist
entropy: \log(\gamma) + \log(4\,\pi)
mgf: does not exist
cf: \exp(x_0\,i\,t - \gamma\,|t|)

The Cauchy–Lorentz distribution, named after Augustin Cauchy and Hendrik Lorentz, is a continuous probability distribution. As a probability distribution, it is known as the Cauchy distribution, while among physicists, it is known as the Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution.

Its importance in physics is the result of its being the solution to the differential equation describing forced resonance.[1] In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane. In spectroscopy, it describes the shape of spectral lines subject to homogeneous broadening, in which all atoms interact in the same way with the frequency range contained in the line shape. Many mechanisms cause homogeneous broadening, most notably collision broadening, and Chantler–Alda radiation.[2] In its standard form, it is the maximum entropy probability distribution for a random variate X for which E(\ln(1+X^2)) = \ln(4).[3]

Characterization

Probability density function

The Cauchy distribution has the probability density function

f(x; x_0,\gamma) = \frac{1}{\pi\gamma \left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]} = \frac{1}{\pi} \left[ \frac{\gamma}{(x - x_0)^2 + \gamma^2} \right],

where x_0 is the location parameter, specifying the location of the peak of the distribution, and \gamma is the scale parameter which specifies the half-width at half-maximum (HWHM). \gamma is also equal to half the interquartile range and is sometimes called the probable error. Augustin-Louis Cauchy exploited such a density function in 1827, with an infinitesimal scale parameter, defining what would now be called a Dirac delta function.

The amplitude of the above Lorentzian function is given by

\text{Amplitude (or height)} = \frac{1}{\pi\gamma}.

The special case when x_0 = 0 and \gamma = 1 is called the standard Cauchy distribution, with probability density function

f(x; 0,1) = \frac{1}{\pi (1 + x^2)}.

In physics, a three-parameter Lorentzian function is often used:

f(x; x_0,\gamma,I) = \frac{I}{1 + \left(\frac{x-x_0}{\gamma}\right)^2} = I \left[ \frac{\gamma^2}{(x - x_0)^2 + \gamma^2} \right],

where I is the height of the peak.
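
For concreteness, the densities above can be evaluated numerically; the following is a minimal Python sketch (the function names and the NumPy dependency are our own illustrative choices):

    import numpy as np

    def cauchy_pdf(x, x0=0.0, gamma=1.0):
        # f(x; x0, gamma): location x0, scale gamma = half-width at half-maximum
        return 1.0 / (np.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

    def lorentzian(x, x0=0.0, gamma=1.0, height=1.0):
        # Three-parameter Lorentzian with peak height I = `height`
        return height * gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

    x = np.linspace(-5.0, 5.0, 1001)
    pdf = cauchy_pdf(x)        # standard Cauchy (x0 = 0, gamma = 1)
    print(pdf.max())           # ~1/pi, the amplitude 1/(pi*gamma)
    print(np.trapz(pdf, x))    # ~0.87: the heavy tails hold ~13% of the mass outside [-5, 5]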

Cumulative distribution function

The cumulative distribution function is:

F(x; x_0,\gamma)=\frac{1}{\pi} \arctan\left(\frac{x-x_0}{\gamma}\right)+\frac{1}{2}

and the quantile function (inverse cdf) of the Cauchy distribution is

Q(p; x_0,\gamma) = x_0 + \gamma\,\tan\left[\pi\left(p-\tfrac{1}{2}\right)\right].

It follows that the first and third quartiles are \left(x_0 - \gamma,\; x_0 + \gamma\right), and hence the interquartile range is 2\gamma.
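
A short sketch of the cdf and the quantile function in Python, checking the quartile statement above (function names are ours):

    import numpy as np

    def cauchy_cdf(x, x0=0.0, gamma=1.0):
        return np.arctan((x - x0) / gamma) / np.pi + 0.5

    def cauchy_quantile(p, x0=0.0, gamma=1.0):
        # Inverse cdf: x0 + gamma * tan(pi * (p - 1/2))
        return x0 + gamma * np.tan(np.pi * (p - 0.5))

    x0, gamma = 2.0, 3.0
    print(cauchy_quantile(0.25, x0, gamma))  # x0 - gamma = -1
    print(cauchy_quantile(0.75, x0, gamma))  # x0 + gamma = 5, so the IQR is 2*gamma = 6
    print(cauchy_cdf(cauchy_quantile(0.9, x0, gamma), x0, gamma))  # recovers 0.9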

The derivative of the quantile function, the quantile density function, for the Cauchy distribution is:

Q'(p; \gamma) = \gamma\,\pi\,\sec^2\left[\pi\left(p-\tfrac{1}{2}\right)\right].

The differential entropy of a distribution can be defined in terms of its quantile density,[4] specifically

h_e(\text{Cauchy}(\gamma)) = \int_0^1 \log\left(Q'(p; \gamma)\right)\,\mathrm dp = \log(\gamma) + \log(4\,\pi).
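
This identity can be checked numerically. The sketch below integrates \log Q'(p;\gamma) with the trapezoid rule; the grid and the small truncation at the endpoints (where Q' blows up, but only logarithmically) are arbitrary choices:

    import numpy as np

    gamma = 2.0
    p = np.linspace(1e-6, 1.0 - 1e-6, 200001)  # stay inside the open interval (0, 1)
    q_density = gamma * np.pi / np.cos(np.pi * (p - 0.5)) ** 2
    print(np.trapz(np.log(q_density), p))      # ~3.2242
    print(np.log(gamma) + np.log(4 * np.pi))   # log(gamma) + log(4*pi) = 3.2242...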

Properties

The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to x_0.

When U and V are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio U/V has the standard Cauchy distribution.

If  X_1, \cdots, X_n \, \! are independent and identically distributed random variables, each with a standard Cauchy distribution, then the sample mean  \left(X_1 + \cdots + X_n\right) / n \, \! has the same standard Cauchy distribution (the sample median, which is not affected by extreme values, can be used as a measure of central tendency). To see that this is true, compute the characteristic function of the sample mean:

\phi_{\overline{X}}(t) = \mathrm{E}\left(e^{i\,\overline{X}\,t}\right) = \left[\phi_X\left(\tfrac{t}{n}\right)\right]^n = \left(e^{-|t|/n}\right)^n = e^{-|t|},

where \overline{X} is the sample mean and \phi_X(t) = e^{-|t|} is the characteristic function of the standard Cauchy distribution (derived below); the sample mean therefore has the standard Cauchy characteristic function. This example serves to show that the hypothesis of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case.
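
This behaviour is easy to see in simulation. A small sketch using NumPy's standard_cauchy generator (the seed and sample sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 100, 10_000
    samples = rng.standard_cauchy((reps, n))

    # The mean of n standard Cauchy variates is again standard Cauchy, so its
    # quartiles stay near -1 and +1 no matter how large n gets:
    means = samples.mean(axis=1)
    print(np.percentile(means, [25, 50, 75]))    # roughly [-1, 0, 1]

    # The sample median, by contrast, does concentrate around 0:
    medians = np.median(samples, axis=1)
    print(np.percentile(medians, [25, 50, 75]))  # tight around 0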

The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution.[5]

The standard Cauchy distribution coincides with the Student's t-distribution with one degree of freedom.

Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the Cauchy distribution is the only univariate distribution which is closed under linear fractional transformations with real coefficients.[citation needed] In this connection, see also McCullagh's parametrization of the Cauchy distributions.

Characteristic function

Let X denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by

\phi_X(t; x_0,\gamma) = \mathrm{E}(e^{i\,X\,t}) = \int_{-\infty}^\infty f(x;x_{0},\gamma)e^{i\,x\,t}\,dx = e^{i x_{0}t - \gamma\left|t\right|},

which is just the Fourier transform of the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform:

f(x; x_0,\gamma) = \frac{1}{2\pi}\int_{-\infty}^\infty \phi_X(t;x_0,\gamma)e^{-i\,x\,t}\,dt.

Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have an expected value.
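
The Fourier pair can be checked by direct quadrature; the grid and the truncation of the infinite range below are arbitrary choices, so agreement is only to a few decimal places:

    import numpy as np

    x0, gamma, t = 1.0, 2.0, 0.7
    x = np.linspace(-2000.0, 2000.0, 400_001)
    f = 1.0 / (np.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))
    print(np.trapz(f * np.exp(1j * x * t), x))   # E(e^{iXt}) by quadrature
    print(np.exp(1j * x0 * t - gamma * abs(t)))  # exp(i*x0*t - gamma*|t|)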

Explanation of undefined moments

Mean

If a probability distribution has a density function f(x), then the mean is

\int_{-\infty}^\infty x f(x)\,dx. \qquad\qquad (1)

The question is now whether this is the same thing as

\int_0^\infty x f(x)\,dx - \int_{-\infty}^0 |x|\, f(x)\,dx. \qquad\qquad (2)

If at most one of the two terms in (2) is infinite, then (1) is the same as (2). But in the case of the Cauchy distribution, both the positive and negative terms of (2) are infinite. This means (2) is undefined. Moreover, if (1) is construed as a Lebesgue integral, then (1) is also undefined, because (1) is then defined simply as the difference (2) between positive and negative parts.

However, if (1) is construed as an improper integral rather than a Lebesgue integral, then (2) is undefined, and (1) is not necessarily well-defined. We may take (1) to mean

\lim_{a\to\infty}\int_{-a}^a x f(x)\,dx,

and this is its Cauchy principal value, which is zero, but we could also take (1) to mean, for example,

\lim_{a\to\infty}\int_{-2a}^{a} x f(x)\,dx,

which is not zero, as can be seen by computing the integral: for the standard Cauchy density the antiderivative of x f(x) is \ln(1+x^2)/(2\pi), so the limit equals \lim_{a\to\infty}\frac{\ln(1+a^2)-\ln(1+4a^2)}{2\pi} = -\frac{\ln 2}{\pi}.
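
Numerically, the dependence on the truncation scheme is easy to exhibit (the quadrature grid is an arbitrary choice):

    import numpy as np

    def mean_integral(lo, hi, n=2_000_001):
        # Integral of x*f(x) for the standard Cauchy density over [lo, hi]
        x = np.linspace(lo, hi, n)
        return np.trapz(x / (np.pi * (1.0 + x ** 2)), x)

    for a in (10.0, 100.0, 1000.0):
        print(a, mean_integral(-a, a), mean_integral(-2 * a, a))
    # The symmetric integrals are ~0; the asymmetric ones approach -log(2)/pi:
    print(-np.log(2) / np.pi)   # about -0.2206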

The integrand is not Henstock–Kurzweil integrable either: by Hake's theorem this would require \lim_{b\to\infty}\int_0^b x f(x)\,dx to exist and be finite, and here the limit is infinite. Various results in probability theory about expected values, such as the strong law of large numbers, fail to hold in such cases.

Higher moments

The Cauchy distribution does not have finite moments of any order greater than or equal to one. This follows from Hölder's inequality, which implies that higher absolute moments diverge whenever lower ones do. In particular, no second moment exists, as can be verified by direct computation:

\mathrm{E}(X^2) \propto \int_{-\infty}^{\infty} \frac{x^2}{1+x^2}\,dx = \int_{-\infty}^{\infty} dx - \int_{-\infty}^{\infty} \frac{1}{1+x^2}\,dx = \int_{-\infty}^{\infty} dx - \pi = \infty.

Because the mean does not exist, the variance (the second central moment) is undefined rather than infinite; this is distinctly different from having an infinite variance.

Estimation of parameters

Because the mean and variance of the Cauchy distribution are not defined, attempts to estimate these parameters will not be successful. For example, if N samples are taken from a Cauchy distribution, one may calculate the sample mean as:

\overline{x}=\frac{1}{N}\sum_{i=1}^N x_i

Although the sample values x_i will be concentrated about the central value x_0, the sample mean will become increasingly variable as more samples are taken, because of the increased likelihood of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the samples themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of x_0 than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more samples are taken.

Therefore, more robust means of estimating the central value x_0 and the scaling parameter \gamma are needed. One simple method is to take the median value of the sample as an estimator of x_0 and half the sample interquartile range as an estimator of \gamma. Other, more precise and robust methods have been developed.[6][7] For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for x_0 that is more efficient than using either the sample median or the full sample mean.[8][9] However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.[8][9]
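
The following sketch compares these estimators on simulated data ("middle 24%" below means discarding 38% of the order statistics in each tail; the seed and sample size are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    x0, gamma, n = 5.0, 2.0, 10_001
    x = x0 + gamma * rng.standard_cauchy(n)

    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(med)                 # sample median: a simple estimate of x0
    print((q3 - q1) / 2)       # half the sample IQR: a simple estimate of gamma

    xs = np.sort(x)
    k = int(0.38 * n)          # keep the middle 24% of the order statistics
    print(xs[k:n - k].mean())  # truncated mean: a more efficient estimate of x0

    print(x.mean())            # the raw sample mean, by contrast, is useless here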

Maximum likelihood can also be used to estimate the parameters x_0 and \gamma. However, this tends to be complicated by the fact that it requires finding the roots of a high-degree polynomial, and there can be multiple roots that represent local maxima.[10] Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples.[11] The log-likelihood function for the Cauchy distribution for sample size n is:


\hat\ell(x_0,\gamma \mid x_1,\dots,x_n) = n \log(\gamma) - \sum_{i=1}^n \log\left[\gamma^2 + (x_i - x_0)^2\right] - n \log(\pi)

Maximizing the log likelihood function with respect to x_0 and \gamma produces the following system of equations:


\sum_{i=1}^n \frac{x_i - x_0}{\gamma^2 + (x_i - x_0)^2} = 0

\sum_{i=1}^n \frac{\gamma^2}{\gamma^2 + (x_i - x_0)^2} - \frac{n}{2} = 0

Note that \sum_{i=1}^n \frac{\gamma^2}{\gamma^2 + (x_i - x_0)^2} is a monotone function in \gamma and that the solution \gamma must satisfy \min |x_i-x_0| \le \gamma \le \max |x_i-x_0|. Solving just for x_0 requires solving a polynomial of degree 2n − 1,[10] and solving just for \gamma requires solving a polynomial of degree n (first for \gamma^2, then x_0). Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating x_0 using the sample median is only about 81% as asymptotically efficient as estimating x_0 by maximum likelihood.[9][12] The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of x_0 as the maximum likelihood estimate.[9] When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for x_0.
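
As a numerical sketch, the log-likelihood above can also be maximized directly with a general-purpose optimizer instead of explicit root-finding; here scipy.optimize.minimize is used, with starting values chosen as the text suggests (all other names are ours):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    x = 5.0 + 2.0 * rng.standard_cauchy(2001)  # true x0 = 5, gamma = 2

    def negloglik(params):
        x0, log_gamma = params                 # optimize log(gamma) so that gamma > 0
        gamma = np.exp(log_gamma)
        return -(len(x) * np.log(gamma)
                 - np.log(gamma ** 2 + (x - x0) ** 2).sum()
                 - len(x) * np.log(np.pi))

    xs = np.sort(x)
    k = int(0.38 * len(x))                     # middle-24% mean as initial x0
    q1, q3 = np.percentile(x, [25, 75])
    start = [xs[k:len(x) - k].mean(), np.log((q3 - q1) / 2)]
    res = minimize(negloglik, start, method="Nelder-Mead")
    print(res.x[0], np.exp(res.x[1]))          # close to (5, 2)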

Circular Cauchy distribution

If X is Cauchy distributed with median μ and scale parameter γ, then the complex variable

Z = (X - i)/(X+i)\,

has unit modulus and is distributed on the unit circle with density:

P_{cc}(\theta;\zeta)=\frac{1 - |\zeta|^2}{2\pi |e^{i\theta} - \zeta|^2}

with respect to the angular variable \theta = \arg(Z),[citation needed] where

\zeta = \frac{\psi - i}{\psi + i}

and ψ expresses the two parameters of the associated linear Cauchy distribution for x as a complex number:

\psi=\mu+i\gamma\,

The distribution P_{cc}(\theta;\zeta) is called the circular Cauchy distribution[13][14] (also the complex Cauchy distribution)[citation needed] with parameter \zeta. The circular Cauchy distribution is related to the wrapped Cauchy distribution. If P_{wc}(\theta;\psi) is a wrapped Cauchy distribution with the parameter \psi = \mu + i\gamma representing the parameters of the corresponding "unwrapped" Cauchy distribution in the variable y, where \theta = y \bmod 2\pi, then

P_{wc}(\theta;\psi) = P_{cc}(\theta; e^{i\psi})

See also McCullagh's parametrization of the Cauchy distributions and Poisson kernel for related concepts.

The circular Cauchy distribution expressed in complex form has finite moments of all orders

 \operatorname{E}(Z^r) = \zeta^r,  \quad \operatorname{E}(\bar Z^r) = \bar\zeta^r

for integer r \ge 1. For |\phi| < 1, the transformation

U(z, \phi) =  (z - \phi)/(1 - \bar \phi z)

is holomorphic on the unit disk, and the transformed variable U(Z,ϕ) is distributed as complex Cauchy with parameter U(ζ,ϕ).

Given a sample z_1, \ldots, z_n of size n > 2, the maximum-likelihood equation

n^{-1} \sum_{j=1}^n U(z_j, \hat\zeta) = 0

can be solved by a simple fixed-point iteration:

\zeta^{(r+1)} = U\left(n^{-1} \sum_{j=1}^n U(z_j, \zeta^{(r)}),\; -\zeta^{(r)}\right)

starting with \zeta^{(0)} = 0. The sequence of likelihood values is non-decreasing, and the solution is unique for samples containing at least three distinct values.[15]

The maximum-likelihood estimate for the median (\hat\mu) and scale parameter (\hat\gamma) of a real Cauchy sample is obtained by the inverse transformation:

\hat\mu + i\hat\gamma = i(1+\hat\zeta)/(1-\hat\zeta).
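
A sketch of the whole procedure, from a real Cauchy sample through the fixed-point iteration to \hat\mu and \hat\gamma (complex arithmetic in NumPy; the seed, sample size, and iteration count are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)
    mu, gamma = 1.0, 0.5
    x = mu + gamma * rng.standard_cauchy(5001)  # real Cauchy sample
    z = (x - 1j) / (x + 1j)                     # map onto the unit circle

    def U(z, phi):
        # Mobius transformation of the unit disk
        return (z - phi) / (1.0 - np.conj(phi) * z)

    zeta = 0.0 + 0.0j                           # start at zeta^(0) = 0
    for _ in range(200):
        zeta = U(np.mean(U(z, zeta)), -zeta)    # the fixed-point step above

    psi_hat = 1j * (1 + zeta) / (1 - zeta)      # invert zeta = (psi - i)/(psi + i)
    print(psi_hat.real, psi_hat.imag)           # close to (mu, gamma) = (1, 0.5)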

For n ≤ 4, closed-form expressions are known for \hat\zeta.[10] The density of the maximum-likelihood estimator at t in the unit disk is necessarily of the form:

\frac{p_n(\chi(t, \zeta))}{4\pi(1 - |t|^2)^2} ,

where

\chi(t, \zeta) = \frac{ |t - \zeta|^2}{4(1 - |t|^2)(1 - |\zeta|^2)}.

Formulae for p3 and p4 are available.[16]

Multivariate Cauchy distribution

A random vector X = (X_1, ..., X_k)′ is said to have the multivariate Cauchy distribution if every linear combination of its components Y = a_1X_1 + ... + a_kX_k has a Cauchy distribution. That is, for any constant vector a ∈ R^k, the random variable Y = a′X should have a univariate Cauchy distribution.[17] The characteristic function of a multivariate Cauchy distribution is given by:

\phi_X(t) = e^{i\,x_0(t) - \gamma(t)},

where x_0(t) and \gamma(t) are real functions with x_0(t) a homogeneous function of degree one and \gamma(t) a positive homogeneous function of degree one.[17] More formally:[17]

x_0(at) = a\,x_0(t) and \gamma(at) = |a|\,\gamma(t) for all t.

An example of a bivariate Cauchy distribution can be given by:[18]


f(x, y; x_0, y_0, \gamma) = \frac{1}{2\pi} \left[ \frac{\gamma}{\left((x - x_0)^2 + (y - y_0)^2 + \gamma^2\right)^{3/2}} \right].

Note that in this example, even though there is no analogue to a covariance matrix, x and y are not statistically independent.[18]

Analogously to the univariate density, the multidimensional Cauchy density is related to the multivariate Student distribution. They are equivalent when the degrees-of-freedom parameter is equal to one. The density of a k-dimensional Student distribution with one degree of freedom is:


f( {\mathbf x} ; {\mathbf\mu},{\mathbf\Sigma}, k)= \frac{\Gamma\left[(1+k)/2\right]}{\Gamma(1/2)\pi^{k/2}\left|{\mathbf\Sigma}\right|^{1/2}\left[1+({\mathbf x}-{\mathbf\mu})^T{\mathbf\Sigma}^{-1}({\mathbf x}-{\mathbf\mu})\right]^{(1+k)/2}} .

Properties and details for this density can be obtained by taking it as a particular case of the Multivariate Student density.
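
This equivalence also gives a convenient sampler: a multivariate t variate with one degree of freedom can be drawn by dividing a correlated Gaussian vector by an independent |N(0,1)| variate. A sketch (the names, seed, and the example vector a are our choices):

    import numpy as np

    rng = np.random.default_rng(4)
    mu = np.array([1.0, -2.0])
    Sigma = np.array([[2.0, 0.6],
                      [0.6, 1.0]])

    n = 100_000
    Z = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
    W = np.abs(rng.standard_normal((n, 1)))  # chi variate with 1 degree of freedom
    X = mu + Z / W                           # multivariate Cauchy draws

    # Any linear combination a'X is univariate Cauchy with location a'mu and
    # scale sqrt(a' Sigma a), so its sample quartiles sit near location -/+ scale:
    a = np.array([0.3, -1.2])
    print(np.percentile(X @ a, [25, 50, 75]))  # ~ [1.61, 2.70, 3.79]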

Transformation properties

  • If X ~ Cauchy(ψ), written with the single complex parameter ψ = x_0 + iγ of McCullagh's parametrization, then for real numbers a, b, c and d:

\frac{aX+b}{cX+d} \sim \mathrm{Cauchy}\left(\frac{a\psi+b}{c\psi+d}\right)

  • Using the same convention, if X ~ Cauchy(ψ) then:

\frac{X-i}{X+i} \sim \mathrm{CCauchy}\left(\frac{\psi-i}{\psi+i}\right)

where "CCauchy" is the circular Cauchy distribution.

Related distributions

Relativistic Breit–Wigner distribution

In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution.[citation needed]

References

  1. ^ http://webphysics.davidson.edu/Projects/AnAntonelli/node5.html Note that the intensity, which follows the Cauchy distribution, is the square of the amplitude.
  2. ^ E. Hecht (1987). Optics (2nd ed.). Addison-Wesley. p. 603. 
  3. ^ Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics (Elsevier): 219–230. http://www.econ.yorku.ca/cesg/papers/berapark.pdf. Retrieved 2011-06-02. 
  4. ^ Vasicek, Oldrich (1976). "A Test for Normality Based on Sample Entropy". Journal of the Royal Statistical Society, Series B (Methodological) 38 (1): 54–59. 
  5. ^ Kotz, S. et al. (2006). Encyclopedia of Statistical Sciences (2nd ed.). John Wiley & Sons. p. 778. ISBN 978-0-471-15044-2. 
  6. ^ Cane, Gwenda J. (1974). "Linear Estimation of Parameters of the Cauchy Distribution Based on Sample Quantiles". Journal of the American Statistical Association 69 (345): 243–245. JSTOR 2285535. 
  7. ^ Zhang, Jin (2010). "A Highly Efficient L-estimator for the Location Parameter of the Cauchy Distribution". Computational Statistics 25 (1): 97–105. http://www.springerlink.com/content/3p1430175v4806jq. 
  8. ^ a b Rothenberg, Thomas J.; Fisher, Franklin M.; Tilanus, C.B. (1964). "A note on estimation from a Cauchy sample". Journal of the American Statistical Association 59 (306): 460–463. 
  9. ^ a b c d Bloch, Daniel (1966). "A note on the estimation of the location parameters of the Cauchy distribution". Journal of the American Statistical Association 61 (316): 852–855. JSTOR 2282794. 
  10. ^ a b c Ferguson, Thomas S. (1978). "Maximum Likelihood Estimates of the Parameters of the Cauchy Distribution for Samples of Size 3 and 4". Journal of the American Statistical Association 73 (361): 211. JSTOR 2286549. 
  11. ^ Cohen Freue, Gabriella V. (2007). "The Pitman estimator of the Cauchy location parameter". Journal of Statistical Planning and Inference 137: 1901. http://faculty.ksu.edu.sa/69424/USEPAP/Coushy%20dist.pdf. 
  12. ^ Barnett, V. D. (1966). "Order Statistics Estimators of the Location of the Cauchy Distribution". Journal of the American Statistical Association 61 (316): 1205. JSTOR 2283210. 
  13. ^ McCullagh, P., "Conditional inference and Cauchy models", Biometrika, volume 79 (1992), pages 247–259. PDF from McCullagh's homepage.
  14. ^ K.V. Mardia (1972). Statistics of Directional Data. Academic Press. [page needed]
  15. ^ J. Copas (1975). "On the unimodality of the likelihood function for the Cauchy distribution". Biometrika 62: 701–704. 
  16. ^ P. McCullagh (1996). "Mobius transformation and Cauchy parameter estimation.". Annals of Statistics 24: 786–808. JSTOR 2242674. 
  17. ^ a b c Ferguson, Thomas S. (1962). "A Representation of the Symmetric Bivariate Cauchy Distribution". Journal of the American Statistical Association: 1256. JSTOR 2237984. 
  18. ^ a b Molenberghs, Geert; Lesaffre, Emmanuel (1997). "Non-linear Integral Equations to Approximate Bivariate Densities with Given Marginals and Dependence Function". Statistica Sinica 7: 713–738. http://www3.stat.sinica.edu.tw/statistica/oldpdf/A7n310.pdf. 
