Copula (statistics)

In statistics, a copula is a general way of formulating a multivariate distribution so that various types of dependence can be represented. Other ways of formulating multivariate distributions include conceptually based approaches, in which the real-world meaning of the variables is used to suggest what types of relationships might occur. In contrast, the approach via copulas might be considered more mechanical, but it allows much more general types of dependence to be included than a conceptual approach would usually invoke.

The approach to formulating a multivariate distribution using a copula is based on the idea that a simple transformation can be made of each marginal variable in such a way that each transformed marginal variable has a uniform distribution. When applied in a practical context, such transformations might be fitted as an initial step for each margin, or the parameters of the transformations might be fitted jointly with those of the copula.
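The transformation described above is the probability integral transform: passing each variable through its own marginal distribution function yields a uniformly distributed variable. As an illustration (not part of the original formulation), a minimal Python sketch assuming a known normal margin:

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(0)
# Hypothetical margin: N(mu=2, sigma=3).
sample = [random.gauss(2.0, 3.0) for _ in range(10000)]

# Probability integral transform: push each observation through its marginal CDF.
u = [normal_cdf((x - 2.0) / 3.0) for x in sample]

# The transformed values lie in (0, 1) and are approximately uniform.
print(min(u) > 0.0 and max(u) < 1.0)
print(abs(sum(u) / len(u) - 0.5) < 0.02)  # mean of U(0,1) is 0.5
```

In practice the marginal CDF is usually unknown and is either fitted first for each margin or estimated jointly with the copula parameters, as the text notes.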

There are many families of copulas which differ in the detail of the dependence they represent. A family will typically have several parameters which relate to the strength and form of the dependence. However, it is possible to specify a dependency structure and for a copula to emerge using a conditioning technique such as the D distribution.

Definition

A copula is a multivariate joint distribution defined on the "n"-dimensional unit cube [0,1]^n such that every marginal distribution is uniform on the interval [0,1].

Specifically, C \colon [0,1]^n \to [0,1] is an "n"-dimensional copula (briefly, an "n"-copula) if:

: C(\mathbf u) = 0 whenever \mathbf u \in [0,1]^n has at least one component equal to 0;

: C(\mathbf u) = u_i whenever \mathbf u \in [0,1]^n has all components equal to 1 except the i-th, which is equal to u_i;

: C(\mathbf u) is n-increasing, i.e., for each box B = \times_{i=1}^{n} [x_i, y_i] \subseteq [0,1]^n,

: V_C(B) := \sum_{\mathbf z \,\in\, \times_{i=1}^{n}\{x_i, y_i\}} (-1)^{N(\mathbf z)}\, C(\mathbf z) \ge 0,

where N(\mathbf z) = \operatorname{card}\{k \mid z_k = x_k\}. The quantity V_C(B) is the so-called C-volume of B.
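The C-volume is an inclusion-exclusion sum over the 2^n vertices of the box, with sign determined by how many coordinates sit at the lower endpoint. A small Python sketch (an illustration, not from the original article) computes it for the independence copula C(u) = u_1 u_2 \cdots u_n, for which the C-volume of a box is just its ordinary area:

```python
import itertools

def product_copula(u):
    """Independence copula: C(u) = u1 * u2 * ... * un."""
    p = 1.0
    for ui in u:
        p *= ui
    return p

def c_volume(C, box):
    """C-volume of box = [x1,y1] x ... x [xn,yn]: the signed
    inclusion-exclusion sum of C over the 2^n vertices."""
    total = 0.0
    for corners in itertools.product(*[(x, y) for (x, y) in box]):
        # N(z) = number of coordinates equal to the lower endpoint x_i.
        n_lower = sum(1 for z, (x, _) in zip(corners, box) if z == x)
        total += (-1) ** n_lower * C(corners)
    return total

box = [(0.2, 0.7), (0.1, 0.9)]
v = c_volume(product_copula, box)
print(round(v, 10))  # (0.7-0.2)*(0.9-0.1) = 0.4
```

Non-negativity of every such volume is exactly the n-increasing condition above.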

Sklar's theorem

The theorem proposed by Sklar (1959) underlies most applications of the copula. Sklar's theorem states that, given a joint distribution function "H" for "p" variables with respective marginal distribution functions, there exists a copula "C" that binds the margins together to give the joint distribution.

For the bivariate case, Sklar's theorem can be stated as follows. For any bivariate distribution function "H"("x", "y"), let "F"("x") = "H"("x", +∞) and "G"("y") = "H"(+∞, "y") be the univariate marginal distribution functions. Then there exists a copula "C" such that

:H(x,y)=C(F(x),G(y)),

(where we have identified the distribution "C" with its cumulative distribution function). Moreover, if the marginal distributions "F"("x") and "G"("y") are continuous, the copula "C" is unique. Otherwise, "C" is uniquely determined only on the range of values of the marginal distributions.
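Sklar's construction can be sketched directly in code. The following Python example (an illustration, not from the original article) couples two exponential margins with the comonotone copula M(u,v) = min(u,v) defined in the next section:

```python
import math

def exp_cdf(x, lam):
    """CDF of an exponential distribution with rate lam."""
    return 1.0 - math.exp(-lam * x) if x > 0 else 0.0

def min_copula(u, v):
    """Comonotone (maximum) copula M(u, v) = min(u, v)."""
    return min(u, v)

def joint_H(x, y, C, F, G):
    """Sklar's theorem: H(x, y) = C(F(x), G(y))."""
    return C(F(x), G(y))

# Hypothetical margins: Exp(1) for X and Exp(2) for Y.
F = lambda x: exp_cdf(x, 1.0)
G = lambda y: exp_cdf(y, 2.0)

# H(1, 1) = min(F(1), G(1)) = 1 - exp(-1) under perfect positive dependence.
print(joint_H(1.0, 1.0, min_copula, F, G))
```

Swapping in a different copula changes the dependence structure while leaving both margins untouched, which is the point of the theorem.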

Fréchet-Hoeffding copula boundaries

Minimum copula: This is the lower bound for all copulas. In the bivariate case only, it represents perfect negative dependence between variates.

: W(u,v) = \max(0,\, u+v-1).

For n-variate copulas, the lower bound is given by

: W(u_1,\ldots,u_n) := \max\left\{1-n+\sum_{i=1}^n u_i,\; 0\right\} \le C(u_1,\ldots,u_n),

although for n ≥ 3 the function W is itself no longer a copula.

Maximum copula: This is the upper bound for all copulas. It represents perfect positive dependence between variates:

: M(u,v) = \min(u,v).

For n-variate copulas, the upper bound is given by

: C(u_1,\ldots,u_n) \le \min_{j \in \{1,\ldots,n\}} u_j =: M(u_1,\ldots,u_n).

Conclusion: every copula C(u,v) satisfies

: W(u,v) \le C(u,v) \le M(u,v).

In the multivariate case, the corresponding inequality is

: W(u_1,\ldots,u_n) \le C(u_1,\ldots,u_n) \le M(u_1,\ldots,u_n).

Gaussian copula

One example of a copula often used for modelling in finance is the Gaussian copula, which is constructed from the bivariate normal distribution via Sklar's theorem. For "X" and "Y" distributed as standard bivariate normal with correlation "ρ", the Gaussian copula function is

: C_\rho(U,V) = \Phi_\rho\left(\Phi^{-1}(U),\, \Phi^{-1}(V)\right)

where U = Φ(X) and V = Φ(Y) are the probability transforms of X and Y, taking values in (0,1), and Φ denotes the standard normal cumulative distribution function. These are "percentile-to-percentile transformations", under which X and Y are carried into

: A = \Phi^{-1}(U) \quad \text{and} \quad B = \Phi^{-1}(V).

Differentiating C yields the copula density:

: c_\rho(U,V) = \frac{\phi_\rho\left(\Phi^{-1}(U),\, \Phi^{-1}(V)\right)}{\phi\left(\Phi^{-1}(U)\right)\, \phi\left(\Phi^{-1}(V)\right)}

where

: \phi_\rho(x,y) = \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)}\left[x^2 + y^2 - 2\rho x y\right]\right)

is the density of the standard bivariate Gaussian with Pearson's product-moment correlation coefficient \rho, and \phi is the standard univariate normal density.
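Sampling from the bivariate Gaussian copula follows directly from the construction above: draw correlated standard normals, then map each through Φ. A stdlib-only Python sketch (an illustration, not from the original article):

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_gaussian_copula(rho, n, seed=1):
    """Draw n pairs (U, V) from the bivariate Gaussian copula:
    correlate two standard normals (2-d Cholesky), then apply Phi."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((normal_cdf(z1), normal_cdf(z2)))
    return out

pairs = sample_gaussian_copula(0.8, 20000)

# Each margin is approximately U(0,1) regardless of rho; only the
# dependence between U and V reflects the correlation parameter.
mean_u = sum(u for u, _ in pairs) / len(pairs)
print(abs(mean_u - 0.5) < 0.01)
```

The dependence strength is controlled entirely by ρ, while both margins stay uniform, which is what distinguishes the copula from the underlying bivariate normal.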

Archimedean copulas

Archimedean copulas are an important family of copulas: they have a simple form with properties such as associativity, and they admit a variety of dependence structures. Unlike elliptical copulas (e.g. the Gaussian), most Archimedean copulas have closed-form expressions and are not derived from multivariate distribution functions via Sklar's theorem.

One particularly simple form of an "n"-dimensional copula is

: H(x_1, x_2, \dots, x_n) = \Psi^{-1}\left(\sum_{i=1}^n \Psi(F_i(x_i))\right),

where Psi is known as a "generator function". Such copulas are known as "Archimedean". Any generator function which satisfies the properties below is the basis for a valid copula:

: \Psi(1) = 0; \qquad \lim_{x \to 0} \Psi(x) = \infty; \qquad \Psi'(x) < 0; \qquad \Psi''(x) > 0.

Product copula: Also called the independent copula, this copula has no dependence between variates. Its density function is unity everywhere.

: \Psi(x) = -\ln(x); \qquad H(x,y) = F(x)\,G(y).

Where the generator function is indexed by a parameter, a whole family of copulas may be Archimedean. For example:

Clayton copula:

: \Psi(x) = x^{\theta} - 1; \qquad \theta \le 0; \qquad H(x,y) = \left(F(x)^{\theta} + G(y)^{\theta} - 1\right)^{1/\theta}.

In the limit θ → 0, the Clayton copula reduces to the product copula, i.e. the random variables are statistically independent. The generator-function approach extends to multivariate copulas simply by including more additive terms.

Gumbel copula:

: \Psi(x) = (-\ln(x))^{\alpha}.

Frank copula:

: \Psi(x) = \ln\left(\frac{e^{\alpha x} - 1}{e^{\alpha} - 1}\right).
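The generator construction can be sketched numerically. The following Python example (an illustration, not from the original article) builds the bivariate Clayton copula from its generator, using the article's θ ≤ 0 convention, and checks the result against the closed form given above:

```python
def clayton_psi(x, theta):
    """Clayton generator in the article's convention (theta <= 0)."""
    return x ** theta - 1.0

def clayton_psi_inv(t, theta):
    """Inverse of the Clayton generator."""
    return (1.0 + t) ** (1.0 / theta)

def archimedean_copula(us, psi, psi_inv):
    """Archimedean construction: C(u1,...,un) = psi_inv(sum psi(ui))."""
    return psi_inv(sum(psi(u) for u in us))

theta = -2.0
C = archimedean_copula(
    [0.3, 0.6],
    lambda x: clayton_psi(x, theta),
    lambda t: clayton_psi_inv(t, theta),
)

# Same value directly from H(u, v) = (u^theta + v^theta - 1)^(1/theta).
direct = (0.3 ** theta + 0.6 ** theta - 1.0) ** (1.0 / theta)
print(abs(C - direct) < 1e-12)
```

Plugging in the Gumbel or Frank generator instead of the Clayton one yields the corresponding copula with no other change to the construction.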

Applications

Copulas are used in the pricing of collateralized debt obligations. Dependence modelling with copula functions is widely used in applications of financial risk assessment and actuarial analysis. More recently, they have been applied to the database formulation for the reliability analysis of highway bridges and to various multivariate simulation studies in civil, mechanical and offshore engineering.

See also

*D distribution

References

Notes

General

* David G. Clayton (1978), "A model for association in bivariate life tables and its application in epidemiological studies of familial tendency in chronic disease incidence", "Biometrika" 65, 141-151. [http://links.jstor.org/sici?sici=0006-3444%28197804%2965%3A1%3C141%3AAMFAIB%3E2.0.CO%3B2-Y JSTOR (subscription)]
* Frees, E.W., Valdez, E.A. (1998), "Understanding Relationships Using Copulas", "North American Actuarial Journal" 2, 1-25. [http://www.soa.org/library/journals/north-american-actuarial-journal/1998/january/naaj9801_1.pdf Link to NAAJ copy]
* Roger B. Nelsen (1999), "An Introduction to Copulas". ISBN 0-387-98623-5.
* S. Rachev, C. Menn, F. Fabozzi (2005), "Fat-Tailed and Skewed Asset Return Distributions". ISBN 0-471-71886-6.
* A. Sklar (1959), "Fonctions de répartition à "n" dimensions et leurs marges", "Publications de l'Institut de Statistique de L'Université de Paris" 8, 229-231.
* C. Schölzel, P. Friederichs (2008), "Multivariate non-normally distributed random variables in climate research – introduction to the copula approach". [http://www.meteo.uni-bonn.de/mitarbeiter/CSchoelzel/sources/schoelzel_egu_2008_copulas_talk_01.pdf PDF]
* W.T. Shaw, K.T.A. Lee (2006), "Copula Methods vs Canonical Multivariate Distributions: The Multivariate Student T Distribution with General Degrees of Freedom". [http://www.mth.kcl.ac.uk/~shaww/web_page/papers/MultiStudentc.pdf PDF]
* Srinivas Sriramula, Devdas Menon and A. Meher Prasad (2006), "Multivariate Simulation and Multimodal Dependence Modeling of Vehicle Axle Weights with Copulas", "ASCE Journal of Transportation Engineering" 132 (12), 945–955. (doi 10.1061/(ASCE)0733-947X(2006)132:12(945)) [http://cedb.asce.org/cgi/WWWdisplay.cgi?0613154 ASCE (subscription)]

External links

* [http://mathworld.wolfram.com/SklarsTheorem.html MathWorld Eric W. Weisstein. "Sklar's Theorem." From MathWorld--A Wolfram Web Resource]
* [http://www.flll.jku.at/mediawiki Copula Wiki: community portal for researchers with interest in copulas]
* [http://www.vosesoftware.com/ModelRiskHelp/index.htm#Modeling_correlation/Archimedean_copulas_-_the_Clayton_Frank_and_Gumbel.htm Explanation of Archimedean copulas: The Clayton, Frank and Gumbel]
* [http://www.mathfinance.cn/tags/copula A collection of Copula simulation and estimation codes]

Software

* [http://www.vosesoftware.com/ModelRiskHelp/index.htm#Help_on_ModelRisk/Copulas/Copulas_in_ModelRisk.htm A tool for modeling copulas in Excel] by Vose Software

