# Cumulant


In probability theory and statistics, the cumulants κn of a probability distribution are a set of quantities that provide an alternative to the moments of the distribution. The moments determine the cumulants in the sense that any two probability distributions whose moments are identical will have identical cumulants as well, and similarly the cumulants determine the moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments.

Just as for moments, where joint moments are used for collections of random variables, it is possible to define joint cumulants.

## Introduction

The cumulants κn of a random variable X are defined via the cumulant-generating function $g(t) = \sum_{n=1}^\infty \kappa_n \frac{t^n}{n!}\,,$

using the (non-central) moments μ′n of X and the moment-generating function, $\operatorname{E}(e^{tX}) = 1 + \sum_{m=1}^\infty \mu'_m \frac{t^m}{m!}\,,$

with a formal power series logarithm: \begin{align}g(t) &= \log(\operatorname{E}(e^{tX})) = - \sum_{n=1}^\infty \frac{1}{n}\left(1-\operatorname{E}(e^{tX})\right)^n = - \sum_{n=1}^\infty \frac{1}{n}\left(-\sum_{m=1}^\infty \mu'_m \frac{t^m}{m!}\right)^n \\ &= \mu'_1 t + \left(\mu'_2 - {\mu'_1}^2\right) \frac{t^2}{2!} + \left(\mu'_3 - 3\mu'_2\mu'_1 + 2{\mu'_1}^3\right) \frac{t^3}{3!} + \cdots . \end{align}

The cumulants of a distribution are closely related to the distribution's moments. For example, if a random variable X admits an expected value μ = E(X) and a variance σ2 = E((X − μ)2), then these are the first two cumulants: μ = κ1 and σ2 = κ2.

Generally, the cumulants can be extracted from the cumulant-generating function via differentiation (at zero) of g(t). That is, the cumulants appear as the coefficients in the Maclaurin series of g(t): \begin{align} \kappa_1 &= g'(0) = \mu'_1 = \mu, \\ \kappa_2 &= g''(0) = \mu'_2 - {\mu'_1}^2 = \sigma^2, \\ &{} \ \ \vdots \\ \kappa_n &= g^{(n)}(0), \\ &{} \ \ \vdots \end{align}
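As a concrete check of this definition, one can differentiate a known cumulant-generating function symbolically. A sketch using `sympy`, taking the rate-1 exponential distribution (whose moment-generating function is 1/(1 − t) for t < 1) as an illustrative example:

```python
import sympy as sp

t = sp.symbols('t')
M = 1 / (1 - t)   # MGF of the rate-1 exponential distribution, valid for t < 1
g = sp.log(M)     # cumulant-generating function: g(t) = -log(1 - t)

# kappa_n = g^(n)(0): the coefficients of the Maclaurin series of g(t)
kappas = [sp.diff(g, t, n).subs(t, 0) for n in range(1, 6)]
print(kappas)  # [1, 1, 2, 6, 24], i.e. kappa_n = (n-1)!
```

Since g(t) = −log(1 − t) = Σ tⁿ/n, the nth cumulant of this distribution is (n − 1)!.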

Note that expectation values are sometimes denoted by angle brackets, e.g., $\mu'_n = \operatorname{E}(X^n)=\langle X^n \rangle \,$

and cumulants can be denoted by angle brackets with the subscript c,[citation needed] e.g., $\kappa_n = \langle X^n\rangle_c. \,$

Some writers prefer to define the cumulant generating function as the natural logarithm of the characteristic function, which is sometimes also called the second characteristic function, $h(t)=\sum_{n=1}^\infty \kappa_n \frac{(it)^n}{n!}=\log(\operatorname{E} (e^{i t X}))=\mu it - \sigma^2 \frac{ t^2}{2} + \cdots.\,$

The advantage of h(t)—in some sense the function g(t) evaluated for (purely) imaginary arguments—is that E(eitX) will be well defined for all real values of t even when E(etX) is not well defined for all real values of t, such as can occur when there is "too much" probability that X has a large magnitude. Although h(t) will be well defined, it nonetheless may mimic g(t) by not having a Maclaurin series beyond (or, rarely, even to) linear order in the argument t. Thus, many cumulants may still not be well defined. Nevertheless, even when h(t) does not have a long Maclaurin series it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and stable distribution (related to the Lévy distribution) are examples of distributions for which the generating functions do not have power-series expansions.

## Uses in mathematical statistics

Working with cumulants can have an advantage over using moments because for independent variables X and Y, \begin{align} g_{X+Y}(t) & =\log(\operatorname{E}(e^{t(X+Y)})) = \log(\operatorname{E}(e^{tX})\operatorname{E}(e^{tY})) \\ & = \log(\operatorname{E}(e^{tX})) + \log(\operatorname{E}(e^{tY})) = g_X(t) + g_Y(t). \end{align}

so that each cumulant of a sum is the sum of the corresponding cumulants of the addends.
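This additivity can be verified exactly on small discrete distributions. A minimal sketch in pure Python using exact rational arithmetic (the two Bernoulli distributions below are arbitrary illustrative choices):

```python
from fractions import Fraction
from itertools import product

def cumulants(pmf):
    """First four cumulants of a finite distribution {value: probability},
    via the standard polynomials in the raw moments."""
    m1, m2, m3, m4 = (sum(p * x**n for x, p in pmf.items()) for n in (1, 2, 3, 4))
    return [m1,
            m2 - m1**2,
            m3 - 3*m2*m1 + 2*m1**3,
            m4 - 4*m3*m1 - 3*m2**2 + 12*m2*m1**2 - 6*m1**4]

X = {0: Fraction(1, 2), 1: Fraction(1, 2)}   # Bernoulli(1/2)
Y = {0: Fraction(2, 3), 1: Fraction(1, 3)}   # Bernoulli(1/3)

# exact distribution of the independent sum X + Y (convolution)
S = {}
for (x, px), (y, py) in product(X.items(), Y.items()):
    S[x + y] = S.get(x + y, Fraction(0)) + px * py

# each cumulant of the sum equals the sum of the cumulants
assert cumulants(S) == [a + b for a, b in zip(cumulants(X), cumulants(Y))]
```

Because the arithmetic is exact, the equality holds to the last digit, as the identity g_{X+Y} = g_X + g_Y predicts.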

A distribution with given cumulants κn can be approximated through an Edgeworth series.

## Cumulants of some discrete probability distributions

• The constant random variable X = 1. The derivative of the cumulant generating function is g′(t) = 1. The first cumulant is κ1 = g′(0) = 1 and the other cumulants are zero, κ2 = κ3 = κ4 = ... = 0.
• The constant random variables X = μ. Every cumulant is just μ times the corresponding cumulant of the constant random variable X = 1. The derivative of the cumulant generating function is g′(t) = μ. The first cumulant is κ1 = g′(0) = μ and the other cumulants are zero, κ2 = κ3 = κ4 = ... = 0. In this sense the derivative of the cumulant generating function generalizes the real constant μ.
• The Bernoulli distributions (number of successes in one trial with probability p of success). The special case p = 1 is the constant random variable X = 1. The derivative of the cumulant generating function is $g'(t)=((p^{-1}-1)\cdot e^{-t}+1)^{-1}$. The first cumulants are $\kappa_1=g'(0)=p$ and $\kappa_2=g''(0)=p(1-p)$. The cumulants satisfy a recursion formula $\kappa_{n+1}=p (1-p) \frac{d\kappa_n}{dp}.\,$
• The geometric distributions (number of failures before one success with probability p of success on each trial). The derivative of the cumulant generating function is $g'(t)=((1-p)^{-1}\cdot e^{-t}-1)^{-1}$. The first cumulants are $\kappa_1=g'(0)=p^{-1}-1$ and $\kappa_2=g''(0)=\kappa_1 p^{-1}$. Substituting $p=(\mu+1)^{-1}$ gives $g'(t)=((\mu^{-1}+1)\cdot e^{-t}-1)^{-1}$ and κ1 = μ.
• The Poisson distributions. The derivative of the cumulant generating function is $g'(t)=\mu e^{t}$. All cumulants are equal to the parameter: κ1 = κ2 = κ3 = ... = μ.
• The binomial distributions (number of successes in n independent trials with probability p of success on each trial). The special case n = 1 is a Bernoulli distribution. Every cumulant is just n times the corresponding cumulant of the corresponding Bernoulli distribution. The derivative of the cumulant generating function is $g'(t)=n((p^{-1}-1)\cdot e^{-t}+1)^{-1}$. The first cumulants are $\kappa_1=g'(0)=np$ and $\kappa_2=g''(0)=\kappa_1(1-p)$. Substituting $p=\mu n^{-1}$ gives $g'(t)=((\mu^{-1}-n^{-1})\cdot e^{-t}+n^{-1})^{-1}$ and κ1 = μ. The limiting case $n^{-1}=0$ is a Poisson distribution.
• The negative binomial distributions (number of failures before n successes with probability p of success on each trial). The special case n = 1 is a geometric distribution. Every cumulant is just n times the corresponding cumulant of the corresponding geometric distribution. The derivative of the cumulant generating function is $g'(t)=n((1-p)^{-1}\cdot e^{-t}-1)^{-1}$. The first cumulants are $\kappa_1=g'(0)=n(p^{-1}-1)$ and $\kappa_2=g''(0)=\kappa_1 p^{-1}$. Substituting $p=(\mu n^{-1}+1)^{-1}$ gives $g'(t)=((\mu^{-1}+n^{-1})\cdot e^{-t}-n^{-1})^{-1}$ and κ1 = μ. Comparing these formulas to those of the binomial distributions explains the name "negative binomial distribution". The limiting case $n^{-1}=0$ is a Poisson distribution.
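These closed forms can be spot-checked directly from the probability mass function. A sketch for the binomial case (n = 5, p = 1/3 chosen arbitrarily), in exact arithmetic:

```python
from fractions import Fraction
from math import comb

n, p = 5, Fraction(1, 3)
pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

mu1 = sum(q * k for k, q in pmf.items())       # first raw moment
mu2 = sum(q * k**2 for k, q in pmf.items())    # second raw moment
kappa1 = mu1
kappa2 = mu2 - mu1**2

assert kappa1 == n * p              # kappa_1 = n p = 5/3
assert kappa2 == kappa1 * (1 - p)   # kappa_2 = kappa_1 (1 - p) = 10/9
```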

Introducing the variance-to-mean ratio $\varepsilon=\mu^{-1}\sigma^2=\kappa_1^{-1}\kappa_2, \,$

the above probability distributions share a unified formula for the derivative of the cumulant generating function:[citation needed] $g'(t)=\mu\cdot(1+\varepsilon\cdot (e^{-t}-1))^{-1}. \,$

The second derivative is $g''(t)=g'(t)\cdot(1+e^t\cdot (\varepsilon^{-1}-1))^{-1} \,$

confirming that the first cumulant is κ1 = g′(0) = μ and the second cumulant is κ2 = g′′(0) = μ·ε. The constant random variables X = μ have ε = 0. The binomial distributions have ε = 1 − p so that 0 < ε < 1. The Poisson distributions have ε = 1. The negative binomial distributions have $\varepsilon=p^{-1}$ so that ε > 1. Note the analogy to the classification of conic sections by eccentricity: circles ε = 0, ellipses 0 < ε < 1, parabolas ε = 1, hyperbolas ε > 1.
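The unified formula and its claimed second derivative can be checked symbolically. A sketch using `sympy`:

```python
import sympy as sp

t, mu, eps = sp.symbols('t mu varepsilon', positive=True)

g1 = mu / (1 + eps * (sp.exp(-t) - 1))   # unified g'(t)
g2 = sp.diff(g1, t)                      # actual second derivative

# the claimed closed form g''(t) = g'(t) * (1 + e^t (1/eps - 1))^(-1)
claimed = g1 / (1 + sp.exp(t) * (1/eps - 1))
assert sp.simplify(g2 - claimed) == 0

assert g1.subs(t, 0) == mu                         # kappa_1 = mu
assert sp.simplify(g2.subs(t, 0) - mu * eps) == 0  # kappa_2 = mu * eps
```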

## Cumulants of some continuous probability distributions

• For the normal distribution with expected value μ and variance σ2, the cumulant generating function is g(t) = μt + σ2t2/2. The first and second derivatives of the cumulant generating function are g '(t) = μ + σ2·t and g"(t) = σ2. The cumulants are κ1 = μ, κ2 = σ2, and κ3 = κ4 = ... = 0. The special case σ2 = 0 is a constant random variable X = μ.

## Some properties of the cumulant generating function

The cumulant generating function g(t) is convex. If g(t) is finite on a strip t1 < Re(t) < t2 with t1 < 0 < t2, then g(t) is analytic and infinitely differentiable on that strip. Moreover, for real t with t1 < t < t2, g(t) is strictly convex and g′(t) is strictly increasing.[citation needed]

## Some properties of cumulants

### Invariance and equivariance

The first cumulant is shift-equivariant; all of the others are shift-invariant. This means that, if we denote by κn(X) the nth cumulant of the probability distribution of the random variable X, then for any constant c:

• $\kappa_1(X + c) = \kappa_1(X) + c ~ \text{ and}$
• $\kappa_n(X + c) = \kappa_n(X) ~ \text{ for } ~ n \ge 2.$

In other words, shifting a random variable (adding c) shifts the first cumulant (the mean) and doesn't affect any of the others.

### Homogeneity

The nth cumulant is homogeneous of degree n, i.e. if c is any constant, then $\kappa_n(cX)=c^n\kappa_n(X). \,$

If X and Y are independent random variables then κn(X + Y) = κn(X) + κn(Y).
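Both properties are easy to confirm on a small example. A pure-Python sketch of degree-n homogeneity (the distribution and the scale factor below are arbitrary illustrative choices):

```python
from fractions import Fraction

def first_three_cumulants(pmf):
    # kappa_1..kappa_3 from the raw moments of a finite distribution
    m1, m2, m3 = (sum(p * x**n for x, p in pmf.items()) for n in (1, 2, 3))
    return [m1, m2 - m1**2, m3 - 3*m2*m1 + 2*m1**3]

X = {0: Fraction(1, 4), 1: Fraction(1, 2), 3: Fraction(1, 4)}  # arbitrary pmf
c = 5
cX = {c * x: p for x, p in X.items()}  # distribution of c*X

kX, kcX = first_three_cumulants(X), first_three_cumulants(cX)
assert kcX == [c**n * k for n, k in zip((1, 2, 3), kX)]  # kappa_n(cX) = c^n kappa_n(X)
```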

### A negative result

Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which κm = κm+1 = ... = 0 for some m > 3, with the lower-order cumulants (orders 3 to m − 1) being non-zero. There are no such distributions. The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than 2.

### Cumulants and moments

The moment generating function is: $1+\sum_{n=1}^\infty \frac{\mu'_n t^n}{n!}=\exp\left(\sum_{n=1}^\infty \frac{\kappa_n t^n}{n!}\right) = \exp(g(t)).$

So the cumulant generating function is the logarithm of the moment generating function. The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.

The cumulants are related to the moments by the following recursion formula: $\kappa_n=\mu'_n-\sum_{m=1}^{n-1}{n-1 \choose m-1}\kappa_m \mu_{n-m}'.$
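This recursion translates directly into code. A pure-Python sketch, checked against the standard normal distribution (raw moments 0, 1, 0, 3, 0, 15), whose cumulants beyond the second all vanish:

```python
from math import comb

def cumulants_from_moments(mu):
    """mu[n-1] holds the raw moment mu'_n; returns [kappa_1, kappa_2, ...]
    via kappa_n = mu'_n - sum_{m=1}^{n-1} C(n-1, m-1) kappa_m mu'_{n-m}."""
    kappa = []
    for n in range(1, len(mu) + 1):
        k = mu[n - 1] - sum(comb(n - 1, m - 1) * kappa[m - 1] * mu[n - m - 1]
                            for m in range(1, n))
        kappa.append(k)
    return kappa

# raw moments of the standard normal: 0, 1, 0, 3, 0, 15
assert cumulants_from_moments([0, 1, 0, 3, 0, 15]) == [0, 1, 0, 0, 0, 0]
```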

The nth moment μ′n is an nth-degree polynomial in the first n cumulants: $\mu'_1=\kappa_1\,$ $\mu'_2=\kappa_2+\kappa_1^2\,$ $\mu'_3=\kappa_3+3\kappa_2\kappa_1+\kappa_1^3\,$ $\mu'_4=\kappa_4+4\kappa_3\kappa_1+3\kappa_2^2+6\kappa_2\kappa_1^2+\kappa_1^4\,$ $\mu'_5=\kappa_5+5\kappa_4\kappa_1+10\kappa_3\kappa_2 +10\kappa_3\kappa_1^2+15\kappa_2^2\kappa_1 +10\kappa_2\kappa_1^3+\kappa_1^5\,$ $\mu'_6=\kappa_6+6\kappa_5\kappa_1+15\kappa_4\kappa_2+15\kappa_4\kappa_1^2 +10\kappa_3^2+60\kappa_3\kappa_2\kappa_1+20\kappa_3\kappa_1^3+15\kappa_2^3 +45\kappa_2^2\kappa_1^2+15\kappa_2\kappa_1^4+\kappa_1^6.\,$

The coefficients are precisely those that occur in Faà di Bruno's formula.

The "prime" distinguishes the moments μ′n from the central moments μn. To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which κ1 appears as a factor: $\mu_1=0\,$ $\mu_2=\kappa_2\,$ $\mu_3=\kappa_3\,$ $\mu_4=\kappa_4+3\kappa_2^2\,$ $\mu_5=\kappa_5+10\kappa_3\kappa_2\,$ $\mu_6=\kappa_6+15\kappa_4\kappa_2+10\kappa_3^2+15\kappa_2^3.\,$

Likewise, the nth cumulant κn is an nth-degree polynomial in the first n non-central moments: $\kappa_1=\mu'_1\,$ $\kappa_2=\mu'_2-{\mu'_1}^2\,$ $\kappa_3=\mu'_3-3\mu'_2\mu'_1+2{\mu'_1}^3\,$ $\kappa_4=\mu'_4-4\mu'_3\mu'_1-3{\mu'_2}^2+12\mu'_2{\mu'_1}^2-6{\mu'_1}^4\,$ $\kappa_5=\mu'_5-5\mu'_4\mu'_1-10\mu'_3\mu'_2+20\mu'_3{\mu'_1}^2+30{\mu'_2}^2\mu'_1-60\mu'_2{\mu'_1}^3+24{\mu'_1}^5\,$ $\kappa_6=\mu'_6-6\mu'_5\mu'_1-15\mu'_4\mu'_2+30\mu'_4{\mu'_1}^2-10{\mu'_3}^2+120\mu'_3\mu'_2\mu'_1-120\mu'_3{\mu'_1}^3+30{\mu'_2}^3-270{\mu'_2}^2{\mu'_1}^2+360\mu'_2{\mu'_1}^4-120{\mu'_1}^6\,.$

To express the cumulants κn for n > 1 as functions of the central moments, drop from these polynomials all terms in which μ'1 appears as a factor: $\kappa_1=\mu'_1\,$ $\kappa_2=\mu_2\,$ $\kappa_3=\mu_3\,$ $\kappa_4=\mu_4-3\mu_2^2\,$ $\kappa_5=\mu_5-10\mu_3\mu_2\,$ $\kappa_6=\mu_6-15\mu_4\mu_2-10\mu_3^2+30\mu_2^3\,.$
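As a worked example of these formulas, consider a fair six-sided die, computed exactly:

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}        # fair die
mean = sum(p * x for x, p in pmf.items())             # 7/2

# central moments mu_2, mu_3, mu_4
mu2, mu3, mu4 = (sum(p * (x - mean)**n for x, p in pmf.items()) for n in (2, 3, 4))

kappa2 = mu2                 # 35/12
kappa3 = mu3                 # 0 (the distribution is symmetric)
kappa4 = mu4 - 3 * mu2**2    # negative: flatter than a normal distribution

assert (kappa2, kappa3, kappa4) == (Fraction(35, 12), 0, Fraction(-259, 24))
```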

### Cumulants and set-partitions

These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is $\mu'_n=\sum_\pi \prod_{B\in\pi}\kappa_{\left|B\right|}$

where

• π runs through the list of all partitions of a set of size n;
• "B $\in$ π" means B is one of the "blocks" into which the set is partitioned; and
• |B| is the size of the set B.

Thus each monomial is a constant times a product of cumulants in which the sum of the indices is n (e.g., in the term κ3 κ22 κ1, the sum of the indices is 3 + 2 + 2 + 1 = 8; this term appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). Each term corresponds to a partition of the integer n, and its coefficient is the number of partitions of a set of n members that collapse to that partition of the integer n when the members of the set become indistinguishable.
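The partition formula can be implemented directly by enumerating set partitions. A pure-Python sketch, checked against the case where every cumulant equals 1 (a Poisson distribution with mean 1), for which μ′n is the nth Bell number:

```python
def set_partitions(elements):
    """Yield all partitions of `elements` as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):          # add `first` to an existing block
            yield partition[:i] + [partition[i] + [first]] + partition[i + 1:]
        yield [[first]] + partition              # or start a new block

def moment_from_cumulants(n, kappa):
    """mu'_n = sum over partitions pi of prod over blocks B of kappa_{|B|}."""
    total = 0
    for partition in set_partitions(list(range(n))):
        term = 1
        for block in partition:
            term *= kappa[len(block) - 1]
        total += term
    return total

# with kappa_n = 1 for all n, the moments are the Bell numbers 1, 2, 5, 15, 52
assert [moment_from_cumulants(n, [1] * n) for n in range(1, 6)] == [1, 2, 5, 15, 52]
```

With every cumulant equal to 1, each partition contributes exactly 1, so the formula literally counts partitions, which is the Bell-number connection noted below.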

## Joint cumulants

The joint cumulant of several random variables X1, ..., Xn is defined by a similar cumulant generating function $g(t_1,t_2,\dots,t_n)=\log\left(E\left(\exp\left(\sum_{j=1}^n t_j X_j\right)\right)\right).$

A consequence is that $\kappa(X_1,\dots,X_n) =\sum_\pi (|\pi|-1)!(-1)^{|\pi|-1}\prod_{B\in\pi}E\left(\prod_{i\in B}X_i\right)$

where π runs through the list of all partitions of { 1, ..., n }, B runs through the list of all blocks of the partition π, and |π| is the number of parts in the partition. For example, $\kappa(X,Y,Z)=E(XYZ)-E(XY)E(Z)-E(XZ)E(Y)-E(YZ)E(X)+2E(X)E(Y)E(Z).\,$
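A sketch of this formula in pure Python, evaluated exactly on a small joint distribution (the pmf below is an arbitrary illustrative choice); for two variables it reduces to the covariance E(XY) − E(X)E(Y):

```python
from fractions import Fraction
from math import factorial, prod

def set_partitions(elements):
    """Yield all partitions of `elements` as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):
            yield partition[:i] + [partition[i] + [first]] + partition[i + 1:]
        yield [[first]] + partition

def joint_cumulant(pmf, indices):
    """Joint cumulant of the selected coordinates of a finite joint
    distribution {(x_1, ..., x_n): probability}, via the partition formula."""
    def E(block):  # E( prod_{i in block} X_i )
        return sum(p * prod(v[i] for i in block) for v, p in pmf.items())
    return sum(factorial(len(pi) - 1) * (-1)**(len(pi) - 1) * prod(E(B) for B in pi)
               for pi in set_partitions(list(indices)))

# a small dependent joint distribution of (X, Y) -- an illustrative choice
pmf = {(0, 0): Fraction(1, 2), (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 4)}

EX, EY = joint_cumulant(pmf, [0]), joint_cumulant(pmf, [1])
EXY = sum(p * x * y for (x, y), p in pmf.items())
assert joint_cumulant(pmf, [0, 1]) == EXY - EX * EY   # covariance
```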

If any of these random variables are identical, e.g. if X = Y, then the same formulae apply, e.g. $\kappa(X,X,Z)=E(X^2Z)-2E(XZ)E(X)-E(X^2)E(Z)+2E(X)^2E(Z),\,$

although for such repeated variables there are more concise formulae. For zero-mean random variables, $\kappa(X,Y,Z)=E(XYZ).\,$ $\kappa(X,Y,Z,W) = E(XYZW)-E(XY)E(ZW)-E(XZ)E(YW)-E(XW)E(YZ).\,$

The joint cumulant of just one random variable is its expected value, and that of two random variables is their covariance. If one of the random variables is independent of all of the others, then any joint cumulant involving it and at least one other variable is zero. If all n random variables are the same, then the joint cumulant is the nth ordinary cumulant.

The combinatorial meaning of the expression of moments in terms of cumulants is easier to understand than that of cumulants in terms of moments: $E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i \in B).$

For example: $E(XYZ)=\kappa(X,Y,Z)+\kappa(X,Y)\kappa(Z)+\kappa(X,Z)\kappa(Y) +\kappa(Y,Z)\kappa(X)+\kappa(X)\kappa(Y)\kappa(Z).\,$

Another important property of joint cumulants is multilinearity: $\kappa(X+Y,Z_1,Z_2,\dots)=\kappa(X,Z_1,Z_2,\dots)+\kappa(Y,Z_1,Z_2,\dots).\,$

Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity $\operatorname{var}(X+Y)=\operatorname{var}(X) +2\operatorname{cov}(X,Y)+\operatorname{var}(Y)\,$

generalizes to cumulants: $\kappa_n(X+Y)=\sum_{j=0}^n {n \choose j} \kappa(\,\underbrace{X,\dots,X}_j,\underbrace{Y,\dots,Y}_{n-j}\,).\,$

### Conditional cumulants and the law of total cumulance

The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case n = 3, expressed in the language of (central) moments rather than that of cumulants, says $\mu_3(X)=E(\mu_3(X\mid Y))+\mu_3(E(X\mid Y)) +3\,\operatorname{cov}(E(X\mid Y),\operatorname{var}(X\mid Y)).$

The general result stated below first appeared in 1969 in The Calculation of Cumulants via Conditioning by David R. Brillinger in volume 21 of Annals of the Institute of Statistical Mathematics, pages 215–218.

In general, we have $\kappa(X_1,\dots,X_n)=\sum_\pi \kappa(\kappa(X_{\pi_1}\mid Y),\dots,\kappa(X_{\pi_b}\mid Y))$

where

• the sum is over all partitions π of the set { 1, ..., n } of indices, and
• π1, ..., πb are all of the "blocks" of the partition π; the expression $\kappa(X_{\pi_m}\mid Y)$ denotes the joint cumulant, conditional on Y, of the random variables whose indices lie in that block of the partition.

## Relation to statistical physics

In statistical physics many extensive quantities – that is, quantities proportional to the volume or size of a given system – are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be related to cumulants.

A system in equilibrium with a thermal bath at temperature T can occupy states of energy E, so the energy E can be considered a random variable. The partition function of the system is $Z(\beta) = \langle\exp(-\beta E)\rangle,\,$

where β = 1/(kT) and k is Boltzmann's constant and the notation $\langle A \rangle$ has been used rather than $\operatorname{E}(A)$ for the expectation value to avoid confusion with the energy, E. The Helmholtz free energy is then $F(\beta) = -\beta^{-1}\log Z \,$

and is clearly very closely related to the cumulant generating function for the energy. The free energy gives access to all of the thermodynamic properties of the system via its first, second, and higher-order derivatives, such as its internal energy, entropy, and specific heat. Because of the relationship between the free energy and the cumulant generating function, all these quantities are related to cumulants; e.g. the energy and specific heat are given by $E = \langle E \rangle_c$ $C= dE/dT = k \beta^2\langle E^2 \rangle_c = k \beta^2(\langle E^2\rangle - \langle E\rangle ^2)$

where $\langle E^2\rangle_c$ symbolizes the second cumulant of the energy. The free energy is often also a function of other variables such as the magnetic field or chemical potential μ, e.g. $\Omega=-\beta^{-1}\log(\langle \exp(-\beta E -\beta\mu N) \rangle),\,$

where N is the number of particles and Ω is the grand potential. Again the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of E and N.
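As an illustration (a hypothetical two-level system with energies 0 and Δ, sketched with `sympy`), the derivatives of log Z do yield the cumulants of the energy:

```python
import sympy as sp

beta, Delta = sp.symbols('beta Delta', positive=True)

# partition function of a two-level system with energies 0 and Delta
Z = 1 + sp.exp(-beta * Delta)
logZ = sp.log(Z)   # log Z is the cumulant-generating function of E in -beta

E_mean = -sp.diff(logZ, beta)       # first cumulant: internal energy <E>
E_var = sp.diff(logZ, beta, 2)      # second cumulant: <E^2> - <E>^2

# occupation probability of the excited level
p = sp.exp(-beta * Delta) / Z
assert sp.simplify(E_mean - Delta * p) == 0                 # <E> = Delta p
assert sp.simplify(E_var - Delta**2 * p * (1 - p)) == 0     # Bernoulli-type variance
```

The energy of this system is Δ times a Bernoulli variable, so its cumulants are Δⁿ times the Bernoulli cumulants, consistent with the homogeneity property above.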

## History

The history of cumulants is discussed by Hald.

Cumulants were first introduced by Thorvald N. Thiele, in 1889, who called them semi-invariants. They were first called cumulants in a 1932 paper by Ronald Fisher and John Wishart. Fisher was publicly reminded of Thiele's work by Neyman, who also noted earlier published citations of Thiele that had been brought to Fisher's attention. Stephen Stigler has said[citation needed] that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929,[citation needed] Fisher had called them cumulative moment functions. The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901.[citation needed] The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions, relating to a publication in 1927.[citation needed]

## Cumulants in generalized settings

### Formal cumulants

More generally, the cumulants of a sequence { mn : n = 1, 2, 3, ... }, not necessarily the moments of any probability distribution, are given by[citation needed] $1+\sum_{n=1}^\infty m_n t^n/n!=\exp\left(\sum_{n=1}^\infty\kappa_n t^n/n!\right) ,$

where the values of κn for n = 1, 2, 3, ... are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.

### Bell numbers

In combinatorics, the nth Bell number is the number of partitions of a set of size n. All of the cumulants of the sequence of Bell numbers are equal to 1.[citation needed] The Bell numbers are the moments of the Poisson distribution with expected value 1.[citation needed]

### Cumulants of a polynomial sequence of binomial type

For any sequence { κn : n = 1, 2, 3, ... } of scalars in a field of characteristic zero, being considered formal cumulants, there is a corresponding sequence { μ′n : n = 1, 2, 3, ... } of formal moments, given by the polynomials above.[clarification needed][citation needed] For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial \begin{align} \mu'_6 & = \kappa_6+6\kappa_5\kappa_1+15\kappa_4\kappa_2+15\kappa_4\kappa_1^2 +10\kappa_3^2+60\kappa_3\kappa_2\kappa_1 \\[6pt] & {}\quad + 20\kappa_3\kappa_1^3+15\kappa_2^3 +45\kappa_2^2\kappa_1^2+15\kappa_2\kappa_1^4+\kappa_1^6 \end{align}

make a new polynomial in these plus one additional variable x: \begin{align}p_6(x) & = \kappa_6 \,x + (6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 10\kappa_3^2)\,x^2 +(15\kappa_4\kappa_1^2+60\kappa_3\kappa_2\kappa_1+15\kappa_2^3)\,x^3 \\[6pt] & {}\quad +(45\kappa_2^2\kappa_1^2)\,x^4+(15\kappa_2\kappa_1^4)\,x^5 +(\kappa_1^6)\,x^6, \end{align}

and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on x. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.[citation needed]

This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.[citation needed]

### Free cumulants

In the identity[clarification needed] $E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i\in B)$

one sums over all partitions of the set { 1, ..., n }. If instead, one sums only over the noncrossing partitions, then one gets "free cumulants" rather than conventional cumulants treated above.[clarification needed] These play a central role in free probability theory. In that theory, rather than considering independence of random variables, defined in terms of Cartesian products of algebras of random variables, one considers instead "freeness" of random variables, defined in terms of free products of algebras rather than Cartesian products of algebras.[citation needed]

The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero. This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
