Covariance and contravariance of vectors

For other uses of "covariant" or "contravariant", see covariance and contravariance.

In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis from one coordinate system to another. When one coordinate system is just a rotation of the other, this distinction is invisible. However, when considering more general coordinate systems such as skew coordinates, curvilinear coordinates, and coordinate systems on differentiable manifolds, the distinction becomes critically important.

• For a vector (such as a direction vector or velocity vector) to be coordinate system invariant, the components of the vector must contra-vary with a change of basis to compensate. That is, the components must vary in the opposite way (the inverse transformation) as the change of basis. Vectors (as opposed to dual vectors) are said to be contravariant. Examples of contravariant vectors include the position of an object relative to an observer, or any derivative of position with respect to time, including velocity, acceleration, and jerk. In Einstein notation, contravariant components have upper indices as in
$\mathbf{v} = v^i \mathbf{e}_i \,$
• For a dual vector (such as a gradient) to be coordinate system invariant, the components of the vector must co-vary with a change of basis to maintain the same meaning. That is, the components must vary by the same transformation as the change of basis. Dual vectors (as opposed to vectors) are said to be covariant. Examples of covariant vectors generally appear when taking a gradient of a function (effectively dividing by a vector). In Einstein notation, covariant components have lower indices as in
$\mathbf{v} = v_i \mathbf{e}^i \,$

In physics, vectors often have units of distance, or of distance times some other unit (such as velocity), whereas covectors have units of inverse distance, or of inverse distance times some other unit. The distinction between covariant and contravariant vectors is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and dual vector components. The valence or type of a tensor is the number of covariant and contravariant terms. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although modern differential geometry uses more sophisticated index-free methods to represent tensors.

The terms covariant and contravariant were introduced by J.J. Sylvester in 1853 in order to study algebraic invariant theory. In this context, for instance, a system of simultaneous equations is contravariant in the variables. The use of both terms in the modern context of multilinear algebra is a specific example of corresponding notions in category theory.

Introduction

In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list (or tuple) of numbers such as

$(v_1,v_2,v_3). \,$

This list of numbers depends on the choice of coordinate system. For instance, if the vector represents position with respect to an observer (position vector), then the coordinate system may be obtained from a system of rigid rods, or reference axes, along which the components v1, v2, and v3 are measured. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vectors will transform in a certain way in passing from one coordinate system to another.

A contravariant vector is required to have components that "transform in the same way as the coordinates" (the opposite way as the reference axes) under changes of coordinates such as rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector make a change that cancels the change in the spatial axes, in the same way that coordinates change. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector, like the coordinates, would reduce in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that a coordinate vector x is transformed to x′ = Mx, then a contravariant vector v must be similarly transformed via v′ = Mv. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities. For example, if v consists of the x-, y-, and z-components of velocity, then v is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include displacement, velocity, momentum, force, and acceleration.
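The rule x′ = Mx, v′ = Mv can be sketched numerically; a minimal illustration (assuming NumPy, with M chosen as a stretch of the first coordinate axis):

```python
import numpy as np

# Stretch the x-axis by a factor of 2 (an invertible coordinate change M).
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

x = np.array([1.0, 3.0])   # coordinates of a point
v = np.array([4.0, 5.0])   # components of a velocity (contravariant) vector

# Both the coordinates and the contravariant components transform with M.
x_new = M @ x   # [2., 3.]
v_new = M @ v   # [8., 5.]
```

The box dimensions of the counterexample would be left unchanged by a rotation M, which is exactly why they fail this test.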

By contrast, a covariant vector has components that change oppositely to the coordinates or, equivalently, transform like the reference axes. For instance, the components of the gradient vector of a function

$\nabla f = \frac{\partial f}{\partial x_1}\widehat{x}_1+\frac{\partial f}{\partial x_2}\widehat{x}_2+\frac{\partial f}{\partial x_3}\widehat{x}_3$

transform like the reference axes themselves. When only rotations of the spatial axes are considered, the components of contravariant and covariant vectors behave in the same way. It is only when other transformations are allowed that the difference becomes apparent.
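The contrasting behavior of the gradient can be seen in the same stretch setting; a small sketch assuming NumPy, using the hypothetical function f(x, y) = x + y:

```python
import numpy as np

# Coordinate stretch: (u, v) = (2x, y), i.e. x' = M x with M = diag(2, 1).
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# f(x, y) = x + y has old-coordinate gradient (1, 1).
grad_old = np.array([1.0, 1.0])

# Gradient components transform with the inverse transpose of M,
# i.e. oppositely to the coordinates (like the reference axes):
grad_new = np.linalg.inv(M).T @ grad_old   # [0.5, 1.]
```

As a direct check: in the new coordinates f = u/2 + v, so ∂f/∂u = 1/2 and ∂f/∂v = 1, matching `grad_new`.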

Definition

The general formulation of covariance and contravariance refers to how the components of a coordinate vector transform under a change of basis (passive transformation). Thus let V be a vector space of dimension n over the field of scalars S, and let each of f = (X1,...,Xn) and f' = (Y1,...,Yn) be a basis of V.[note 1] Also, let the change of basis from f to f′ be given by

$\mathbf{f}\mapsto \mathbf{f}' = \left(\sum_i a^i_1X_i,\dots,\sum_i a^i_nX_i\right) = \mathbf{f}A$

(1)

for some invertible n×n matrix A with entries $a^i_j$. Here, each vector Yj of the f' basis is a linear combination of the vectors Xi of the f basis, so that

$Y_j=\sum_i a^i_jX_i.$

Contravariant transformation

A vector v in V is expressed uniquely as a linear combination of the elements of the f basis as

$v = \sum_i v^i[\mathbf{f}]X_i,$

(2)

where $v^i[\mathbf{f}]$ are scalars in S known as the components of v in the f basis. Denote the column vector of components of v by v[f]:

$\mathbf{v}[\mathbf{f}] = \begin{bmatrix}v^1[\mathbf{f}]\\v^2[\mathbf{f}]\\\vdots\\v^n[\mathbf{f}]\end{bmatrix}$

so that (2) can be rewritten as a matrix product

$v = \mathbf{f}\, \mathbf{v}[\mathbf{f}].$

The vector v may also be expressed in terms of the f' basis, so that

$v = \mathbf{f'}\, \mathbf{v}[\mathbf{f'}].$

However, since the vector v itself is invariant under the choice of basis,

$\mathbf{f}\, \mathbf{v}[\mathbf{f}] = v = \mathbf{f'}\, \mathbf{v}[\mathbf{f'}].$

The invariance of v combined with the relationship (1) between f and f' implies that

$\mathbf{f}\, \mathbf{v}[\mathbf{f}] = \mathbf{f}A\, \mathbf{v}[\mathbf{f}A],$

giving the transformation rule

$\mathbf{v}[\mathbf{f}A] = A^{-1}\mathbf{v}[\mathbf{f}].$

In terms of components,

$v^i[\mathbf{f}A] = \sum_j \tilde{a}^i_jv^j[\mathbf{f}]$

where the coefficients $\tilde{a}^i_j$ are the entries of the inverse matrix of A.

Because the components of the vector v transform with the inverse of the matrix A, these components are said to transform contravariantly under a change of basis.

The way A relates the two pairs is depicted in the following informal diagram using an arrow. The reversal of the arrow indicates a contravariant change:

$\mathbf{f}\longrightarrow \mathbf{f'}$
$v[\mathbf{f}]\longleftarrow v[\mathbf{f'}]$
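The transformation rule and the invariance of v can be checked numerically; a minimal sketch assuming NumPy, with a randomly generated (almost surely invertible) matrix A:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 3))       # basis vectors of V as the columns of f
A = rng.standard_normal((3, 3))       # invertible change-of-basis matrix
v_f = np.array([1.0, 2.0, 3.0])       # components v[f] of v in the f basis

f_new = f @ A                         # new basis f' = f A
v_new = np.linalg.inv(A) @ v_f        # contravariant rule: v[fA] = A^{-1} v[f]

# The vector itself is unchanged: f v[f] == f' v[f'].
assert np.allclose(f @ v_f, f_new @ v_new)
```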

Covariant transformation

A linear functional α on V is expressed uniquely in terms of its components (scalars in S) in the f basis as

$\alpha_i[\mathbf{f}] = \alpha(X_i), \quad i=1,2,\dots,n.$

These components are the action of α on the basis vectors Xi of the f basis.

Under the change of basis from f to f' (1), the components transform so that

$\begin{array} {rcl} \alpha_i[\mathbf{f}A] & = & \alpha(Y_i) \\ & = & \alpha\left(\sum_j a^j_i X_j\right) \\ & = & \sum_j a^j_i \alpha(X_j) \\ & = & \sum_j a^j_i \alpha_j[\mathbf{f}] \end{array}.$

(3)

Denote the row vector of components of α by α[f]:

$\mathbf{\alpha}[\mathbf{f}] = \begin{bmatrix}\alpha_1[\mathbf{f}]&\alpha_2[\mathbf{f}]&\dots&\alpha_n[\mathbf{f}]\end{bmatrix}$

so that (3) can be rewritten as the matrix product

$\alpha[\mathbf{f}A] = \alpha[\mathbf{f}]A.$

Because the components of the linear functional α transform with the matrix A, these components are said to transform covariantly under a change of basis.

The way A relates the two pairs is depicted in the following informal diagram using an arrow. A covariant relationship is indicated since the arrows travel in the same direction:

$\mathbf{f}\longrightarrow \mathbf{f'}$
$\alpha[\mathbf{f}]\longrightarrow \alpha[\mathbf{f'}]$

Had a column vector representation been used instead, the transformation law would be the transpose

$\alpha^\mathrm{T}[\mathbf{f}A] = A^\mathrm{T}\alpha^\mathrm{T}[\mathbf{f}].$
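The two opposite transformation rules combine so that the scalar α(v) is independent of the basis; a small numerical check assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))         # invertible change-of-basis matrix
alpha_f = np.array([[1.0, -2.0, 0.5]])  # row vector of components of alpha
v_f = np.array([[3.0], [1.0], [2.0]])   # column vector of components of v

alpha_new = alpha_f @ A                 # covariant rule:     alpha[fA] = alpha[f] A
v_new = np.linalg.inv(A) @ v_f          # contravariant rule: v[fA] = A^{-1} v[f]

# The pairing alpha(v) is the same in either basis, since A A^{-1} cancels.
assert np.allclose(alpha_f @ v_f, alpha_new @ v_new)
```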

Coordinates

The choice of basis f on the vector space V defines uniquely a set of coordinate functions on V, by means of

$x^i[\mathbf{f}](v) = v^i[\mathbf{f}].$

The coordinates on V are therefore contravariant in the sense that

$x^i[\mathbf{f}A] = \sum_{k=1}^n \tilde{a}^i_kx^k[\mathbf{f}].$

Conversely, a system of n quantities vi that transform like the coordinates xi on V defines a contravariant vector. A system of n quantities that transform oppositely to the coordinates is then a covariant vector.

This formulation of contravariance and covariance is often more natural in applications in which there is a coordinate space (a manifold) on which vectors live as tangent vectors or cotangent vectors. Given a local coordinate system xi on the manifold, the reference axes for the coordinate system are the vector fields

$X_1 = \frac{\partial}{\partial x^1},\dots,X_n=\frac{\partial}{\partial x^n}.$

This gives rise to the frame f = (X1,...,Xn) at every point of the coordinate patch.

If yi is a different coordinate system and

$Y_1=\frac{\partial}{\partial y^1},\dots,Y_n=\frac{\partial}{\partial y^n},$

then the frame f' is related to the frame f by the inverse of the Jacobian matrix of the coordinate transition:

$\mathbf{f}' = \mathbf{f}J^{-1},\quad J=\left(\frac{\partial y^i}{\partial x^j}\right)_{i,j=1}^n.$

Or, in indices,

$\frac{\partial}{\partial y^i} = \sum_{j=1}^n\frac{\partial x^j}{\partial y^i}\frac{\partial}{\partial x^j}.$

A tangent vector is by definition a vector that is a linear combination of the coordinate partials $\partial/\partial x^i$. Thus a tangent vector is defined by

$v = \sum_{i=1}^n v^i[\mathbf{f}] X_i = \mathbf{f}\ \mathbf{v}[\mathbf{f}].$

Such a vector is contravariant with respect to change of frame. Under changes in the coordinate system, one has

$\mathbf{v}[\mathbf{f}'] = \mathbf{v}[\mathbf{f}J^{-1}] = J\, \mathbf{v}[\mathbf{f}].$

Therefore the components of a tangent vector transform via

$v^i[\mathbf{f}'] = \sum_{j=1}^n \frac{\partial y^i}{\partial x^j}v^j[\mathbf{f}].$

Accordingly, a system of n quantities vi depending on the coordinates that transform in this way on passing from one coordinate system to another is called a contravariant vector.
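The Jacobian rule can be illustrated concretely; a sketch assuming NumPy, taking the x^i to be Cartesian coordinates and y^i = (r, θ) to be polar coordinates:

```python
import numpy as np

# A point and a tangent vector, both given in Cartesian coordinates (x, y).
x, y = 3.0, 4.0
v_cart = np.array([1.0, 2.0])

# Jacobian of the transition (r, theta) = (sqrt(x^2 + y^2), atan2(y, x)).
r2 = x**2 + y**2
r = np.sqrt(r2)
J = np.array([[x / r,   y / r],      # dr/dx,     dr/dy
              [-y / r2, x / r2]])    # dtheta/dx, dtheta/dy

# Contravariant rule: v^i[f'] = (dy^i/dx^j) v^j[f].
v_polar = J @ v_cart
```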

Covariant and contravariant components of a vector

In a Euclidean space V, there is little distinction between covariant and contravariant vectors, because the dot product allows for covectors to be identified with vectors. That is, a vector v determines uniquely a covector α via

$\alpha(w) = v\cdot w \,$

for all vectors w. Conversely, each covector α determines a unique vector v by this equation. Because of this identification of vectors with covectors, one may speak of the covariant components or contravariant components of a vector, that is, they are just representations of the same vector using reciprocal bases.

Given a basis f = (X1,...,Xn) of V, there is a unique reciprocal basis f# = (Y1,...,Yn) of V determined by requiring

$Y^i \cdot X_j = \delta^i_j,$

the Kronecker delta. In terms of these bases, any vector v can be written in two ways:

\begin{align} v &= \sum_i v^i[\mathbf{f}]X_i = \mathbf{f}\,\mathbf{v}[\mathbf{f}]\\ &=\sum_i v_i[\mathbf{f}]Y^i = \mathbf{f}^\sharp\mathbf{v}^\sharp[\mathbf{f}]. \end{align}

The components $v^i[\mathbf{f}]$ are the contravariant components of the vector v in the basis f, and the components $v_i[\mathbf{f}]$ are the covariant components of v in the basis f. The terminology is justified because under a change of basis,

$\mathbf{v}[\mathbf{f}A] = A^{-1}\mathbf{v}[\mathbf{f}],\quad \mathbf{v}^\sharp[\mathbf{f}A] = A^T\mathbf{v}^\sharp[\mathbf{f}].$
The contravariant components of a vector are obtained by projecting onto the coordinate axes. The covariant components are obtained by projecting onto the normal lines to the coordinate hyperplanes.

Euclidean plane

In the Euclidean plane, the dot product allows for covectors to be identified with vectors. If $\mathbf{e}_1,\mathbf{e}_2$ is a basis, then the dual basis $\mathbf{e}^1,\mathbf{e}^2$ satisfies

\begin{align} \mathbf{e}^1\cdot\mathbf{e}_1=1, &\quad\mathbf{e}^1\cdot\mathbf{e}_2=0\\ \mathbf{e}^2\cdot\mathbf{e}_1=0, &\quad \mathbf{e}^2\cdot\mathbf{e}_2=1. \end{align}

Thus, $\mathbf{e}^1$ and $\mathbf{e}_2$ are perpendicular to each other, as are $\mathbf{e}^2$ and $\mathbf{e}_1$, and the lengths of $\mathbf{e}^1$ and $\mathbf{e}^2$ are normalized against $\mathbf{e}_1$ and $\mathbf{e}_2$, respectively.

Example

For example,[1] suppose that we are given a basis e1, e2 consisting of a pair of vectors making a 45° angle with one another, such that e1 has length 2 and e2 has length 1. Then the dual basis vectors are given as follows:

• $\mathbf{e}^2$ is the result of rotating $\mathbf{e}_1$ through an angle of 90° (where the sense is measured by assuming the pair $\mathbf{e}_1$, $\mathbf{e}_2$ to be positively oriented), and then rescaling so that $\mathbf{e}^2\cdot\mathbf{e}_2 = 1$ holds.
• $\mathbf{e}^1$ is the result of rotating $\mathbf{e}_2$ through an angle of 90°, and then rescaling so that $\mathbf{e}^1\cdot\mathbf{e}_1 = 1$ holds.

Applying these rules, we find

$\mathbf{e}^1 = \frac{1}{2}\mathbf{e}_1 - \frac{1}{\sqrt{2}}\mathbf{e}_2$

and

$\mathbf{e}^2 = -\frac{1}{\sqrt{2}}\mathbf{e}_1+2\mathbf{e}_2.$

Thus the change of basis matrix in going from the original basis to the reciprocal basis is

$R = \begin{bmatrix}1/2 & -1/\sqrt{2}\\ -1/\sqrt{2} & 2 \end{bmatrix},$

since

$[\mathbf{e}^1\ \mathbf{e}^2] = [\mathbf{e}_1\ \mathbf{e}_2]\begin{bmatrix}1/2 & -1/\sqrt{2}\\ -1/\sqrt{2} & 2 \end{bmatrix}.$

For instance, the vector

$v = \frac{3}{2}\mathbf{e}_1 + 2\mathbf{e}_2$

is a vector with contravariant components

$v^1 = \frac{3}{2},\quad v^2 = 2.$

The covariant components are obtained by equating the two expressions for the vector v:

$v = v_1\mathbf{e}^1 + v_2\mathbf{e}^2 = v^1\mathbf{e}_1+v^2\mathbf{e}_2$

so

\begin{align} \begin{bmatrix}v_1\\ v_2\end{bmatrix} &= R^{-1}\begin{bmatrix}v^1\\ v^2\end{bmatrix} \\ &= \begin{bmatrix}4&\sqrt{2}\\ \sqrt{2}&1\end{bmatrix}\begin{bmatrix}v^1\\ v^2\end{bmatrix} = \begin{bmatrix}6+2\sqrt{2}\\2+3/\sqrt{2}\end{bmatrix}\end{align}.
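The worked example can be verified numerically; a sketch assuming NumPy, realizing the basis concretely as e₁ = (2, 0) and e₂ = (1/√2, 1/√2), which has the stated lengths and 45° angle:

```python
import numpy as np

e1 = np.array([2.0, 0.0])                 # length 2
e2 = np.array([1.0, 1.0]) / np.sqrt(2.0)  # length 1, at 45 degrees to e1
E = np.column_stack([e1, e2])

G = E.T @ E            # Gram matrix [[4, sqrt(2)], [sqrt(2), 1]]
R = np.linalg.inv(G)   # the matrix R from the text

# Dual basis [e^1 e^2] = [e1 e2] R; check the stated coefficients of e^1.
E_dual = E @ R
assert np.allclose(E_dual[:, 0], 0.5 * e1 - e2 / np.sqrt(2.0))

# Covariant components of v = (3/2) e1 + 2 e2 via R^{-1} = G:
v_contra = np.array([1.5, 2.0])
v_cov = G @ v_contra   # [6 + 2*sqrt(2), 2 + 3/sqrt(2)]
assert np.allclose(v_cov, [6 + 2 * np.sqrt(2), 2 + 3 / np.sqrt(2)])
```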

Three-dimensional Euclidean space

In three-dimensional Euclidean space, one can also determine explicitly the dual basis to a given set of basis vectors e1, e2, e3 of E3 that are not necessarily orthogonal, nor of unit norm. The contravariant (dual) basis vectors are:

$\mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)} ; \qquad \mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{\mathbf{e}_2 \cdot (\mathbf{e}_3 \times \mathbf{e}_1)}; \qquad \mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{\mathbf{e}_3 \cdot (\mathbf{e}_1 \times \mathbf{e}_2)}.$

Even when the ei and ei are not orthonormal, they are still mutually dual:

$\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j.$

Then the contravariant coordinates of any vector v can be obtained by the dot product of v with the contravariant basis vectors:

$q^1 = \mathbf{v} \cdot \mathbf{e}^1; \qquad q^2 = \mathbf{v} \cdot \mathbf{e}^2; \qquad q^3 = \mathbf{v} \cdot \mathbf{e}^3. \,$

Likewise, the covariant components of v can be obtained from the dot product of v with covariant basis vectors, viz.

$q_1 = \mathbf{v} \cdot \mathbf{e}_1; \qquad q_2 = \mathbf{v} \cdot \mathbf{e}_2; \qquad q_3 = \mathbf{v} \cdot \mathbf{e}_3. \,$

Then v can be expressed in two (reciprocal) ways, viz.

$\mathbf{v} = q_i \mathbf{e}^i = q_1 \mathbf{e}^1 + q_2 \mathbf{e}^2 + q_3 \mathbf{e}^3 \,$

or

$\mathbf{v} = q^i \mathbf{e}_i = q^1 \mathbf{e}_1 + q^2 \mathbf{e}_2 + q^3 \mathbf{e}_3. \,$

Combining the above relations, we have

$\mathbf{v} = (\mathbf{v} \cdot \mathbf{e}_i) \mathbf{e}^i = (\mathbf{v} \cdot \mathbf{e}^i) \mathbf{e}_i \,$

and we can convert between contravariant and covariant components with

$q_i = \mathbf{v}\cdot \mathbf{e}_i = (q^j \mathbf{e}_j)\cdot \mathbf{e}_i = (\mathbf{e}_j\cdot\mathbf{e}_i) q^j \,$

and

$q^i = \mathbf{v}\cdot \mathbf{e}^i = (q_j \mathbf{e}^j)\cdot \mathbf{e}^i = (\mathbf{e}^j\cdot\mathbf{e}^i) q_j. \,$

The indices of covariant coordinates, vectors, and tensors are subscripts. If the basis vectors are orthonormal, then the covariant and contravariant basis vectors coincide, so there is no need to distinguish between the covariant and contravariant coordinates.
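The cross-product construction and the two reciprocal expansions can be checked numerically; a sketch assuming NumPy, with an arbitrarily chosen non-orthogonal, non-unit basis:

```python
import numpy as np

# A non-orthogonal, non-unit basis of E^3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([1.0, 1.0, 1.0])

vol = e1 @ np.cross(e2, e3)               # e1 . (e2 x e3)
E_dual = np.array([np.cross(e2, e3),      # rows are e^1, e^2, e^3
                   np.cross(e3, e1),
                   np.cross(e1, e2)]) / vol

E = np.array([e1, e2, e3])                # rows are e_1, e_2, e_3
# Mutual duality: e^i . e_j = delta^i_j.
assert np.allclose(E_dual @ E.T, np.eye(3))

v = np.array([2.0, 3.0, 4.0])
q_contra = E_dual @ v     # q^i = v . e^i
q_cov = E @ v             # q_i = v . e_i

# v is recovered from either reciprocal expansion:
assert np.allclose(q_contra @ E, v)       # sum_i q^i e_i
assert np.allclose(q_cov @ E_dual, v)     # sum_i q_i e^i
```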

General Euclidean spaces

More generally, in an n-dimensional Euclidean space V, if a basis is

$\mathbf{e}_1,\dots,\mathbf{e}_n$,

the reciprocal basis is given by

$\mathbf{e}^i=e^{ij}\mathbf{e}_j$

where the coefficients eij are the entries of the inverse matrix of

$e_{ij} = \mathbf{e}_i\cdot\mathbf{e}_j.$

Indeed, we then have

$\mathbf{e}^i\cdot\mathbf{e}_k=e^{ij}\mathbf{e}_j\cdot\mathbf{e}_k=e^{ij}e_{jk} = \delta^i_k.$

The covariant and contravariant components of any vector

$\mathbf{v} = q_i \mathbf{e}^i = q^i \mathbf{e}_i \,$

are related as above by

$q_i = \mathbf{v}\cdot \mathbf{e}_i = (q^j \mathbf{e}_j)\cdot \mathbf{e}_i = q^je_{ji}$

and

$q^i = \mathbf{v}\cdot \mathbf{e}^i = (q_j\mathbf{e}^j)\cdot \mathbf{e}^i = q_je^{ji}. \,$
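Lowering with e_{ij} and raising with e^{ij} are mutually inverse operations; a small numerical sketch assuming NumPy, with a randomly generated basis of R^4:

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.standard_normal((4, 4))    # columns are a basis e_1, ..., e_4 of R^4

g = E.T @ E                        # metric components e_ij = e_i . e_j
g_inv = np.linalg.inv(g)           # entries e^ij of the inverse matrix

q_contra = np.array([1.0, -2.0, 0.5, 3.0])
q_cov = g @ q_contra               # lowering: q_i = e_ij q^j
q_back = g_inv @ q_cov             # raising:  q^i = e^ij q_j

# Raising undoes lowering, so the original components are recovered.
assert np.allclose(q_back, q_contra)
```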

Informal usage

In the field of physics, the adjective covariant is often used informally as a synonym for invariant. For example, the Schrödinger equation does not keep its written form under the coordinate transformations of special relativity. Thus, a physicist might say that the Schrödinger equation is not covariant. In contrast, the Klein-Gordon equation and the Dirac equation do keep their written form under these coordinate transformations. Thus, a physicist might say that these equations are covariant.

Despite the dominant usage of "covariant", it is more accurate to say that the Klein-Gordon and Dirac equations are invariant, and that the Schrödinger equation is not invariant. Additionally, to remove ambiguity, the transformation with respect to which invariance is evaluated should be indicated. Continuing with the above example, neither the Klein-Gordon equation nor the Dirac equation is invariant under arbitrary coordinate transformations (e.g. those of general relativity), so the unambiguous description of these equations is that they are invariant with respect to the coordinate transformations of special relativity.

Because the components of vectors are contravariant and those of covectors are covariant, the vectors themselves are often referred to as being contravariant and the covectors as covariant. This usage is not universal, however, since vectors push forward – are covariant under diffeomorphism – and covectors pull back – are contravariant under diffeomorphism. See Einstein notation for details.

Use in tensor analysis

The distinction between covariance and contravariance is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and dual vector components. The valence of a tensor is the number of covariant and contravariant terms, and in Einstein notation, covariant components have lower indices, while contravariant components have upper indices. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although modern differential geometry uses more sophisticated index-free methods to represent tensors.

In tensor analysis, a covariant vector varies more or less reciprocally to a corresponding contravariant vector. Expressions for lengths, areas and volumes of objects in the vector space can then be given in terms of tensors with covariant and contravariant indices. Under simple expansions and contractions of the coordinates, the reciprocity is exact; under affine transformations the components of a vector intermingle on going between covariant and contravariant expression.

On a manifold, a tensor field will typically have multiple indices, of two sorts. By a widely followed convention, covariant indices are written as lower indices, whereas contravariant indices are upper indices. When the manifold is equipped with a metric, covariant and contravariant indices become very closely related to one another. Contravariant indices can be turned into covariant indices by contracting with the metric tensor, and covariant indices can be turned into contravariant indices by contracting with the (matrix) inverse of the metric tensor. Note that in general, no such relation exists in spaces not endowed with a metric tensor. Furthermore, from a more abstract standpoint, a tensor is simply "there" and its components of either kind are only calculational artifacts whose values depend on the chosen coordinates.

The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in the tangent bundle as well as the cotangent bundle.

A contravariant vector is one which transforms like $\frac{dx^{\mu}}{d\tau}$, where $x^{\mu} \!$ are the coordinates of a particle at its proper time $\tau \!$. A covariant vector is one which transforms like $\frac{\partial \phi}{\partial x^{\mu}}$, where $\phi \!$ is a scalar field.

Algebra and geometry

In category theory, there are covariant functors and contravariant functors. The dual space of a vector space is a standard example of a contravariant functor. Some constructions of multilinear algebra are of 'mixed' variance, which prevents them from being functors.

In geometry, the same map in/map out distinction is helpful in assessing the variance of constructions. A tangent vector to a smooth manifold M is, to begin with, a curve mapping smoothly into M and passing through a given point P. It is therefore covariant, with respect to smooth mappings of M. A cotangent vector, or 1-form, is in the same way constructed from a smooth mapping from M to the real line, near P. It is in the cotangent bundle, built up from the dual spaces of the tangent spaces. Its components with respect to a local basis of one-forms dxi will be covariant; but one-forms and differential forms in general are contravariant, in the sense that they pull back under smooth mappings. This is crucial to how they are applied; for example, a differential form can be restricted to any submanifold, while this does not make the same sense for a field of tangent vectors.

Covariant and contravariant components transform in different ways under coordinate transformations. By considering a coordinate transformation on a manifold as a map from the manifold to itself, the transformation of covariant indices of a tensor are given by a pullback, and the transformation properties of the contravariant indices is given by a pushforward.

Notes

1. ^ A basis f may here profitably be viewed as a linear isomorphism from Rn to V. Regarding f as a row vector whose entries are the elements of the basis, the associated linear isomorphism is then $\mathbf{x}\mapsto \mathbf{f}\mathbf{x}.$

References

1. ^ Bowen, Ray (2008), Introduction to Vectors and Tensors, Dover, pp. 78, 79, 81.
• Arfken, George B.; Weber, Hans J. (2005), Mathematical Methods for Physicists (6th ed.), San Diego: Harcourt, ISBN 0-12-059876-0.
• Dodson, C. T. J.; Poston, T. (1991), Tensor geometry, Graduate Texts in Mathematics, 130 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-52018-4, MR1223091.
• Greub, Werner Hildbert (1967), Multilinear algebra, Die Grundlehren der Mathematischen Wissenschaften, Band 136, New York: Springer-Verlag, MR0224623.
• Sternberg, Shlomo (1983), Lectures on differential geometry, New York: Chelsea, ISBN 978-0-8284-0316-0.
• Sylvester, J. J. (1853), "On a Theory of the Syzygetic Relations of Two Rational Integral Functions, Comprising an Application to the Theory of Sturm's Functions, and That of the Greatest Algebraical Common Measure", Philosophical Transactions of the Royal Society of London (The Royal Society) 143: 407–548, doi:10.1098/rstl.1853.0018, JSTOR 108572.
