Eigenvalue, eigenvector and eigenspace

In mathematics, given a linear transformation, an eigenvector of that linear transformation is a nonzero vector which, when that transformation is applied to it, changes in length, but not in direction.

For each eigenvector of a linear transformation, there is a corresponding scalar value called an eigenvalue for that vector, which determines the amount the eigenvector is scaled under the linear transformation. For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation for a particular eigenvalue is the set (linear span) of the eigenvectors associated to this eigenvalue, together with the zero vector (which has no direction).

In linear algebra, every linear transformation between finite-dimensional vector spaces can be given by a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are discussed below.

These concepts play a major role in several branches of both pure and applied mathematics — appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.

Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of "direction" loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract "direction" is unchanged by a given linear transformation, the prefix "eigen" is used, as in "eigenfunction", "eigenmode", "eigenstate", and "eigenfrequency".

History

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes. As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix (see Hawkins 1975, §2). In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions (see Hawkins 1975, §3). Cauchy also coined the term "racine caractéristique" (characteristic root) for what is now called "eigenvalue"; his term survives in "characteristic equation" (see Kline 1972, pp. 807-808).

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book "Théorie analytique de la chaleur" (see Kline 1972, p. 673). Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues. This was extended by Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Clebsch found the corresponding result for skew-symmetric matrices. Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.

In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called "Sturm-Liouville theory" (see Kline 1972, pp. 715-716). Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later (see Kline 1972, pp. 706-707).

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices (see Kline 1972, p. 1063). He was the first to use the German word "eigen" to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own", "peculiar to", "characteristic", or "individual", emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today (see Aldrich 2006).

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis [J. G. F. Francis, "The QR Transformation, I" (part 1), "The Computer Journal", vol. 4, no. 3, pp. 265-271 (1961); "The QR Transformation, II" (part 2), "The Computer Journal", vol. 4, no. 4, pp. 332-345 (1962)] [John G. F. Francis (born 1934 in London; as of 2007 living in Hove, England, near Brighton) devised the "QR transformation" for computing the eigenvalues of matrices. In 1954 he worked for the National Research Development Corporation (NRDC), attended Cambridge University in 1955-1956, and then returned to the NRDC as assistant to Christopher Strachey, at which time he devised the QR transformation. In 1961 he left the NRDC to work at Ferranti Corporation, Ltd. and then at the University of Sussex, and subsequently held positions with various industrial organizations and consultancies; his interests encompassed artificial intelligence, computer languages, and systems engineering. He is currently retired. See: http://www-sbras.nsc.ru/mathpub/na-net/db/showfile.phtml?v07n34.html#1] and Vera Kublanovskaya [Vera N. Kublanovskaya, "On some algorithms for the solution of the complete eigenvalue problem", "USSR Computational Mathematics and Mathematical Physics", vol. 3, pp. 637-657 (1961); also published in "Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki", vol. 1, no. 4, pp. 555-570 (1961)] in 1961 (see Golub & van Loan 1996, §7.3; Meyer 2000, §7.3).
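
As a concrete illustration of the idea behind von Mises' method, the following is a minimal sketch of power iteration in Python with NumPy; it is an illustration written for this article, not code from the sources cited, and the matrix and tolerance are arbitrary choices.

```python
import numpy as np

def power_iteration(A, num_iter=1000, tol=1e-10):
    """Estimate the dominant eigenvalue/eigenvector of A by repeated multiplication."""
    b = np.random.default_rng(0).standard_normal(A.shape[0])
    eigenvalue = 0.0
    for _ in range(num_iter):
        Ab = A @ b
        b_next = Ab / np.linalg.norm(Ab)       # re-normalize to avoid overflow
        new_eigenvalue = b_next @ A @ b_next   # Rayleigh quotient estimate
        if abs(new_eigenvalue - eigenvalue) < tol:
            return new_eigenvalue, b_next
        eigenvalue, b = new_eigenvalue, b_next
    return eigenvalue, b

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam)   # approximately 3, the dominant eigenvalue of this matrix
print(v)     # approximately [0.707, 0.707], a unit vector along [1, 1]
```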

Definitions

Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector functions. More formally, in a vector space "L", a vector function "A" is defined if for each vector x of "L" there corresponds a unique vector y = "A"(x) of "L". For the sake of brevity, the parentheses around the vector on which the transformation is acting are often omitted. A vector function "A" is "linear" if it has the following two properties:
*"Additivity": "A"(x + y) = "A"x + "A"y
*"Homogeneity": "A"(αx) = α"A"xwhere x and y are any two vectors of the vector space "L" and α is any scalar. [See Harvnb|Beezer|2006|loc=Definition LT on p. 507; Harvnb|Strang|2006|loc=p. 117; Harvnb|Kuttler|2007|loc=Definition 5.3.1 on p. 71; Harvnb|Shilov|1977|loc=Section 4.21 on p. 77; Rowland, Todd and Weisstein, Eric W. [http://mathworld.wolfram.com/LinearTransformation.html Linear transformation] From MathWorld − A Wolfram Web Resource] Such a function is variously called a "linear transformation", "linear operator", or "linear endomorphism" on the space "L".

The key equation in this definition is the eigenvalue equation, "A"x = λx. Most vectors x will not satisfy such an equation. A typical vector x changes direction when acted on by "A", so that "A"x is not a multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special numbers λ are eigenvalues. Of course, if "A" is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors.

The requirement that the eigenvector be non-zero is imposed because the equation "A"0 = λ0 holds for every "A" and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. Each eigenvector is associated with a specific eigenvalue. One eigenvalue can be associated with several or even with infinitely many eigenvectors.

Geometrically (Fig. 2), the eigenvalue equation means that under the transformation "A" eigenvectors experience only changes in magnitude and sign: the direction of "A"x is the same as that of x. The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by "A". If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation "I" under which a vector x remains unchanged, "I"x = x, is defined as the identity transformation. If λ = −1, the vector flips to the opposite direction (rotates through 180°); this is defined as a reflection.

If x is an eigenvector of the linear transformation "A" with eigenvalue λ, then any scalar multiple αx is also an eigenvector of "A" with the same eigenvalue. Similarly, if more than one eigenvector shares the same eigenvalue λ, any linear combination of these eigenvectors will itself be an eigenvector with eigenvalue λ (for a proof of this lemma, see Shilov 1977, p. 109). Together with the zero vector, the eigenvectors of "A" with the same eigenvalue form a linear subspace of the vector space called an "eigenspace".

The eigenvectors corresponding to different eigenvalues are linearly independent (for a proof of this lemma, see Roman 2008, Theorem 8.2 on p. 186; Shilov 1977, p. 109; Hefferon 2001, p. 364; Beezer 2006, Theorem EDELI on p. 469), meaning, in particular, that in an "n"-dimensional space the linear transformation "A" cannot have more than "n" eigenvectors with different eigenvalues (see Shilov 1977, p. 109).

If a basis is defined in the vector space, all vectors can be expressed in terms of components. For finite-dimensional vector spaces with dimension "n", linear transformations can be represented with "n" × "n" square matrices. Conversely, every such square matrix corresponds to a linear transformation for a given basis. Thus, in a two-dimensional vector space "R"2 fitted with the standard basis, the eigenvector equation for a linear transformation "A" can be written in the following matrix representation:

: \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix},

where the juxtaposition of matrices means matrix multiplication.

Left and right eigenvectors

The word eigenvector formally refers to the right eigenvector x_R. It is defined by the above eigenvalue equation A x_R = \lambda_R x_R, and is the most commonly used eigenvector. However, the left eigenvector x_L exists as well, and is defined by x_L A = \lambda_L x_L.
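
For a real matrix, the left eigenvectors are simply the (right) eigenvectors of the transpose, since x_L A = λ x_L is equivalent to Aᵀ x_Lᵀ = λ x_Lᵀ. A minimal NumPy sketch (the matrix is an arbitrary example chosen for illustration):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Right eigenvectors: A x = lambda x
eigvals_right, right_vecs = np.linalg.eig(A)

# Left eigenvectors: x A = lambda x, i.e. right eigenvectors of A transpose
eigvals_left, left_vecs = np.linalg.eig(A.T)

print(eigvals_right)    # the same set of eigenvalues in both cases
print(eigvals_left)     # (possibly listed in a different order)
print(right_vecs[:, 0]) # a right eigenvector for the first eigenvalue
print(left_vecs[:, 0])  # a left eigenvector, stored here as a column
```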

Characteristic equation

When a transformation is represented by a square matrix "A", the eigenvalue equation can be expressed as

: A \mathbf{x} - \lambda I \mathbf{x} = \mathbf{0}.

This can be rearranged to

: (A - \lambda I) \mathbf{x} = \mathbf{0}.

If the inverse (A - \lambda I)^{-1} exists, then both sides can be multiplied by it to obtain only the trivial solution x = 0. Thus, for nonzero eigenvectors to exist, the matrix A - \lambda I must have no inverse, which by a standard result of linear algebra means that its determinant must be zero:

: \det(A - \lambda I) = 0.

The determinant requirement is called the "characteristic equation" (less often, secular equation) of "A", and the left-hand side is called the "characteristic polynomial". When expanded, this gives a polynomial equation for lambda. The eigenvector x or its components are not present in the characteristic equation.
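
The coefficients of the characteristic polynomial and its roots can be obtained numerically; a small sketch with NumPy follows (the 2×2 matrix is the one used in the example of the next section). Note that computing eigenvalues by finding polynomial roots is numerically poor for larger matrices, where dedicated eigenvalue routines are preferred.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)      # coefficients of det(lambda*I - A): [1, -4, 3]
eigenvalues = np.roots(coeffs)

print(coeffs)            # lambda^2 - 4*lambda + 3
print(eigenvalues)       # [3., 1.]
```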

Example

The matrix

: egin{bmatrix} 2 & 1\1 & 2 end{bmatrix}

defines a linear transformation of the real plane. The eigenvalues of this transformation are given by the characteristic equation

: \det\begin{bmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = (2-\lambda)^2 - 1 = 0.

The roots of this equation (i.e. the values of \lambda for which the equation holds) are \lambda = 1 and \lambda = 3. Having found the eigenvalues, it is possible to find the eigenvectors. Considering first the eigenvalue \lambda = 3, we have

:egin{bmatrix} 2 & 1\1 & 2 end{bmatrix}egin{bmatrix}x\yend{bmatrix} = 3 egin{bmatrix}x\yend{bmatrix}.

Both rows of this matrix equation reduce to the single linear equation x=y. To find an eigenvector, we are free to choose any value for x, so by picking x=1 and setting y=x, we find the eigenvector to be

: \begin{bmatrix} 1 \\ 1 \end{bmatrix}.

We can check this is an eigenvector by checking that

: \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix}.

For the eigenvalue \lambda = 1, a similar process leads to the equation x = -y, and hence the eigenvector is given by

: \begin{bmatrix} 1 \\ -1 \end{bmatrix}.
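
The hand computation above can also be checked numerically; a brief NumPy sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)     # [3., 1.]
print(eigenvectors)    # columns proportional to [1, 1] and [1, -1]

# Verify A v = lambda v for each eigenpair
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True, True
```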

The complexity of finding the roots/eigenvalues of the characteristic polynomial increases rapidly with the degree of the polynomial (the dimension of the vector space). There are exact algebraic solutions for dimensions below 5, but for higher dimensions there are generally no exact solutions and one has to resort to numerical methods to find them approximately. For large symmetric sparse matrices, the Lanczos algorithm is used to compute eigenvalues and eigenvectors.
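
Numerical libraries expose Lanczos-type iterations for this situation; for instance, SciPy's `eigsh` routine (built on the ARPACK library, which also appears in the external links below) computes a few extreme eigenvalues of a large sparse symmetric matrix without ever forming the characteristic polynomial. A minimal sketch with an arbitrary random sparse matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Build a random sparse symmetric matrix
n = 1000
M = sp.random(n, n, density=0.01, format='csr', random_state=42)
A = 0.5 * (M + M.T)                     # symmetrize

# Six eigenvalues of largest magnitude, via a Lanczos-type iteration
vals, vecs = eigsh(A, k=6, which='LM')
print(vals)
```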

Existence and multiplicity of eigenvalues

For transformations on real vector spaces, the coefficients of the characteristic polynomial are all real. However, the roots are not necessarily real; they may well be complex numbers, or a mixture of real and complex numbers. For example, a matrix representing a planar rotation of 45 degrees will not leave any non-zero vector pointing in the same direction. Over a complex vector space, the fundamental theorem of algebra guarantees that the characteristic polynomial has at least one root, and thus the linear transformation has at least one eigenvalue.

As well as distinct roots, the characteristic equation may also have repeated roots. However, having repeated roots does not imply there are multiple distinct (i.e. linearly independent) eigenvectors with that eigenvalue. The "algebraic multiplicity" of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace, i.e. number of linearly independent eigenvectors with that eigenvalue.

Over a complex space, the sum of the algebraic multiplicities will equal the dimension of the vector space, but the sum of the geometric multiplicities may be smaller. In that case, there may not be enough eigenvectors to span the entire space. This is intimately related to the question of whether a given matrix may be diagonalized by a suitable choice of coordinates.

Shear

Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line [definition according to Weisstein, Eric W., [http://mathworld.wolfram.com/Shear.html Shear], From MathWorld, A Wolfram Web Resource]. Shearing a plane figure does not change its area. Shear can be horizontal (along the "X" axis) or vertical (along the "Y" axis). In horizontal shear (see figure), a point "P" of the plane moves parallel to the "X" axis to the place "P′" so that its coordinate "y" does not change while the "x" coordinate increments to become "x′" = "x" + "k" "y", where "k" is called the shear factor.

The matrix of a horizontal shear transformation is \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}. The characteristic equation is λ^2 − 2λ + 1 = (1 − λ)^2 = 0, which has a single, repeated root λ = 1. Therefore, the eigenvalue λ = 1 has algebraic multiplicity 2. The eigenvector(s) are found as solutions of

: \begin{bmatrix} 1-1 & k \\ 0 & 1-1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 & k \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \text{i.e. } ky = 0.

The last equation is equivalent to "y" = 0, which is a straight line along the "x" axis. This line represents the one-dimensional eigenspace. In the case of shear the algebraic multiplicity of the eigenvalue (2) is greater than its geometric multiplicity (1, the dimension of the eigenspace). The eigenvector is a vector along the "x" axis. The case of vertical shear with transformation matrix \begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix} is dealt with in a similar way; the eigenvector in vertical shear is along the "y" axis. Repeatedly applying the shear transformation turns the direction of any vector in the plane closer and closer to the direction of the eigenvector.
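
The gap between algebraic and geometric multiplicity for the shear matrix can be observed numerically; the following NumPy sketch (with an arbitrary shear factor k = 0.5) also shows repeated shears driving a vector toward the "x" axis:

```python
import numpy as np

k = 0.5
S = np.array([[1.0, k],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(S)
print(eigenvalues)                             # [1., 1.]: algebraic multiplicity 2
print(np.linalg.matrix_rank(S - np.eye(2)))    # 1, so the eigenspace is 1-dimensional

v = np.array([1.0, 1.0])
for _ in range(20):
    v = S @ v
print(v / np.linalg.norm(v))                   # direction approaches [1, 0], the x axis
```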

Uniform scaling and reflection

As a one-dimensional vector space, consider a rubber string tied to an unmoving support at one end, such as that on a child's sling. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ, which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and although elongated, it preserves its original direction. For a two-dimensional vector space, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at the fixed point on the balloon surface (the origin) are stretched equally with the same scaling factor λ. This transformation in two dimensions is described by the 2×2 square matrix:

: A \mathbf{x} = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \lambda \cdot x + 0 \cdot y \\ 0 \cdot x + \lambda \cdot y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \mathbf{x}.

Expressed in words, the transformation is equivalent to multiplying the length of "any" vector by λ while preserving its original direction. Since the vector taken was arbitrary, every non-zero vector in the vector space is an eigenvector. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if 0 < λ < 1, it is shrinking. Negative values of λ correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of λ.

Unequal scaling

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other. In this case, there are two different scaling factors: "k"1 for the scaling in direction "x", and "k"2 for the scaling in direction "y". The transformation matrix is \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix}, and the characteristic equation is (k_1-\lambda)(k_2-\lambda) = 0. The eigenvalues, obtained as roots of this equation, are λ1 = "k"1 and λ2 = "k"2, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging "k"1 back in the eigenvalue equation gives one of the eigenvectors:

: egin{bmatrix}0 & 0\0 & k_2 - k_1end{bmatrix} egin{bmatrix} x \ yend{bmatrix} = egin{bmatrix}0\0end{bmatrix} or, more simply, y=0.Thus, the eigenspace is the "x"-axis. Similarly, substituting lambda=k_2 shows that the corresponding eigenspace is the "y"-axis. In this case, both eigenvalues have algebraic and geometric multiplicities equal to 1. If a given eigenvalue is greater than 1, the vectors are stretched in the direction of the corresponding eigenvector; if less than 1, they are shrunken in that direction. Negative eigenvalues correspond to reflections followed by a stretch or shrink. In general, matrices that are diagonalizable over the real numbers represent scalings and reflections: the eigenvalues represent the scaling factors (and appear as the diagonal terms), and the eigenvectors are the directions of the scalings.

The figure shows the case where k_1 > 1 and 1 > k_2 > 0. The rubber sheet is stretched along the "x" axis and simultaneously shrunk along the "y" axis. After this stretching/shrinking transformation is applied many times, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the "x" axis (the direction of stretching). The exceptions are vectors along the "y"-axis, which gradually shrink away to nothing.
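
This pull toward the direction of the largest scaling factor is easy to watch numerically; a short sketch with the arbitrary values k1 = 2 and k2 = 0.5 (satisfying k1 > 1 > k2 > 0):

```python
import numpy as np

A = np.diag([2.0, 0.5])              # k1 = 2 (stretch along x), k2 = 0.5 (shrink along y)
v = np.array([1.0, 5.0])             # a vector initially far from the x axis

for i in range(10):
    v = A @ v
    print(i, v / np.linalg.norm(v))  # the direction tends toward [1, 0]
```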

Rotation

A rotation in a plane is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. Clearly, for rotations other than through 0° and 180°, every vector in the real plane will have its direction changed, and thus there cannot be any eigenvectors. But this is not necessarily true if we consider the same matrix over a complex vector space.

A counterclockwise rotation in the horizontal plane about the origin at an angle φ is represented by the matrix

: \mathbf{R} = \begin{bmatrix} \cos \varphi & -\sin \varphi \\ \sin \varphi & \cos \varphi \end{bmatrix}.

The characteristic equation of R is λ^2 − 2λ cos φ + 1 = 0. This quadratic equation has discriminant "D" = 4 (cos^2 φ − 1) = −4 sin^2 φ, which is a negative number whenever φ is not a multiple of 180°. A rotation of 0°, 360°, … is just the identity transformation (a uniform scaling by +1), while a rotation of 180°, 540°, …, is a reflection (uniform scaling by −1). Otherwise, as expected, there are no real eigenvalues or eigenvectors for rotation in the plane.

Rotation matrices on complex vector spaces

The characteristic equation has two complex roots λ1 and λ2. If we choose to think of the rotation matrix as a linear operator on the complex two-dimensional vector space, we can consider these complex eigenvalues. The roots are complex conjugates of each other: λ1,2 = cos φ ± "i" sin φ = e^{±iφ}, each with an algebraic multiplicity equal to 1, where "i" is the imaginary unit.

The first eigenvector is found by substituting the first eigenvalue, λ1, back in the eigenvalue equation:

: \begin{bmatrix} \cos \varphi - \lambda_1 & -\sin \varphi \\ \sin \varphi & \cos \varphi - \lambda_1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -i \sin \varphi & -\sin \varphi \\ \sin \varphi & -i \sin \varphi \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

The last equation is equivalent to the single equation x=iy, and again we are free to set x=1 to give the eigenvector

: \begin{bmatrix} 1 \\ -i \end{bmatrix}.

Similarly, substituting in the second eigenvalue gives the single equation x=-iy and so the eigenvector is given by

: \begin{bmatrix} 1 \\ i \end{bmatrix}.

Although not diagonalizable over the reals, the rotation matrix is diagonalizable over the complex numbers, and again the eigenvalues appear on the diagonal. Thus rotation matrices acting on complex spaces can be thought of as scaling matrices, with complex scaling factors.
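
NumPy works over the complex numbers by default, so the complex eigenvalues e^{±iφ} and eigenvectors proportional to [1, −i] and [1, i] can be checked directly; a small sketch for the arbitrary angle φ = 30°:

```python
import numpy as np

phi = np.deg2rad(30)
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

eigenvalues, eigenvectors = np.linalg.eig(R)
print(eigenvalues)                               # cos(phi) +/- i*sin(phi), i.e. e^{+/- i phi}
print(np.abs(eigenvalues))                       # both eigenvalues have modulus 1
print(eigenvectors[:, 0] / eigenvectors[0, 0])   # proportional to [1, -i] or [1, i]
```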

Infinite-dimensional spaces and spectral theory

If the vector space is an infinite dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which ("T" − λ)−1 is not defined; that is, such that "T" − λ has no bounded inverse.

Clearly if λ is an eigenvalue of "T", λ is in the spectrum of "T". In general, the converse is not true. There are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space ℓ2("Z") (that is, the space of all sequences of scalars … "a"−1, "a"0, "a"1, "a"2, … such that

: \cdots + |a_{-1}|^2 + |a_0|^2 + |a_1|^2 + |a_2|^2 + \cdots

converges) has no eigenvalue but does have spectral values.

In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty. This is also true for an unbounded self-adjoint operator. Via its spectral measures, the spectrum of any self-adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.)

The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues, which can be computed by the Rydberg formula), while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).

Eigenfunctions

A common example of such maps on infinite-dimensional spaces is the action of differential operators on function spaces. As an example, on the space of infinitely differentiable functions, the process of differentiation defines a linear operator, since

: \frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt},

where "f"("t") and "g"("t") are differentiable functions, and "a" and "b" are constants).

The eigenvalue equation for linear differential operators is then a set of one or more differential equations. The eigenvectors are commonly called eigenfunctions. The simplest case is the eigenvalue equation for differentiation of a real-valued function of a single real variable. In this case, the eigenvalue equation becomes the linear differential equation

: \frac{d}{dx} f(x) = \lambda f(x).

Here "λ" is the eigenvalue associated with the function, "f(x)". This eigenvalue equation has a solution for all values of "λ". If "λ" is zero, the solution is

: f(x) = A,

where "A" is any constant; if "λ" is non-zero, the solution is the exponential function

: f(x) = Ae^{\lambda x}.

If we expand our horizons to complex-valued functions, the value of "λ" can be any complex number. The spectrum of "d"/"dx" is therefore the whole complex plane. This is an example of a continuous spectrum.
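
A quick numerical check that f(x) = Ae^{λx} is an eigenfunction of d/dx: approximate the derivative by finite differences and compare it with λf. The values of A, λ, and the grid are arbitrary choices made for this sketch.

```python
import numpy as np

A, lam = 2.0, 0.7
x = np.linspace(0.0, 1.0, 1001)
f = A * np.exp(lam * x)

df_dx = np.gradient(f, x)                             # numerical derivative of f
print(np.max(np.abs(df_dx[1:-1] - lam * f[1:-1])))    # close to zero: df/dx ~ lambda * f
```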

Waves on a string

The displacement, h(x,t), of a stressed rope fixed at both ends, like the vibrating strings of a string instrument, satisfies the wave equation

: \frac{\partial^2 h}{\partial t^2} = c^2 \frac{\partial^2 h}{\partial x^2},

which is a linear partial differential equation, where "c" is the constant wave speed. The normal method of solving such an equation is separation of variables. If we assume that "h" can be written as a product of the form "X"("x")"T"("t"), we can form a pair of ordinary differential equations:

:X"=-frac{omega^2}{c^2}X and T"=-omega^2 T.

Each of these is an eigenvalue equation (the unfamiliar form of the eigenvalue is chosen merely for convenience). For any values of the eigenvalues, the eigenfunctions are given by

: X = \sin\left(\frac{\omega x}{c} + \phi\right) \quad\text{and}\quad T = \sin(\omega t + \psi).

If we impose boundary conditions (for example, that the ends of the string are fixed with "X"("x") = 0 at "x" = 0 and "x" = "L"), we can constrain the eigenvalues. For those boundary conditions, we find

: \sin(\phi) = 0, and so the phase angle \phi = 0

and

: \sin\left(\frac{\omega L}{c}\right) = 0.

Thus, the constant \omega is constrained to take one of the values \omega_n = \frac{n c \pi}{L}, where "n" is any integer. The clamped string therefore supports a family of standing waves of the form

: h(x,t) = \sin(n\pi x/L)\,\sin(\omega_n t).

From the point of view of our musical instrument, the frequency \omega_n is the frequency of the "n"-th harmonic overtone.
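
The clamped-string eigenvalue problem can also be solved approximately by discretizing X'' = −(ω/c)²X on a grid with X = 0 at both ends; the eigenvalues of the resulting finite-difference matrix approximate (ω_n/c)² = (nπ/L)². A minimal sketch, with the grid size, L, and c chosen arbitrarily:

```python
import numpy as np

L, c, N = 1.0, 1.0, 200            # string length, wave speed, interior grid points
dx = L / (N + 1)

# Second-derivative matrix with X(0) = X(L) = 0 (Dirichlet boundary conditions)
D2 = (np.diag(-2.0 * np.ones(N)) +
      np.diag(np.ones(N - 1), 1) +
      np.diag(np.ones(N - 1), -1)) / dx**2

# X'' = -(omega/c)^2 X, so the eigenvalues of -D2 approximate (omega_n / c)^2
mu = np.sort(np.linalg.eigvalsh(-D2))
omega_numeric = c * np.sqrt(mu[:4])
omega_exact = np.array([n * c * np.pi / L for n in range(1, 5)])
print(omega_numeric)   # close to the exact values n*pi for n = 1..4
print(omega_exact)
```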

Eigendecomposition

The spectral theorem for matrices can be stated as follows. Let A be a square "n" × "n" matrix. Let q1 ... q"k" be an eigenvector basis, i.e. an indexed set of "k" linearly independent eigenvectors, where "k" is the dimension of the space spanned by the eigenvectors of A. If "k" = "n", then A can be written

: \mathbf{A} = \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{-1}

where Q is the square "n" × "n" matrix whose "i"-th column is the basis eigenvector q"i" of A and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. Λ"ii" = λ"i".
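
A short NumPy sketch of this factorization, using an arbitrary diagonalizable matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)      # columns of Q are the eigenvectors
Lam = np.diag(eigenvalues)             # Lambda: eigenvalues on the diagonal

A_reconstructed = Q @ Lam @ np.linalg.inv(Q)
print(np.allclose(A, A_reconstructed))   # True: A = Q Lambda Q^{-1}
```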

Applications

Schrödinger equation

An example of an eigenvalue equation where the transformation "T" is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

: H \psi_E = E \psi_E,

where "H", the Hamiltonian, is a second-order differential operator and psi_E, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue "E", interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for \psi_E within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which \psi_E and "H" can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form. (Fig. 8 presents the lowest eigenfunctions of the hydrogen atom Hamiltonian.)

The Dirac notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |\Psi_E\rangle. In this notation, the Schrödinger equation is:

: H|\Psi_E\rangle = E|\Psi_E\rangle

where |\Psi_E\rangle is an eigenstate of "H". "H" is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices ("see" Observable). As in the matrix case, in the equation above H|\Psi_E\rangle is understood to be the vector obtained by application of the transformation "H" to |\Psi_E\rangle.
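
As a toy illustration of this matrix representation (not taken from the cited sources): discretize the one-dimensional Hamiltonian H = −½ d²/dx² + V(x) for a particle in a box (V = 0 inside, hard walls at the ends, units with ħ = m = 1) and diagonalize the resulting matrix; the lowest eigenvalues approximate the exact energies E_n = n²π²/(2L²).

```python
import numpy as np

L, N = 1.0, 400                      # box length, number of interior grid points
dx = L / (N + 1)

# Finite-difference kinetic energy -1/2 d^2/dx^2 with psi = 0 at the walls
T = -0.5 * (np.diag(-2.0 * np.ones(N)) +
            np.diag(np.ones(N - 1), 1) +
            np.diag(np.ones(N - 1), -1)) / dx**2
H = T                                # V = 0 inside the box

E, psi = np.linalg.eigh(H)           # eigenvalues E_n, eigenfunctions as columns
exact = np.array([n**2 * np.pi**2 / (2 * L**2) for n in range(1, 5)])
print(E[:4])                         # close to the exact values below
print(exact)
```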

Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree-Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree-Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.

Geology and glaciology

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram [Graham, D., and Midgley, N., 2000, Earth Surface Processes and Landforms (25), pp. 1473-1477; Sneed, E. D., and Folk, R. L., 1958, "Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis", Journal of Geology 66(2): 114-150], or as a Stereonet on a Wulff Net [GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system, http://dx.doi.org/10.1016/S0098-3004(97)00122-2]. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 [http://www.ruhr-uni-bochum.de/hardrock/downloads.htm] are in the order "E"1 ≥ "E"2 ≥ "E"3, with "E"1 being the primary orientation of clast orientation/dip, "E"2 being the secondary and "E"3 being the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of "E"1, "E"2, and "E"3 are dictated by the nature of the sediment's fabric. If "E"1 = "E"2 = "E"3, the fabric is said to be isotropic. If "E"1 = "E"2 > "E"3, the fabric is planar. If "E"1 > "E"2 > "E"3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004 [Benn, D., and Evans, D., 2004, A Practical Guide to the Study of Glacial Sediments, London: Arnold, pp. 103-107].

Factor analysis

In factor analysis, the eigenvectors of a covariance matrix or correlation matrix correspond to factors, and the eigenvalues to the variance explained by these factors. Factor analysis is a statistical technique used in the social sciences and in marketing, product management, operations research, and other applied sciences that deal with large quantities of data. The objective is to explain most of the covariability among a number of observable random variables in terms of a smaller number of unobservable latent variables called factors. The observable random variables are modeled as linear combinations of the factors, plus unique variance terms. Eigenvalues are used in the analysis performed by Q-methodology software; factors with eigenvalues greater than 1.00 are considered significant, explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered too weak, not explaining a significant portion of the data variability.

Vibration analysis

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies of vibration, and the eigenvectors determine the shapes of these vibrational modes. The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis.
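
In matrix form the free-vibration problem is the generalized eigenvalue problem K x = ω² M x, with K the stiffness matrix and M the mass matrix, which SciPy's symmetric solver handles directly. A toy sketch for a two-mass, three-spring chain with arbitrary unit masses and stiffnesses:

```python
import numpy as np
from scipy.linalg import eigh

# Two equal masses connected by three springs, with both outer ends fixed
m, k = 1.0, 1.0
M = np.diag([m, m])
K = np.array([[2 * k, -k],
              [-k, 2 * k]])

omega_sq, modes = eigh(K, M)        # solves the generalized problem K x = omega^2 M x
print(np.sqrt(omega_sq))            # natural frequencies: sqrt(k/m) and sqrt(3k/m)
print(modes)                        # mode shapes: in-phase and out-of-phase motion
```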

Eigenfaces

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel [Xirouhakis, A.; Votsis, G.; Delopoulus, A. (2004), Estimation of 3D motion and structure of human faces, National Technical University of Athens, http://www.image.ece.ntua.gr/papers/43.pdf]. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated to a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made. More on determining sign language letters using eigen systems can be found here: http://www.geigel.com/signlanguage/index.php
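
A compact sketch of the eigenfaces idea with synthetic data (random arrays stand in for real face images, since no dataset is specified here): stack the flattened images as rows, subtract the mean face, and take the leading eigenvectors of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, h, w = 100, 32, 32
faces = rng.random((n_images, h * w))          # placeholder for flattened face images

mean_face = faces.mean(axis=0)
centered = faces - mean_face

cov = np.cov(centered, rowvar=False)           # (h*w) x (h*w) covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Eigenfaces = eigenvectors with the largest eigenvalues
order = np.argsort(eigenvalues)[::-1]
eigenfaces = eigenvectors[:, order[:10]]       # keep the top 10 components

# Compress a face: represent it by its 10 coordinates in the eigenface basis
weights = (faces[0] - mean_face) @ eigenfaces
approximation = mean_face + eigenfaces @ weights
print(weights.shape, approximation.shape)      # (10,), (1024,)
```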

A similar concept, that of eigenvoices, has also been developed; an eigenvoice represents the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.

Tensor of inertia

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
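
A short sketch of finding the principal stresses and principal directions of an arbitrary symmetric 3×3 stress tensor (the numbers are invented for illustration):

```python
import numpy as np

# An arbitrary symmetric (Cauchy) stress tensor, e.g. in MPa
sigma = np.array([[ 50.0,  30.0,  20.0],
                  [ 30.0, -20.0, -10.0],
                  [ 20.0, -10.0,  10.0]])

principal_stresses, principal_directions = np.linalg.eigh(sigma)
print(principal_stresses)       # eigenvalues: the principal stresses (no shear terms)
print(principal_directions)     # columns: the orthogonal principal directions
```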

Eigenvalues of a graph

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix "A", or (increasingly) of the graph's Laplacian matrix, which is either "T" − "A" or "I" − "T"^{−1/2}"A""T"^{−1/2}, where "T" is a diagonal matrix holding the degree of each vertex, and in "T"^{−1/2}, 0 is substituted for 0^{−1/2}. The "k"th principal eigenvector of a graph is defined as either the eigenvector corresponding to the "k"th largest eigenvalue of "A", or the eigenvector corresponding to the "k"th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second principal eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
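
A toy sketch of the PageRank idea on a tiny made-up link graph (the damping factor 0.85 is the commonly quoted value; the graph itself is invented for illustration): build the row-normalized adjacency matrix, mix in the damping term so a stationary distribution exists, and find the principal eigenvector by power iteration.

```python
import numpy as np

# Adjacency matrix of a small directed web graph: A[i, j] = 1 if page i links to page j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Row-normalize to get a Markov transition matrix, then add damping
P = A / A.sum(axis=1, keepdims=True)
d = 0.85
n = A.shape[0]
G = d * P + (1 - d) / n * np.ones((n, n))      # the "Google matrix"

# Power iteration: the stationary distribution satisfies r = G^T r
r = np.ones(n) / n
for _ in range(100):
    r = G.T @ r
    r /= r.sum()
print(r)   # PageRank scores; the most linked-to page ranks highest
```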

See also

* Nonlinear eigenproblem
* Quadratic eigenvalue problem
* Eigenspectrum

Notes

References

* Korn, Granino A.; Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 2nd revised edition, Dover Publications, 1152 pp. ISBN 0-486-41147-8.
* Lipschutz, Seymour (1991), Schaum's Outline of Theory and Problems of Linear Algebra, 2nd edition, Schaum's Outline Series, McGraw-Hill, New York, NY. ISBN 0-07-038007-4.
* Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (1989), Linear Algebra, 2nd edition, Prentice Hall, Englewood Cliffs, NJ. ISBN 0-13-537102-3.
* Aldrich, John (2006), "Eigenvalue, eigenfunction, eigenvector, and related terms", in Jeff Miller (ed.), Earliest Known Uses of Some of the Words of Mathematics, http://members.aol.com/jeff570/e.html, accessed 2006-08-22.
* Strang, Gilbert (1993), Introduction to Linear Algebra, Wellesley-Cambridge Press, Wellesley, MA. ISBN 0-961-40885-5.
* Strang, Gilbert (2006), Linear Algebra and Its Applications, Thomson, Brooks/Cole, Belmont, CA. ISBN 0-030-10567-6.
* Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and Multilinear Algebra, Plenum Press, New York, NY. ISBN 0-306-37508-7.
* Cohen-Tannoudji, Claude (1977), "Chapter II. The mathematical tools of quantum mechanics", Quantum Mechanics, John Wiley & Sons. ISBN 0-471-16432-1.
* Fraleigh, John B.; Beauregard, Raymond A. (1995), Linear Algebra, 3rd edition, Addison-Wesley. ISBN 0-201-83999-7 (international edition).
* Golub, Gene H.; van Loan, Charles F. (1996), Matrix Computations, 3rd edition, Johns Hopkins University Press, Baltimore, MD. ISBN 978-0-8018-5414-9.
* Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica 2: 1-29.
* Horn, Roger A.; Johnson, Charles F. (1985), Matrix Analysis, Cambridge University Press. ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback).
* Kline, Morris (1972), Mathematical Thought from Ancient to Modern Times, Oxford University Press. ISBN 0-195-01496-0.
* Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia. ISBN 978-0-89871-454-8.
* Brown, Maureen (October 2004), Illuminating Patterns of Perception: An Overview of Q Methodology.
* Golub, Gene F.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", Journal of Computational and Applied Mathematics 123: 35-65.
* Akivis, Max A.; Goldberg, Vladislav V. (1969), Tensor Calculus (in Russian), Science Publishers, Moscow.
* Gelfand, I. M. (1971), Lecture Notes in Linear Algebra (in Russian), Science Publishers, Moscow.
* Alexandrov, Pavel S. (1968), Lecture Notes in Analytical Geometry (in Russian), Science Publishers, Moscow.
* Carter, Tamara A.; Tapia, Richard A.; Papaconstantinou, Anne, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, online edition, http://ceee.rice.edu/Books/LA/index.html, accessed 2008-02-19.
* Roman, Steven (2008), Advanced Linear Algebra, 3rd edition, Springer Science + Business Media, New York, NY. ISBN 978-0-387-72828-5.
* Shilov, Georgi E. (1977), Linear Algebra (translated and edited by Richard A. Silverman), Dover Publications, New York. ISBN 0-486-63518-X.
* Hefferon, Jim (2001), Linear Algebra, online book, St Michael's College, Colchester, Vermont, http://joshua.smcvt.edu/linearalgebra/.
* Kuttler, Kenneth (2007), An Introduction to Linear Algebra, online e-book in PDF format, Brigham Young University, http://www.math.byu.edu/~klkuttle/Linearalgebra.pdf.
* Demmel, James W. (1997), Applied Numerical Linear Algebra, SIAM. ISBN 0-89871-389-7.
* Beezer, Robert A. (2006), A First Course in Linear Algebra, free online book under GNU licence, University of Puget Sound, http://linear.ups.edu/.
* Lancaster, P. (1973), Matrix Theory (in Russian), Science Publishers, Moscow.
* Halmos, Paul R. (1987), Finite-Dimensional Vector Spaces, 8th edition, Springer-Verlag, New York, NY. ISBN 0-387-90093-4.
* Pigolkina, T. S. and Shulman, V. S., "Eigenvalue" (in Russian), in Vinogradov, I. M. (ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
* Pigolkina, T. S. and Shulman, V. S., "Eigenvector" (in Russian), in Vinogradov, I. M. (ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
* Greub, Werner H. (1975), Linear Algebra, 4th edition, Springer-Verlag, New York, NY. ISBN 0-387-90110-8.
* Larson, Ron; Edwards, Bruce H. (2003), Elementary Linear Algebra, 5th edition, Houghton Mifflin Company. ISBN 0-618-33567-6.
* Curtis, Charles W. (1999), Linear Algebra: An Introductory Approach, 4th edition, corrected 7th printing, Springer, 347 pp. ISBN 0-387-90992-3.
* Shores, Thomas S. (2007), Applied Linear Algebra and Matrix Analysis, Springer Science+Business Media, LLC. ISBN 0-387-33194-8.
* Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, online e-book in various formats on arxiv.org, Bashkir State University, Ufa, http://www.geocities.com/r-sharipov, arXiv:math/0405323v1. ISBN 5-7477-0099-5.
* Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2005), Indefinite Linear Algebra and Applications, Birkhäuser Verlag, Basel-Boston-Berlin. ISBN 3-7643-7349-0.

External links

* [http://web.mit.edu/18.06/www/Demos/eigen-applet-all/eigen_sound_all.html Eigen Vector Examination (Demo) with Sound]
* [http://video.google.com/videoplay?docid=-8791056722738431468&hl=en MIT Video Lecture on Eigenvalues and Eigenvectors], from MIT OpenCourseWare
* [http://www.caam.rice.edu/software/ARPACK/ ARPACK] is a collection of FORTRAN subroutines for solving large scale (sparse) eigenproblems.
* [http://www.math.uri.edu/~jbaglama/ IRBLEIGS] has MATLAB code with similar capabilities to ARPACK. (See [http://www.math.uri.edu/~jbaglama/papers/paper10.pdf this paper] for a comparison between IRBLEIGS and ARPACK.)
* [http://netlib.org/lapack/ LAPACK] is a collection of FORTRAN subroutines for solving dense linear algebra problems
* [http://www.alglib.net/eigen/ ALGLIB] includes a partial port of the LAPACK to C++, C#, Delphi, etc.
* [http://mathworld.wolfram.com/Eigenvector.html MathWorld: Eigenvector]
* [http://www.arndt-bruenner.de/mathe/scripts/engl_eigenwert.htm Online calculator for Eigenvalues and Eigenvectors]
* [http://www.bluebit.gr/matrix-calculator/ Online Matrix Calculator] Calculates eigenvalues, eigenvectors and other decompositions of matrices online
* [http://www.vrand.com Vanderplaats Research and Development] - Provides the [http://www.vrand.com SMS] eigenvalue solver for structural finite element analysis. The solver is included in the [http://www.vrand.com/Genesis.html "GENESIS"] program as well as other commercial programs. SMS can easily be used with MSC.Nastran or NX/Nastran via DMAPs.
* [http://www.physlink.com/education/AskExperts/ae520.cfm What are Eigen Values?] from PhysLink.com's "Ask the Experts"
* [http://www.cs.utk.edu/~dongarra/etemplates/index.html Templates for the Solution of Algebraic Eigenvalue Problems] Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst (a guide to the numerical solution of eigenvalue problems)

