Eigenvalue perturbation

Eigenvalue perturbation is a perturbation approach to finding the eigenvalues and eigenvectors of a system that has been perturbed from one whose eigenvalues and eigenvectors are known. It also allows one to determine the sensitivity of the eigenvalues and eigenvectors with respect to changes in the system.

Example

Suppose we have solutions to the generalized eigenvalue problem

[K_0] \mathbf{x}_{0i} = \lambda_{0i} [M_0] \mathbf{x}_{0i}. \qquad (1)

That is, we know \lambda_{0i} and \mathbf{x}_{0i} for i = 1, \dots, N. Now suppose we want to change the matrices by a small amount. That is, we want to let

[K] = [K_0] + [\delta K]

and

[M] = [M_0] + [\delta M],

where all of the \delta terms are much smaller than the corresponding unperturbed terms. We expect answers of the form

\lambda_i = \lambda_{0i} + \delta\lambda_i

and

\mathbf{x}_i = \mathbf{x}_{0i} + \delta\mathbf{x}_i.
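For concreteness, the following is a minimal numerical sketch of this setup; the 2×2 matrices and perturbation sizes are hypothetical, chosen only for illustration, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical unperturbed stiffness and mass matrices,
# symmetric and positive definite.
K0 = np.array([[2.0, -1.0],
               [-1.0, 2.0]])
M0 = np.array([[1.0, 0.0],
               [0.0, 1.0]])

# Small symmetric perturbations [delta K] and [delta M].
dK = 1e-3 * np.array([[1.0, 0.5],
                      [0.5, -1.0]])
dM = 1e-3 * np.array([[0.5, 0.2],
                      [0.2, 0.5]])

# eigh solves the generalized problem K0 x = lambda M0 x; the returned
# eigenvectors satisfy X0.T @ M0 @ X0 = I, i.e. equation (2) below.
lam0, X0 = eigh(K0, M0)

# Exact eigenvalues of the perturbed system, kept for later comparison.
lam_exact, _ = eigh(K0 + dK, M0 + dM)
```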

Steps

We assume that the matrices are symmetric and positive definite, and that the eigenvectors have been scaled so that

\mathbf{x}_{0j}^T [M_0] \mathbf{x}_{0i} = \delta_i^j, \qquad (2)

where \delta_i^j is the Kronecker delta.
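Together, (1) and (2) imply that the unperturbed eigenvectors also diagonalize [K_0], a fact used silently in several steps below:

```latex
% Left-multiplying (1) by x_{0j}^T and applying (2):
\mathbf{x}_{0j}^T [K_0] \mathbf{x}_{0i}
  = \lambda_{0i}\, \mathbf{x}_{0j}^T [M_0] \mathbf{x}_{0i}
  = \lambda_{0i}\, \delta_i^j .
```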

Now we want to solve the equation

[K] \mathbf{x}_i = \lambda_i [M] \mathbf{x}_i.

Substituting, we get

([K_0] + [\delta K])(\mathbf{x}_{0i} + \delta\mathbf{x}_i) = (\lambda_{0i} + \delta\lambda_i)([M_0] + [\delta M])(\mathbf{x}_{0i} + \delta\mathbf{x}_i),

which expands to

[K_0]\mathbf{x}_{0i} + [\delta K]\mathbf{x}_{0i} + [K_0]\delta\mathbf{x}_i + [\delta K]\delta\mathbf{x}_i
= \lambda_{0i}[M_0]\mathbf{x}_{0i} + \lambda_{0i}[M_0]\delta\mathbf{x}_i + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i}
\quad {} + \lambda_{0i}[\delta M]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\delta\mathbf{x}_i.

Canceling the terms of (1) leaves

[\delta K]\mathbf{x}_{0i} + [K_0]\delta\mathbf{x}_i + [\delta K]\delta\mathbf{x}_i
= \lambda_{0i}[M_0]\delta\mathbf{x}_i + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i}
\quad {} + \lambda_{0i}[\delta M]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\delta\mathbf{x}_i.

Removing the higher-order terms (those containing products of two or more \delta quantities), this simplifies to

[K_0]\delta\mathbf{x}_i + [\delta K]\mathbf{x}_{0i} = \lambda_{0i}[M_0]\delta\mathbf{x}_i + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i}. \qquad (3)

We note that, when the matrices are symmetric, the unperturbed eigenvectors are [M_0]-orthogonal, and so we use them as a basis for the perturbed eigenvectors. That is, we want to construct

\delta\mathbf{x}_i = \sum_{j=1}^N \epsilon_{ij}\,\mathbf{x}_{0j}, \qquad (4)

where the \epsilon_{ij} are small constants that are to be determined. Substituting (4) into (3) and rearranging gives

[K_0] \sum_{j=1}^N \epsilon_{ij}\mathbf{x}_{0j} + [\delta K]\mathbf{x}_{0i} = \lambda_{0i}[M_0] \sum_{j=1}^N \epsilon_{ij}\mathbf{x}_{0j} + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i}. \qquad (5)

Or

\sum_{j=1}^N \epsilon_{ij}[K_0]\mathbf{x}_{0j} + [\delta K]\mathbf{x}_{0i} = \lambda_{0i}[M_0]\sum_{j=1}^N \epsilon_{ij}\mathbf{x}_{0j} + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i}.

By equation (1),

\sum_{j=1}^N \epsilon_{ij}\lambda_{0j}[M_0]\mathbf{x}_{0j} + [\delta K]\mathbf{x}_{0i} = \lambda_{0i}[M_0]\sum_{j=1}^N \epsilon_{ij}\mathbf{x}_{0j} + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i}.

Because the eigenvectors are [M_0]-orthogonal, we can remove the summations by left-multiplying by \mathbf{x}_{0i}^T:

\mathbf{x}_{0i}^T \epsilon_{ii}\lambda_{0i}[M_0]\mathbf{x}_{0i} + \mathbf{x}_{0i}^T[\delta K]\mathbf{x}_{0i} = \lambda_{0i}\mathbf{x}_{0i}^T[M_0]\epsilon_{ii}\mathbf{x}_{0i} + \lambda_{0i}\,\mathbf{x}_{0i}^T[\delta M]\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{x}_{0i}^T[M_0]\mathbf{x}_{0i}.

By use of equation (1) again:

\mathbf{x}_{0i}^T[K_0]\epsilon_{ii}\mathbf{x}_{0i} + \mathbf{x}_{0i}^T[\delta K]\mathbf{x}_{0i} = \lambda_{0i}\mathbf{x}_{0i}^T[M_0]\epsilon_{ii}\mathbf{x}_{0i} + \lambda_{0i}\,\mathbf{x}_{0i}^T[\delta M]\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{x}_{0i}^T[M_0]\mathbf{x}_{0i}. \qquad (6)

The two terms containing \epsilon_{ii} are equal, because left-multiplying (1) by \mathbf{x}_{0i}^T gives

\mathbf{x}_{0i}^T[K_0]\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0i}^T[M_0]\mathbf{x}_{0i}.

Canceling those terms in (6) leaves

\mathbf{x}_{0i}^T[\delta K]\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0i}^T[\delta M]\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{x}_{0i}^T[M_0]\mathbf{x}_{0i}.

Rearranging gives

\delta\lambda_i = \frac{\mathbf{x}_{0i}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}}{\mathbf{x}_{0i}^T[M_0]\mathbf{x}_{0i}}.

But by (2), this denominator is equal to 1. Thus

\delta\lambda_i = \mathbf{x}_{0i}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}.
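Continuing the numerical sketch from the example above (same hypothetical matrices and variables), this first-order formula can be checked against the exact perturbed eigenvalues:

```python
# First-order corrections: delta(lambda_i) = x0i^T (dK - lam0_i * dM) x0i.
dlam = np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i]
                 for i in range(len(lam0))])

print(lam0 + dlam)  # first-order approximations
print(lam_exact)    # exact values; the difference is O(delta^2)
```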

Then, by left-multiplying equation (5) by \mathbf{x}_{0k}^T for k \neq i, only the j = k terms of the sums survive by (2), leaving

\epsilon_{ik}\lambda_{0k} + \mathbf{x}_{0k}^T[\delta K]\mathbf{x}_{0i} = \lambda_{0i}\epsilon_{ik} + \lambda_{0i}\,\mathbf{x}_{0k}^T[\delta M]\mathbf{x}_{0i},

so that

\epsilon_{ik} = \frac{\mathbf{x}_{0k}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}}{\lambda_{0i} - \lambda_{0k}}, \qquad i \neq k.

Or, by changing the name of the indices,

\epsilon_{ij} = \frac{\mathbf{x}_{0j}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}}{\lambda_{0i} - \lambda_{0j}}, \qquad i \neq j.

To find \epsilon_{ii}, use the normalization of the perturbed eigenvector, \mathbf{x}_i^T[M]\mathbf{x}_i = 1. Expanding to first order with \mathbf{x}_i = \mathbf{x}_{0i} + \delta\mathbf{x}_i and [M] = [M_0] + [\delta M], and applying (2) and (4), gives 2\epsilon_{ii} + \mathbf{x}_{0i}^T[\delta M]\mathbf{x}_{0i} = 0, so

\epsilon_{ii} = -\frac{1}{2}\,\mathbf{x}_{0i}^T[\delta M]\mathbf{x}_{0i}.

Summary

\lambda_i = \lambda_{0i} + \mathbf{x}_{0i}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}

and

\mathbf{x}_i = \mathbf{x}_{0i}\left(1 - \frac{1}{2}\,\mathbf{x}_{0i}^T[\delta M]\mathbf{x}_{0i}\right) + \sum_{j=1 \atop j \neq i}^N \frac{\mathbf{x}_{0j}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}}{\lambda_{0i} - \lambda_{0j}}\,\mathbf{x}_{0j}.
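These summary formulas translate directly into code. A sketch continuing the same hypothetical example (it reuses dlam from the previous snippet and assumes the unperturbed eigenvalues are distinct, since the sum is singular when \lambda_{0i} = \lambda_{0j}):

```python
# First-order perturbed eigenvectors via equation (4) and the
# epsilon formulas above.
N = len(lam0)
X1 = np.empty_like(X0)
for i in range(N):
    xi = X0[:, i] * (1.0 - 0.5 * (X0[:, i] @ dM @ X0[:, i]))
    for j in range(N):
        if j != i:
            eps = (X0[:, j] @ (dK - lam0[i] * dM) @ X0[:, i]
                   / (lam0[i] - lam0[j]))
            xi = xi + eps * X0[:, j]
    X1[:, i] = xi

# Residual of the perturbed eigenproblem; each norm should be O(delta^2).
for i in range(N):
    r = (K0 + dK) @ X1[:, i] - (lam0[i] + dlam[i]) * ((M0 + dM) @ X1[:, i])
    print(np.linalg.norm(r))
```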

Results

This means it is possible to efficiently do a sensitivity analysis on \lambda_i as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric, so changing K_{(k\ell)} also changes K_{(\ell k)}; hence the (2 - \delta_k^\ell) factor.)

\frac{\partial \lambda_i}{\partial K_{(k\ell)}} = \frac{\partial}{\partial K_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}_{0i}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}\right) = x_{0i(k)}\,x_{0i(\ell)}\,(2 - \delta_k^\ell)

and

\frac{\partial \lambda_i}{\partial M_{(k\ell)}} = \frac{\partial}{\partial M_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}_{0i}^T([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}\right) = -\lambda_i\,x_{0i(k)}\,x_{0i(\ell)}\,(2 - \delta_k^\ell).

Similarly,

\frac{\partial \mathbf{x}_i}{\partial K_{(k\ell)}} = \sum_{j=1 \atop j \neq i}^N \frac{x_{0j(k)}\,x_{0i(\ell)}\,(2 - \delta_k^\ell)}{\lambda_{0i} - \lambda_{0j}}\,\mathbf{x}_{0j}

and

\frac{\partial \mathbf{x}_i}{\partial M_{(k\ell)}} = -\mathbf{x}_{0i}\,\frac{x_{0i(k)}\,x_{0i(\ell)}}{2}\,(2 - \delta_k^\ell) - \sum_{j=1 \atop j \neq i}^N \frac{\lambda_{0i}\,x_{0j(k)}\,x_{0i(\ell)}}{\lambda_{0i} - \lambda_{0j}}\,\mathbf{x}_{0j}\,(2 - \delta_k^\ell).
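As a sanity check on the first of these derivatives, it can be compared against a finite difference at the unperturbed system, again with the hypothetical matrices from the example; note that K_{(k\ell)} and K_{(\ell k)} must be perturbed together, as the symmetry argument requires.

```python
# Finite-difference check of d(lambda_i)/dK_(kl).
i, k, l = 0, 0, 1  # eigenvalue index and matrix entry (arbitrary choices)
h = 1e-6

analytic = X0[k, i] * X0[l, i] * (2 - (1 if k == l else 0))

Kp = K0.copy()
Kp[k, l] += h
if k != l:
    Kp[l, k] += h  # the symmetric entry changes by the same amount

lam_p, _ = eigh(Kp, M0)
numeric = (lam_p[i] - lam0[i]) / h
print(analytic, numeric)  # agreement up to O(h)
```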

