Matrix differential equation

A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself to those of its derivatives of various orders. A matrix differential equation contains more than one function stacked into vector form, with a matrix relating the functions to their derivatives.

For example, a simple matrix ordinary differential equation is

x'(t) = Ax(t),

where x(t) is an n×1 vector of functions of an underlying variable t, x'(t) is the vector of first derivatives of these functions, and A is an n×n matrix.

Stability and steady state of the matrix system

The matrix equation x'(t) = Ax(t) + b with n×1 parameter vector b is stable if and only if all eigenvalues of the matrix A have a negative real part. The steady state x* to which it converges if stable is found by setting x'(t) = 0, yielding x* = −A^{-1}b, assuming A is invertible. Thus the original equation can be written in homogeneous form in terms of deviations from the steady state: x'(t) = A[x(t) − x*]. An equivalent way of expressing this is that x* is a particular solution of the inhomogeneous equation, while every solution has the form x_h + x*, with x_h a solution of the homogeneous equation (b = 0).
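For concreteness, the stability test and the steady state can be computed numerically. The following sketch uses numpy on a hypothetical 2×2 system (the particular A and b are chosen only for illustration):

    import numpy as np

    # Hypothetical 2x2 system x'(t) = A x(t) + b, chosen only for illustration.
    A = np.array([[-2.0, 1.0],
                  [ 0.0, -3.0]])
    b = np.array([4.0, 6.0])

    eigenvalues = np.linalg.eigvals(A)
    stable = bool(np.all(eigenvalues.real < 0))  # stable iff every eigenvalue has negative real part
    x_star = -np.linalg.solve(A, b)              # steady state x* = -A^{-1} b (A is invertible here)

    print(eigenvalues, stable)  # [-2. -3.] True
    print(x_star)               # [3. 2.]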

Solution in matrix form

Matrix exponentials can be used to express the solution of x'(t) = A[x(t) − x*] as

x(t) = x* + e^{At}[x(0) − x*].
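In code, this formula can be evaluated with the matrix exponential routine scipy.linalg.expm (a sketch, reusing the hypothetical A, b, and x* from the snippet above):

    import numpy as np
    from scipy.linalg import expm  # matrix exponential

    # x(t) = x* + e^{At} (x(0) - x*) for the hypothetical stable system above.
    A = np.array([[-2.0, 1.0],
                  [ 0.0, -3.0]])
    x_star = np.array([3.0, 2.0])  # steady state found earlier
    x0 = np.array([1.0, 1.0])      # an arbitrary initial condition

    for t in (0.0, 0.5, 1.0, 5.0):
        x_t = x_star + expm(A * t) @ (x0 - x_star)
        print(t, x_t)              # approaches x_star as t grows, since A is stable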

Putzer's algorithm[1] for computing e^{At}

Given a matrix A with eigenvalues \lambda_1, \lambda_2, \dots, \lambda_n (listed with multiplicity, in any order), then

e^{At} = \sum_{j=0}^{n-1}r_{j+1}{\left(t\right)}P_{j}

where

P_0 = I
P_j = \prod_{k=1}^{j}\left(A-\lambda_k I\right) = P_{j-1}\left(A-\lambda_j I\right),\quad j=1,2,\dots,n-1


\dot{r}_1 = \lambda_1 r_1
r_1{\left(0\right)}=1


\dot{r}_{j} = \lambda_j r_j + r_{j-1}, j=2,3,\dots,n
r_j{\left(0\right)}=0, j=2,3,\dots,n

The equations for r_i{\left(t\right)} are simple first-order nonhomogeneous ODEs that can be solved one after another; in particular, r_1(t) = e^{\lambda_1 t}.

Notice that the algorithm does not require the matrix A to be diagonalizable, and it avoids the complexity of the Jordan canonical form when it is not needed.
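The algorithm translates directly into code. The sketch below is a minimal Python implementation (not taken from the cited paper): it builds the matrices P_j by the recursion above and integrates the triangular system of ODEs for the r_j numerically, with the eigenvalues supplied by the caller, listed with multiplicity:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.linalg import expm  # used only to cross-check the result

    def putzer_expm(A, t, eigenvalues):
        """Compute e^{At} by Putzer's algorithm (sketch; assumes t > 0)."""
        n = A.shape[0]
        lam = np.asarray(eigenvalues, dtype=complex)

        # P_0 = I and P_j = P_{j-1} (A - lambda_j I) for j = 1, ..., n-1
        P = [np.eye(n, dtype=complex)]
        for j in range(1, n):
            P.append(P[-1] @ (A - lam[j - 1] * np.eye(n)))

        # Triangular ODE system: r_1' = lambda_1 r_1 with r_1(0) = 1;
        # r_j' = lambda_j r_j + r_{j-1} with r_j(0) = 0 for j >= 2.
        def rhs(s, r):
            dr = lam * r
            dr[1:] += r[:-1]
            return dr

        r0 = np.zeros(n, dtype=complex)
        r0[0] = 1.0
        r = solve_ivp(rhs, (0.0, t), r0, rtol=1e-10, atol=1e-12).y[:, -1]

        return sum(r[j] * P[j] for j in range(n))

    # Cross-check against scipy's expm on the matrix solved later in this
    # article; its eigenvalues are 1 and -5.
    A = np.array([[3.0, -4.0], [4.0, -7.0]])
    print(np.max(np.abs(putzer_expm(A, 0.5, np.linalg.eigvals(A)) - expm(A * 0.5))))

For small n the r_j can also be written in closed form (for instance r_1(t) = e^{\lambda_1 t}); the numerical integration above merely keeps the sketch generic.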

Deconstructed example of a matrix ordinary differential equation

A first-order homogeneous matrix ordinary differential equation in two functions x(t) and y(t), when taken out of matrix form, looks as follows:

\frac{dx}{dt}=a_1x+b_1y,\quad\frac{dy}{dt}=a_2x+b_2y

where a_1, a_2, b_1, and b_2 may be any arbitrary scalars.

Higher-order matrix ODEs may possess a much more complicated form.

Solving deconstructed matrix ordinary differential equations

The process of solving the above equations and finding the required functions of this particular order and form consists of three main steps, briefly described below:

  • Finding the eigenvalues
  • Finding the eigenvectors
  • Finding the needed functions

The third and final step in solving these sorts of ordinary differential equations is usually carried out by substituting the values calculated in the two previous steps into a specialized general-form equation, given later in the article.

Solved example of a matrix ODE

To show how a matrix ODE may actually be solved according to the three steps above, using simple matrices in the process, let us find the functions y and z, both in terms of the single underlying variable x, in the following first-order linear differential equation:

\frac{dy}{dx}=3y-4z,\quad\frac{dz}{dx}=4y-7z.

To solve this ordinary differential equation, at some point of the solving process we will need an initial value, a starting point, which in this particular case we pick to be y(0) = z(0) = 1.

First step

The first step, already mentioned above, is finding the eigenvalues, which is not a difficult process. Eigenvalues and eigenvectors are useful in numerous branches of mathematics, including applied mathematics and engineering, mechanics, mathematical physics, mathematical economics, and linear algebra.

Therefore, the process consists of the following:

\begin{pmatrix} y'\\z' \end{pmatrix} = \begin{pmatrix} 3 & -4\\4 & -7 \end{pmatrix}\begin{pmatrix} y\\z \end{pmatrix}.

The derivative notation y′ seen in one of the vectors above is known as Lagrange's notation, first introduced by Joseph-Louis Lagrange. It is equivalent to the notation dy/dx used in the previous equation, known as Leibniz's notation, honouring Gottfried Leibniz.

Once the coefficients of the two variables have been written in the matrix form shown above, we may evaluate the eigenvalues. To do so, we find the determinant of the matrix formed when the identity matrix I_n, multiplied by some constant λ, is subtracted from the coefficient matrix:

\det\left(\begin{bmatrix} 3 & -4\\4 & -7 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0\\0 & 1 \end{bmatrix}\right).

Applying further simplification and basic rules of matrix addition, we obtain:

\det\begin{bmatrix} 3-\lambda & -4\\4 & -7-\lambda \end{bmatrix}.

Applying the rule for the determinant of a 2×2 matrix, we obtain the following quadratic equation:

\det\begin{bmatrix} 3-\lambda & -4\\4 & -7-\lambda \end{bmatrix} = 0
-21 - 3\lambda + 7\lambda + \lambda^2 + 16 = 0

which may be reduced further to obtain:

\lambda^2 + 4\lambda - 5 = 0.

Now, finding the two roots \lambda_1 and \lambda_2 of the given quadratic equation by the factorization method, we get:

\lambda^2 + 5\lambda - \lambda - 5 = 0
\lambda(\lambda + 5) - 1(\lambda + 5) = 0
(\lambda - 1)(\lambda + 5) = 0
\lambda = 1, -5.

The values \lambda_1 = 1 and \lambda_2 = -5 calculated above are the required eigenvalues. Once these two values are found, we may proceed to the second step of the solution process; the calculated eigenvalues will be used a little later in the final solution. For some matrix ODEs the eigenvalues may be complex, in which case the following step of the solving process, as well as the final form of the solution, changes significantly.
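The hand computation can be cross-checked numerically (a sketch using numpy; the ordering of the returned eigenvalues is not guaranteed):

    import numpy as np

    # Coefficient matrix of the system dy/dx = 3y - 4z, dz/dx = 4y - 7z.
    A = np.array([[3.0, -4.0],
                  [4.0, -7.0]])
    print(np.linalg.eigvals(A))  # approximately 1 and -5, in some order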

Second step

As mentioned above, in a simple description this step involves finding the eigenvectors using the information originally given to us.

Each eigenvalue we calculated has a corresponding eigenvector. For the first eigenvalue, \lambda_1 = 1, we have the following:

\begin{pmatrix} 3 & -4\\4 & -7 \end{pmatrix}\begin{pmatrix} \alpha\\\beta \end{pmatrix} = 1\begin{pmatrix} \alpha\\\beta \end{pmatrix}.

Simplifying the above expression by applying basic matrix multiplication rules, we have:

3\alpha - 4\beta = \alpha \;\Rightarrow\; 2\alpha = 4\beta \;\Rightarrow\; \alpha = 2\beta
4\alpha - 7\beta = \beta \;\Rightarrow\; 4\alpha = 8\beta \;\Rightarrow\; \alpha = 2\beta.

All of these calculations have been done only to obtain the last expression, \alpha = 2\beta. Now taking some convenient value for either \alpha or \beta (any nonzero choice works, since an eigenvector is determined only up to scale) and substituting it into \alpha = 2\beta produces a simple vector, which is the required eigenvector for this particular eigenvalue. In our case we pick \alpha = 2, which in turn determines \beta = 1, and, using the standard vector notation, our vector looks like this:

\mathbf{\hat{v}}_1 = \begin{pmatrix} 2\\1 \end{pmatrix}.

Performing the same operation with the second eigenvalue we calculated, \lambda_2 = -5, we obtain our second eigenvector. The working for this vector is omitted, but the final result is as follows:

\mathbf{\hat{v}}_2 = \begin{pmatrix} 1\\2 \end{pmatrix}.
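Both eigenvectors can likewise be cross-checked numerically. In the sketch below, numpy returns unit-length eigenvectors, so the columns come out as scalar multiples of (2, 1) and (1, 2) rather than as those vectors themselves:

    import numpy as np

    A = np.array([[3.0, -4.0],
                  [4.0, -7.0]])
    w, V = np.linalg.eig(A)  # eigenvalues and eigenvectors (as columns)
    print(w)                 # approximately 1 and -5
    print(V)                 # columns proportional to (2, 1) and (1, 2)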

Once both of the needed vectors have been found, we may start the third and last step. Keep in mind that the eigenvalues and eigenvectors determined above will all be substituted into a specialized equation, shown shortly.

Third (final) step

The purpose of this final step is to find the actual functions 'hidden' behind the derivatives given to us originally. There are two functions, because our differential equations deal with two variables.

The equation, which involves all the pieces of information we have previously found, has the following form (here A and B are scalar constants, to be determined from the initial conditions):

\begin{pmatrix} y\\z \end{pmatrix} = Ae^{(\lambda_1)x}\hat{v}_1 + Be^{(\lambda_2)x}\hat{v}_2.

Substituting the values of the eigenvalues and eigenvectors yields:

\begin{pmatrix} y\\z \end{pmatrix} = Ae^{x}\begin{pmatrix} 2\\1 \end{pmatrix} + Be^{-5x}\begin{pmatrix} 1\\2 \end{pmatrix}.

Applying further simplification, we have:

\begin{pmatrix} 2 & 1\\1 & 2 \end{pmatrix}\begin{pmatrix} Ae^{x}\\Be^{-5x} \end{pmatrix} = \begin{pmatrix} y\\z \end{pmatrix}.

Simplifying further, and writing the equations for the functions y and z separately:

y = 2Ae^{x} + Be^{-5x}
z = Ae^{x} + 2Be^{-5x}.

The above equations are in fact the functions we needed to find, but in their general form. To pin down their exact forms, we look back at the information given to us, the so-called initial value problem: we were given y(0) = z(0) = 1, the starting point for our ordinary differential equation. Applying this condition allows us to find the constants A and B: when x = 0, both y and z equal 1. Thus we may construct the following system of linear equations:

1 = 2A + B
1 = A + 2B.
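This 2×2 system can be solved by hand or, as a quick sketch, with a single linear solve:

    import numpy as np

    # Constants A and B from 1 = 2A + B and 1 = A + 2B.
    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    print(np.linalg.solve(M, np.array([1.0, 1.0])))  # [0.333... 0.333...] -> A = B = 1/3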

Solving these equations, we find that both constants A and B equal 1/3. Substituting these values into the general form of the two functions gives their exact forms:

y = \frac{2}{3}e^{x} + \frac{1}{3}e^{-5x}
z = \frac{1}{3}e^{x} + \frac{2}{3}e^{-5x}.

These are the final forms of the two functions we were required to find.
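As a final check, the closed-form solution can be verified symbolically. The sketch below (using sympy) confirms that the two functions satisfy both differential equations and the initial condition y(0) = z(0) = 1:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Rational(2, 3) * sp.exp(x) + sp.Rational(1, 3) * sp.exp(-5 * x)
    z = sp.Rational(1, 3) * sp.exp(x) + sp.Rational(2, 3) * sp.exp(-5 * x)

    print(sp.simplify(sp.diff(y, x) - (3 * y - 4 * z)))  # 0
    print(sp.simplify(sp.diff(z, x) - (4 * y - 7 * z)))  # 0
    print(y.subs(x, 0), z.subs(x, 0))                    # 1 1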

References

  1. ^ Putzer, E. J. (1966). "Avoiding the Jordan Canonical Form in the Discussion of Linear Systems with Constant Coefficients". The American Mathematical Monthly 73 (1): 2–7. Available on JSTOR.
