# System of linear equations


In mathematics, a system of linear equations (or linear system) is a collection of linear equations involving the same set of variables. For example,

$\begin{align} 3x + 2y - z &= 1 \\ 2x - 2y + 4z &= -2 \\ -x + \tfrac{1}{2}y - z &= 0 \end{align}$

is a system of three equations in the three variables $x, y, z$. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by

$x = 1, \quad y = -2, \quad z = -2,$

since it makes all three equations valid. [Linear algebra, as discussed in this article, is a very well-established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.]

The theory of linear systems is a branch of linear algebra, a subject which is fundamental to modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and such methods play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.

Elementary example

The simplest kind of linear system involves two equations and two variables:

$\begin{align} 2x + 3y &= 6 \\ 4x + 9y &= 15. \end{align}$

One method for solving such a system is as follows. First, solve the top equation for "x" in terms of "y":

$x = 3 - \frac{3}{2}y.$

Now substitute this expression for "x" into the bottom equation:

$4\left( 3 - \frac{3}{2}y \right) + 9y = 15.$

This results in a single equation involving only the variable "y". Solving gives "y" = 1, and substituting this back into the equation for "x" yields "x" = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra).
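As a sanity check, a small system like this can also be handed to a numerical solver. The following is a minimal sketch assuming NumPy is available; the library call and variable names are illustrative, not part of the original text.

```python
import numpy as np

# Coefficients of the system
#   2x + 3y = 6
#   4x + 9y = 15
A = np.array([[2.0, 3.0],
              [4.0, 9.0]])
b = np.array([6.0, 15.0])

# np.linalg.solve factors A and back-substitutes internally
x, y = np.linalg.solve(A, b)
print(x, y)  # 1.5 1.0, matching x = 3/2, y = 1
```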

General form

A general system of "m" linear equations with "n" unknowns can be written as

$\begin{align} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m. \end{align}$

Here $x_1, x_2, \dots, x_n$ are the unknowns, $a_{11}, a_{12}, \dots, a_{mn}$ are the coefficients of the system, and $b_1, b_2, \dots, b_m$ are the constant terms.

Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.

Vector equation

One extremely helpful view is that each unknown is a weight for a column vector in a linear combination:

$x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.$

This allows all the language and theory of "vector spaces" (or more generally, "modules") to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their "span", and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a "basis" of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its "dimension") cannot be larger than "m" or "n", but it can be smaller. This is important because if the basis contains "m" vectors, the span is all of "m"-dimensional space, so a solution is guaranteed regardless of the right-hand side; otherwise some right-hand vectors fall outside the span and a solution is not guaranteed.
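To make the linear-combination view concrete, the sketch below (assuming NumPy; the array names are illustrative) verifies that the right-hand vector of the introductory example really is the stated combination of the column vectors:

```python
import numpy as np

# Columns of the introductory 3x3 system, viewed as vectors
a1 = np.array([3.0, 2.0, -1.0])
a2 = np.array([2.0, -2.0, 0.5])
a3 = np.array([-1.0, 4.0, -1.0])
b  = np.array([1.0, -2.0, 0.0])

# The solution (x, y, z) = (1, -2, -2) gives the weights of the combination
x, y, z = 1.0, -2.0, -2.0
combination = x * a1 + y * a2 + z * a3
print(np.allclose(combination, b))  # True: b lies in the span of a1, a2, a3
```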

Matrix equation

The vector equation is equivalent to a matrix equation of the form

$A\mathbf{x} = \mathbf{b},$

where "A" is an "m"×"n" matrix, x is a column vector with "n" entries, and b is a column vector with "m" entries. The number of vectors in a basis for the span is now expressed as the "rank" of the matrix.
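One way to test in code whether b lies in the span of the columns of "A" is to compare the rank of "A" with the rank of the augmented matrix ["A" | b] (the Rouché–Capelli criterion). This sketch assumes NumPy and reuses the introductory example:

```python
import numpy as np

A = np.array([[3.0, 2.0, -1.0],
              [2.0, -2.0, 4.0],
              [-1.0, 0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

# The system is solvable exactly when b adds nothing to the span of the columns
print(rank_A == rank_Ab)      # True: consistent
print(np.linalg.solve(A, b))  # [ 1. -2. -2.]
```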

Solution set

A solution of a linear system is an assignment of values to the variables "x"1, "x"2, ..., "xn" such that each of the equations is satisfied. The set of all possible solutions is called the solution set.

A linear system may behave in any one of three possible ways:
# The system has infinitely many solutions.
# The system has a single unique solution.
# The system has no solutions.

Geometric interpretation

For a system involving two variables ("x" and "y"), each linear equation determines a line on the "xy"-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set.

For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set.

For "n" variables, each linear equations determines a hyperplane in "n"-dimensional space. The solution set is the intersection of these hyperplanes, which may be a flat of any dimension.

General behavior

In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns:
# Usually, a system with fewer equations than unknowns has infinitely many solutions.
# Usually, a system with the same number of equations and unknowns has a single unique solution.
# Usually, a system with more equations than unknowns has no solution.

In the first case, the dimension of the solution set is usually equal to "n" − "m", where "n" is the number of variables and "m" is the number of equations.

This trichotomy can be illustrated in the case of two variables by three small systems: a single equation, a pair of crossing lines, and three lines with no common point. The first system has infinitely many solutions, namely all of the points on its line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point.

Keep in mind that the examples above show only the most common case. It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). In general, a system of linear equations may behave differently than expected if the equations are linearly dependent, or if two or more of the equations are inconsistent.

Properties

Consistency

The equations of a linear system are consistent if they possess a common solution, and inconsistent otherwise. When the equations are inconsistent, it is possible to derive a contradiction from the equations, such as a proof that 0 = 1.

For example, the equations

$3x + 2y = 6 \quad\text{and}\quad 3x + 2y = 12$

are inconsistent. Subtracting the first equation from the second yields the equation 0 = 6, which is a contradiction. The graphs of these equations on the "xy"-plane are a pair of parallel lines.

It is possible for three linear equations to be inconsistent, even though any two of the equations are consistent together. For example, the equations

$\begin{align} x + y &= 1 \\ 2x + y &= 1 \\ 3x + 2y &= 3 \end{align}$

are inconsistent. Adding the first two equations together gives 3"x" + 2"y" = 2, which can be subtracted from the third equation to yield 0 = 1. Note that any two of these equations have a common solution. The same phenomenon can occur for any number of equations.

In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.

Independence

The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence.

For example, the equations

$3x + 2y = 6 \quad\text{and}\quad 6x + 4y = 12$

are not independent. The second equation is just the first equation multiplied by two (and the first equation is the second equation divided by two). The graphs of these two equations are the same.

For a more complicated example, the equations

$\begin{align} x - 2y &= -1 \\ 3x + 5y &= 8 \\ 4x + 3y &= 7 \end{align}$

are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.

Equivalence

Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice-versa. Equivalent systems convey precisely the same information about the values of the variables. In particular, two linear systems are equivalent if and only if they have the same solution set.

Solving a linear system

There are several algorithms for solving a system of linear equations.

Describing the solution

It can be difficult to describe the solution set to a linear system with infinitely many solutions. Typically, some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables.

For example, consider the following system:

$\begin{align} x + 3y - 2z &= 5 \\ 3x + 5y + 6z &= 7. \end{align}$

The solution set to this system can be described by the following equations:

$x = -7z - 1 \quad\text{and}\quad y = 3z + 2.$

Here "z" is the free variable, while "x" and "y" are dependent on "z". Any point in the solution set can be obtained by first choosing a value for "z", and then computing the corresponding values for "x" and "y".
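Computer algebra systems can produce such parametrized descriptions directly. A small sketch, assuming SymPy (not mentioned in the original), for the system above:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The two equations above; linsolve leaves the free variable (here z)
# in place as a parameter of the solution set
system = [sp.Eq(x + 3*y - 2*z, 5),
          sp.Eq(3*x + 5*y + 6*z, 7)]
solution = sp.linsolve(system, x, y, z)
print(solution)  # {(-7*z - 1, 3*z + 2, z)}
```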

Each free variable gives the solution space one degree of freedom, and the number of degrees of freedom equals the dimension of the solution set. For example, the solution set for the above system is a line, since a point in the solution set can be chosen by specifying the value of the single parameter "z".

Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:

$y = -\frac{3}{7}x + \frac{11}{7} \quad\text{and}\quad z = -\frac{1}{7}x - \frac{1}{7}.$

Here "x" is the free variable, and "y" and "z" are dependent.

Elimination of variables

The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:
# In the first equation, solve for one of the variables in terms of the others.
# Plug this expression into the remaining equations. This yields a system of equations with one fewer equation and one fewer unknown.
# Continue until you have reduced the system to a single linear equation.
# Solve this equation, and then back-substitute until the entire solution is found.

For example, consider the following system:

$\begin{align} x + 3y - 2z &= 5 \\ 3x + 5y + 6z &= 7 \\ 2x + 4y + 3z &= 8. \end{align}$

Solving the first equation for "x" gives "x" = 5 + 2"z" − 3"y", and plugging this into the second and third equations yields

$\begin{align} -4y + 12z &= -8 \\ -2y + 7z &= -2. \end{align}$

Solving the first of these equations for "y" yields "y" = 2 + 3"z", and plugging this into the second equation yields "z" = 2. We now have:

$\begin{align} x &= 5 + 2z - 3y \\ y &= 2 + 3z \\ z &= 2. \end{align}$

Substituting "z" = 2 into the second equation gives "y" = 8, and substituting "z" = 2 and "y" = 8 into the first equation yields "x" = −15. Therefore, the solution set is the single point ("x", "y", "z") = (−15, 8, 2).
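The elimination procedure is mechanical enough to write down in code. The following is an illustrative sketch, assuming NumPy, of forward elimination followed by back-substitution; it omits pivoting, so it is only suitable for well-behaved examples like this one.

```python
import numpy as np

def eliminate(A, b):
    """Solve A x = b by forward elimination and back-substitution.
    A plain sketch without pivoting, for illustration only."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: use equation k to remove x_k from later equations
    for k in range(n):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back-substitution: solve for the last unknown first, then work upward
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1, 3, -2], [3, 5, 6], [2, 4, 3]])
b = np.array([5, 7, 8])
print(eliminate(A, b))  # [-15.   8.   2.]
```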

Row reduction

In row reduction, the linear system is represented as an augmented matrix:

$\left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 3 & 5 & 6 & 7 \\ 2 & 4 & 3 & 8 \end{array}\right].$

This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:

# Type 1: Swap the positions of two rows.
# Type 2: Multiply a row by a nonzero scalar.
# Type 3: Add to one row a scalar multiple of another.

Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.

There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss-Jordan elimination. Applied to the matrix above, row reduction terminates with

$\left[\begin{array}{rrr|r} 1 & 0 & 0 & -15 \\ 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & 2 \end{array}\right].$

The last matrix is in reduced row echelon form, and represents the system "x" = −15, "y" = 8, "z" = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
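Row reduction is also available in computer algebra systems. A brief sketch assuming SymPy, whose Matrix.rref method returns the reduced row echelon form together with the pivot columns:

```python
import sympy as sp

# Augmented matrix [A | b] for the running example
M = sp.Matrix([[1, 3, -2, 5],
               [3, 5, 6, 7],
               [2, 4, 3, 8]])

rref_matrix, pivot_columns = M.rref()
print(rref_matrix)    # Matrix([[1, 0, 0, -15], [0, 1, 0, 8], [0, 0, 1, 2]])
print(pivot_columns)  # (0, 1, 2)
```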

Cramer's rule

Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system

$\begin{align} x + 3y - 2z &= 5 \\ 3x + 5y + 6z &= 7 \\ 2x + 4y + 3z &= 8 \end{align}$

is given by

$x = \frac{\begin{vmatrix} 5 & 3 & -2 \\ 7 & 5 & 6 \\ 8 & 4 & 3 \end{vmatrix}}{\begin{vmatrix} 1 & 3 & -2 \\ 3 & 5 & 6 \\ 2 & 4 & 3 \end{vmatrix}}, \quad y = \frac{\begin{vmatrix} 1 & 5 & -2 \\ 3 & 7 & 6 \\ 2 & 8 & 3 \end{vmatrix}}{\begin{vmatrix} 1 & 3 & -2 \\ 3 & 5 & 6 \\ 2 & 4 & 3 \end{vmatrix}}, \quad z = \frac{\begin{vmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 2 & 4 & 8 \end{vmatrix}}{\begin{vmatrix} 1 & 3 & -2 \\ 3 & 5 & 6 \\ 2 & 4 & 3 \end{vmatrix}}.$

For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.
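The rule translates directly into code. A sketch assuming NumPy, applied to the system above:

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0],
              [2.0, 4.0, 3.0]])
b = np.array([5.0, 7.0, 8.0])

det_A = np.linalg.det(A)
solution = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b  # replace column i with the constant terms
    solution.append(np.linalg.det(Ai) / det_A)
print(solution)  # approximately [-15.0, 8.0, 2.0]
```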

Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.

Other methods

While systems of three or four equations can be readily solved by hand, computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as "pivoting". Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix "A". This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix "A" but different vectors b.
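A sketch of this factor-once, solve-many pattern, assuming SciPy (the function names below are SciPy's, not from the original text):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0],
              [2.0, 4.0, 3.0]])

# Factor once (with partial pivoting), then reuse for many right-hand sides
lu, piv = lu_factor(A)
print(lu_solve((lu, piv), np.array([5.0, 7.0, 8.0])))  # [-15.   8.   2.]
print(lu_solve((lu, piv), np.array([1.0, 0.0, 0.0])))  # another b, no refactoring
```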

If the matrix "A" has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.

A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods.
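The Jacobi method is one of the simplest iterative methods and illustrates the idea. The sketch below, assuming NumPy and a small hypothetical diagonally dominant system, repeatedly refines an initial guess of zero:

```python
import numpy as np

def jacobi(A, b, iterations=50):
    """A few steps of the Jacobi iteration, one simple iterative method.
    Converges for diagonally dominant matrices like the one below."""
    x = np.zeros_like(b)
    D = np.diag(A)          # diagonal entries of A
    R = A - np.diagflat(D)  # off-diagonal part of A
    for _ in range(iterations):
        x = (b - R @ x) / D  # update every component at once
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b))  # approaches the true solution [1/11, 7/11]
```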

Homogeneous systems

A system of linear equations is homogeneous if all of the constant terms are zero:

$\begin{align} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0. \end{align}$

A homogeneous system is equivalent to a matrix equation of the form

$A\mathbf{x} = \mathbf{0},$

where "A" is an "m" × "n" matrix, x is a column vector with "n" entries, and 0 is the zero vector with "m" entries.

Solution set

Every homogeneous system has at least one solution, known as the zero solution (or trivial solution), which is obtained by assigning the value of zero to each of the variables. The solution set has the following additional properties:
# If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system.
# If u is a vector representing a solution to a homogeneous system, and "r" is any scalar, then "r"u is also a solution to the system.

These are exactly the properties required for the solution set to be a linear subspace of $\mathbf{R}^n$. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix "A".
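The null space can be computed numerically. A sketch assuming SciPy, applied to a homogeneous version of the two-equation example from the section on describing the solution:

```python
import numpy as np
from scipy.linalg import null_space

# Homogeneous version of the earlier two-equation, three-unknown example
A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0]])

N = null_space(A)  # orthonormal basis for all solutions of A x = 0
print(N.shape)     # (3, 1): a one-dimensional solution space, i.e. a line
print(np.allclose(A @ N, 0))  # True: every basis vector solves the system
```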

Relation to nonhomogeneous systems

There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system:

$A\mathbf{x} = \mathbf{b} \quad\text{and}\quad A\mathbf{x} = \mathbf{0}.$

Specifically, if p is any specific solution to the linear system "A"x = b, then the entire solution set can be described as

$\left\{ \mathbf{p} + \mathbf{v} : \mathbf{v} \text{ is any solution to } A\mathbf{x} = \mathbf{0} \right\}.$

Geometrically, this says that the solution set for "A"x = b is a translation of the solution set for "A"x = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p.

This reasoning only applies if the system "A"x = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation "A".
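This decomposition into a particular solution plus the null space can be demonstrated numerically. A sketch assuming SciPy; lstsq supplies one particular solution, and the loop bounds are illustrative:

```python
import numpy as np
from scipy.linalg import null_space, lstsq

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0]])
b = np.array([5.0, 7.0])

p = lstsq(A, b)[0]  # one particular solution of A x = b
N = null_space(A)   # basis for the solutions of A x = 0

# Every p + t * N[:, 0] solves the nonhomogeneous system
for t in (0.0, 1.0, -2.5):
    x = p + t * N[:, 0]
    print(np.allclose(A @ x, b))  # True each time
```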

See also

* LAPACK (the free standard package to solve linear equations numerically; available in Fortran, C, C++)
* Row reduction
* Simultaneous equations
* Arrangement of hyperplanes
* Linear least squares


References

* W. Gilbert Strang, [http://www.youtube.com/watch?v=gVMRuLH6FdQ Lec 1 | 18.06 Linear Algebra, Spring 2005], MIT.
* [http://matri-tri-ca.narod.ru/en.slu.html Online solver].

