Permutation matrix

In mathematics, in matrix theory, a permutation matrix is a square (0,1)-matrix that has exactly one entry 1 in each row and each column and 0's elsewhere. Each such matrix represents a specific permutation of m elements and, when used to multiply another matrix, permutes the rows or columns of that matrix accordingly.

Definition

Given a permutation π of m elements,

:\pi : \{1, \ldots, m\} \to \{1, \ldots, m\},

given in two-line form by

:\begin{pmatrix} 1 & 2 & \cdots & m \\ \pi(1) & \pi(2) & \cdots & \pi(m) \end{pmatrix},

its permutation matrix is the m × m matrix P_\pi whose entries are all 0 except that, in each row i, the entry in column \pi(i) equals 1. We may write

:P_\pi = \begin{bmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(m)} \end{bmatrix},

where \mathbf{e}_j denotes a row vector of length m with 1 in the jth position and 0 in every other position.
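As a concrete illustration, the following Python sketch (using NumPy; the helper name permutation_matrix is ours, not standard) builds P_π by stacking the rows e_{π(1)}, ..., e_{π(m)} of the identity matrix, with π given in one-line form.

    import numpy as np

    def permutation_matrix(one_line):
        """P_pi whose i-th row is e_{pi(i)}; pi is given in 1-based one-line form."""
        m = len(one_line)
        return np.eye(m, dtype=int)[[p - 1 for p in one_line], :]

    # Example: pi(1) = 2, pi(2) = 3, pi(3) = 1, i.e. one-line form (2, 3, 1).
    print(permutation_matrix([2, 3, 1]))
    # [[0 1 0]
    #  [0 0 1]
    #  [1 0 0]]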

Properties

Given two permutations π and σ of m elements and the corresponding permutation matrices P_\pi and P_\sigma,

:P_\sigma P_\pi = P_{\pi \circ \sigma},

where \pi \circ \sigma denotes the composition that applies σ first and then π. (The order of the factors reflects the row-vector convention used in the definition above.)

As permutation matrices are orthogonal matrices (i.e., P_\pi P_\pi^{T} = I), the inverse matrix exists and can be written as

:P_\pi^{-1} = P_{\pi^{-1}} = P_\pi^{T}.
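A quick NumPy check of both identities, with a small hypothetical helper mirroring the definition above (the composition convention is the one just stated: σ applied first, then π):

    import numpy as np

    def permutation_matrix(one_line):
        # P_pi whose i-th row is e_{pi(i)} (1-based one-line form).
        return np.eye(len(one_line), dtype=int)[[p - 1 for p in one_line]]

    def compose(pi, sigma):
        # One-line form of pi o sigma, i.e. (pi o sigma)(i) = pi(sigma(i)).
        return [pi[s - 1] for s in sigma]

    pi, sigma = [2, 1, 3], [2, 3, 1]          # a transposition and a 3-cycle
    P_pi, P_sigma = permutation_matrix(pi), permutation_matrix(sigma)

    # Composition: P_sigma P_pi = P_{pi o sigma}
    assert np.array_equal(P_sigma @ P_pi, permutation_matrix(compose(pi, sigma)))

    # Orthogonality: the inverse of a permutation matrix is its transpose.
    assert np.array_equal(P_pi @ P_pi.T, np.eye(3, dtype=int))
    assert np.array_equal(np.round(np.linalg.inv(P_pi)).astype(int), P_pi.T)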

Multiplying P_\pi times a column vector g will permute the rows of the vector:

:P_\pi \mathbf{g} = \begin{bmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(n)} \end{bmatrix} \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{bmatrix} = \begin{bmatrix} g_{\pi(1)} \\ g_{\pi(2)} \\ \vdots \\ g_{\pi(n)} \end{bmatrix}.

Multiplying a row vector h times P_\pi will permute the columns of the vector:

:\mathbf{h} P_\pi = \begin{bmatrix} h_1 & h_2 & \cdots & h_n \end{bmatrix} \begin{bmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(n)} \end{bmatrix} = \begin{bmatrix} h_{\pi^{-1}(1)} & h_{\pi^{-1}(2)} & \cdots & h_{\pi^{-1}(n)} \end{bmatrix},

since entry h_i is sent to position \pi(i) of the product.
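The effect on vectors can be checked the same way (same hypothetical helper as above); note the inverse permutation appearing in the row-vector case:

    import numpy as np

    def permutation_matrix(one_line):
        return np.eye(len(one_line), dtype=int)[[p - 1 for p in one_line]]

    pi = [2, 4, 1, 3]                  # pi(1)=2, pi(2)=4, pi(3)=1, pi(4)=3
    P = permutation_matrix(pi)
    g = np.array([10, 20, 30, 40])
    h = np.array([10, 20, 30, 40])

    print(P @ g)                       # [20 40 10 30]: entry i is g_{pi(i)}
    print(h @ P)                       # [30 10 40 20]: entry k is h_{pi^{-1}(k)}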

Notes

Let "Sn" denote the symmetric group, or group of permutations, on {1,2,...,"n"}. Since there are "n"! permutations, there are "n"! permutation matrices. By the formulas above, the "n" × "n" permutation matrices form a group under matrix multiplication with the identity matrix as the identity element.

If (1) denotes the identity permutation, then P_{(1)} is the identity matrix.

One can view the permutation matrix of a permutation σ as the permutation σ applied to the columns of the identity matrix I, or as the permutation σ⁻¹ applied to the rows of I.

A permutation matrix is a doubly stochastic matrix. The Birkhoff–von Neumann Theorem says that every doubly stochastic matrix is a convex combination of permutation matrices of the same order and the permutation matrices are the extreme points of the set of doubly stochastic matrices.

The product "PM", premultiplying a matrix "M" by a permutation matrix "P", permutes the rows of "M"; row "i" moves to row π("i"). Likewise, "MP" permutes the columns of "M".

The map "S""n" → A ⊂ GL("n", Z2) is a faithful representation. Thus, |A| = "n"!.

The trace of a permutation matrix is the number of fixed points of the permutation. If the permutation has fixed points, so that it can be written in cycle form as π = (a_1)(a_2)...(a_k)σ where σ has no fixed points, then e_{a_1}, e_{a_2}, ..., e_{a_k} are eigenvectors of the permutation matrix (each with eigenvalue 1).

From group theory we know that any permutation may be written as a product of transpositions. Therefore, any permutation matrix P factors as a product of row-interchanging elementary matrices, each having determinant −1. Thus the determinant of a permutation matrix P is just the signature of the corresponding permutation.
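Both of these facts (the trace counts fixed points, the determinant equals the signature) are easy to verify numerically; in the sketch below the permutation [3, 2, 4, 1] is the 3-cycle 1 → 3 → 4 → 1 with fixed point 2, and a 3-cycle is an even permutation:

    import numpy as np

    def permutation_matrix(one_line):
        return np.eye(len(one_line), dtype=int)[[p - 1 for p in one_line]]

    P = permutation_matrix([3, 2, 4, 1])    # 3-cycle 1 -> 3 -> 4 -> 1, fixed point 2

    print(np.trace(P))                      # 1, the number of fixed points
    print(round(np.linalg.det(P)))          # 1, the signature of an even permutation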

Examples

The permutation matrix P_\pi corresponding to the permutation π = (1 4 2 5 3) (in one-line notation, i.e., π(1) = 1, π(2) = 4, π(3) = 2, π(4) = 5, π(5) = 3) is

:P_\pi = \begin{bmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \mathbf{e}_{\pi(3)} \\ \mathbf{e}_{\pi(4)} \\ \mathbf{e}_{\pi(5)} \end{bmatrix} = \begin{bmatrix} \mathbf{e}_{1} \\ \mathbf{e}_{4} \\ \mathbf{e}_{2} \\ \mathbf{e}_{5} \\ \mathbf{e}_{3} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix}.

Given a vector g,

:P_\pi \mathbf{g} = \begin{bmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \mathbf{e}_{\pi(3)} \\ \mathbf{e}_{\pi(4)} \\ \mathbf{e}_{\pi(5)} \end{bmatrix} \begin{bmatrix} g_1 \\ g_2 \\ g_3 \\ g_4 \\ g_5 \end{bmatrix} = \begin{bmatrix} g_1 \\ g_4 \\ g_2 \\ g_5 \\ g_3 \end{bmatrix}.
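The same example in NumPy, with the one-line form [1, 4, 2, 5, 3] and the hypothetical helper from the Definition section:

    import numpy as np

    def permutation_matrix(one_line):
        return np.eye(len(one_line), dtype=int)[[p - 1 for p in one_line]]

    P = permutation_matrix([1, 4, 2, 5, 3])
    g = np.array([1, 2, 3, 4, 5])           # stands in for (g_1, ..., g_5)

    print(P)                                # the 5 x 5 matrix shown above
    print(P @ g)                            # [1 4 2 5 3], i.e. (g_1, g_4, g_2, g_5, g_3)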

Solving for P

If we are given two matrices A and B which are known to be related as B = P A P^{-1}, but the permutation matrix P itself is unknown, we can find P using eigenvalue decomposition:

:A = Q_A \Lambda Q_A^{-1}, \qquad B = Q_B \Lambda Q_B^{-1},

where \Lambda is a diagonal matrix of eigenvalues, and Q_A and Q_B are the matrices of eigenvectors. The eigenvalues of A and B are always the same, since B = P A P^{-1} is a similarity transformation, and P can be computed as P = Q_B Q_A^{-1} (which equals Q_B Q_A^{T} when A is symmetric, as in the example below), provided the eigenvalues are distinct and the eigenvectors are ordered and scaled consistently. In other words, P Q_A = Q_B, which means that the eigenvectors of B are simply permuted eigenvectors of A.
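A NumPy sketch of this procedure, under the assumptions stated above (A and B symmetric with distinct eigenvalues). Because each eigenvector is only determined up to sign, the sketch does not form Q_B Q_A^T directly; instead it matches rows of the two eigenvector matrices by absolute value, which is essentially the element-matching procedure used in the worked example below. The function name recover_permutation is ours.

    import numpy as np

    def recover_permutation(A, B):
        """Recover the permutation matrix P with B = P A P^T, assuming A and B are
        symmetric with distinct eigenvalues (so eigh orders the columns consistently)."""
        _, QA = np.linalg.eigh(A)          # eigh sorts eigenvalues in ascending order,
        _, QB = np.linalg.eigh(B)          # so the columns of QA and QB line up
        n = A.shape[0]
        P = np.zeros((n, n), dtype=int)
        for i in range(n):
            # Row i of QB equals row pi(i) of QA up to per-column sign flips,
            # so compare absolute values to find the matching row.
            j = np.argmin(np.linalg.norm(np.abs(QA) - np.abs(QB[i]), axis=1))
            P[i, j] = 1
        return P

    A = np.array([[0, 1, 2], [1, 0, 1.5], [2, 1.5, 0]])
    B = np.array([[0, 1, 1.5], [1, 0, 2], [1.5, 2, 0]])
    P = recover_permutation(A, B)
    print(P)                                # [[0 1 0], [1 0 0], [0 0 1]]
    print(np.allclose(B, P @ A @ P.T))      # True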

Example

Given the two matrices

:A = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 0 & 1.5 \\ 2 & 1.5 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 1 & 1.5 \\ 1 & 0 & 2 \\ 1.5 & 2 & 0 \end{bmatrix},

the permutation matrix P that changes A into B (that is, B = P A P^{-1}) is

:P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix},

which says that the first and second rows, as well as the first and second columns, of A have been swapped to yield B (and visual inspection confirms this).

Finding the eigenvalues of both A and B (they are the same) and arranging them in a diagonal matrix gives

:\Lambda = \begin{bmatrix} -2.09394 & 0 & 0 \\ 0 & -0.9433954 & 0 \\ 0 & 0 & 3.037337 \end{bmatrix},

and the matrix Q_A of eigenvectors of A is

:Q_A = \begin{bmatrix} -0.60130 & 0.54493 & 0.58437 \\ -0.25523 & -0.82404 & 0.50579 \\ 0.75716 & 0.15498 & 0.63458 \end{bmatrix},

and the matrix Q_B of eigenvectors of B is

:Q_B = \begin{bmatrix} -0.25523 & -0.82404 & -0.50579 \\ -0.60130 & 0.54493 & -0.58437 \\ 0.75716 & 0.15498 & -0.63458 \end{bmatrix}.

Comparing the first eigenvector (i.e., the first column) of both matrices, we can write the first column of P: the first element of Q_A (Q_A(1,1) = -0.60130) matches the second element of Q_B (Q_B(2,1)), so we put a 1 in the second element of the first column of P. Repeating this procedure, we match the second element (Q_A(2,1)) to the first element (Q_B(1,1)), so we put a 1 in the first element of the second column of P; and the third element (Q_A(3,1)) to the third element (Q_B(3,1)), so we put a 1 in the third element of the third column of P.

The resulting P matrix is:

:P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.

Comparing this with the matrix P given above, we find they are the same.

Explanation

A permutation matrix will always be of the form

:\begin{bmatrix} \mathbf{e}_{a_1} \\ \mathbf{e}_{a_2} \\ \vdots \\ \mathbf{e}_{a_j} \end{bmatrix},

where \mathbf{e}_{a_i} denotes the a_i-th standard basis vector (as a row) of \mathbb{R}^j, and where

:\begin{pmatrix} 1 & 2 & \ldots & j \\ a_1 & a_2 & \ldots & a_j \end{pmatrix}

is the permutation form of the permutation matrix.

Now, in performing matrix multiplication, one essentially forms the dot product of each row of the first matrix with each column of the second. In this instance, we form the dot product of each row of this matrix with the column vector whose elements we want to permute. That is, for v = (g_1, \ldots, g_j)^{T},

:\mathbf{e}_{a_i} \cdot \mathbf{v} = g_{a_i}.

So, the product of the permutation matrix with the vector v above will be a vector of the form (g_{a_1}, g_{a_2}, \ldots, g_{a_j})^{T}, and this is a permutation of v since we have said that the permutation form is

:\begin{pmatrix} 1 & 2 & \ldots & j \\ a_1 & a_2 & \ldots & a_j \end{pmatrix}.

So permutation matrices do indeed permute the order of elements in vectors multiplied with them.

Matrices with constant line sums

The sum of the values in each column or row of a permutation matrix is exactly 1. A possible generalization of permutation matrices is the class of nonnegative integer matrices in which the entries of each column and each row sum to a constant c. A matrix of this kind is known to be a sum of c permutation matrices.

For example, in the following matrix M, each column and each row adds up to 5:

:M = \begin{bmatrix} 5 & 0 & 0 & 0 & 0 \\ 0 & 3 & 2 & 0 & 0 \\ 0 & 0 & 0 & 5 & 0 \\ 0 & 1 & 2 & 0 & 2 \\ 0 & 1 & 1 & 0 & 3 \end{bmatrix}.

This matrix is a sum of 5 permutation matrices.
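One explicit decomposition of this particular M, found by inspection (it is not unique), uses four distinct permutation matrices, one of them taken twice; the weights sum to 5. A short NumPy check, with the same hypothetical permutation_matrix helper:

    import numpy as np

    def permutation_matrix(one_line):
        return np.eye(len(one_line), dtype=int)[[p - 1 for p in one_line]]

    M = np.array([[5, 0, 0, 0, 0],
                  [0, 3, 2, 0, 0],
                  [0, 0, 0, 5, 0],
                  [0, 1, 2, 0, 2],
                  [0, 1, 1, 0, 3]])

    # (weight, permutation in one-line form); the weights add up to 5.
    decomposition = [(2, [1, 2, 4, 3, 5]),
                     (1, [1, 2, 4, 5, 3]),
                     (1, [1, 3, 4, 2, 5]),
                     (1, [1, 3, 4, 5, 2])]

    S = sum(w * permutation_matrix(p) for w, p in decomposition)
    print(np.array_equal(S, M))             # True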

See also

* Alternating sign matrix
* Generalized permutation matrix

