In-place matrix transposition

In-place matrix transposition, also called in-situ matrix transposition, is the problem of transposing an N × M matrix in-place in computer memory: ideally with O(1) (bounded) additional storage, or at most with additional storage much less than NM. Typically, the matrix is assumed to be stored in row-major order or column-major order (i.e., contiguous rows or columns, respectively, arranged consecutively).

Performing an in-place transpose (in-situ transpose) is most difficult when N ≠ M, i.e. for a non-square (rectangular) matrix, where it involves a complicated permutation of the data elements, with many cycles of length greater than 2. In contrast, for a square matrix (N = M), all of the cycles are of length 1 or 2, and the transpose can be achieved by a simple loop that swaps the upper triangle of the matrix with the lower triangle. Further complications arise if one wishes to maximize memory locality in order to improve cache-line utilization or to operate out-of-core (where the matrix does not fit into main memory), since transposes inherently involve non-consecutive memory accesses.

The problem of non-square in-place transposition has been studied since at least the late 1950s, and several algorithms are known, including several which attempt to optimize locality for cache, out-of-core, or similar memory-related contexts.

Background

On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as the BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.

However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm (e.g. Frigo & Johnson, 2005), transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality. Since these situations normally coincide with the case of very large matrices (which exceed the cache size), performing the transposition in-place with minimal additional storage becomes desirable.

Also, as a purely mathematical problem, in-place transposition involves a number of interesting number theory puzzles that have been worked out over the course of several decades.

Example

For example, consider the 2 × 4 matrix:

:\begin{bmatrix} 0 & 1 & 2 & 3 \\ 4 & 5 & 6 & 7 \end{bmatrix}.

In row-major format, this would be stored in computer memory as the sequence (0,1,2,3,4,5,6,7), i.e. the two rows stored consecutively. If we transpose this, we obtain the 4 × 2 matrix:

:\begin{bmatrix} 0 & 4 \\ 1 & 5 \\ 2 & 6 \\ 3 & 7 \end{bmatrix}

which is stored in computer memory as the sequence (0,4,1,5,2,6,3,7).

If we number the storage locations 0 to 7, from left to right, then this permutation consists of four cycles:

:(0), (1 2 4), (3 6 5), (7)

That is, position 0 goes to position 0 (a cycle of length 1, so no data motion is needed). Position 1 (holding the value 1 in the original sequence 0,1,2,…) goes to position 2 (where 1 appears in the transposed sequence 0,4,1,…), position 2 goes to position 4, and position 4 goes back to position 1, closing the cycle (1 2 4). Similarly, position 7 is fixed, and positions 3, 6, and 5 form the cycle (3 6 5).
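A few lines of C make this bookkeeping concrete. The sketch below (a throwaway demonstration; the names are arbitrary) prints the destination of each storage location of the example by converting each address to its (row, column) pair:

 #include <stdio.h>
 
 /* Print where each storage location of the 2 x 4 row-major example moves
    under transposition, recovering the cycles (0)(1 2 4)(3 6 5)(7). */
 int main(void)
 {
     int R = 2, C = 4;                        /* rows and columns */
     for (int a = 0; a < R * C; ++a) {
         int r = a / C, c = a % C;            /* (row, column) of address a */
         printf("%d -> %d\n", a, R * c + r);  /* address after transposition */
     }
     return 0;
 }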

Properties of the permutation

In the following, we assume that the N × M matrix is stored in row-major order with zero-based indices. This means that the (n,m) element, for n = 0,…,N−1 and m = 0,…,M−1, is stored at an address a = Mn + m (plus some offset in memory, which we ignore). In the transposed M × N matrix, the corresponding (m,n) element is stored at the address a' = Nm + n, again in row-major order. We define the "transposition permutation" to be the function a' = P(a) such that

:Nm + n = P(Mn + m) for all (n,m) ∈ [0,N−1] × [0,M−1].

This defines a permutation on the addresses a = 0,…,MN−1.

It turns out that one can define simple formulas for P and its inverse (Cate & Twigg, 1977). First:

:P(a) = \begin{cases} MN - 1 & \text{if } a = MN - 1, \\ Na \bmod (MN - 1) & \text{otherwise}, \end{cases}

where mod is the modulo operation. Proof: if 0 ≤ a = Mn + m < MN − 1, then Na mod (MN − 1) = (MNn + Nm) mod (MN − 1) = n + Nm. [Note that MNx mod (MN − 1) = ((MN − 1)x + x) mod (MN − 1) = x for 0 ≤ x < MN − 1.] Note that the first (a = 0) and last (a = MN − 1) elements are always left invariant under transposition. Second, the inverse permutation is given by:

:P^{-1}(a') = \begin{cases} MN - 1 & \text{if } a' = MN - 1, \\ Ma' \bmod (MN - 1) & \text{otherwise}. \end{cases}

(This is just a consequence of the fact that the inverse of an N × M transpose is an M × N transpose, although it is also easy to show explicitly that P^{-1} composed with P gives the identity.)
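These formulas are straightforward to state in code. The following C sketch (function and variable names are arbitrary choices, not from Cate & Twigg) computes P and its inverse and checks that they compose to the identity for the 2 × 4 example:

 #include <assert.h>
 #include <stddef.h>
 #include <stdio.h>
 
 /* Destination address of the element at address a when an N-row,
    M-column row-major matrix is transposed, per the formula above. */
 size_t P(size_t a, size_t N, size_t M)
 {
     size_t last = N * M - 1;
     return (a == last) ? last : (a * N) % last;
 }
 
 /* The inverse permutation: an M x N transpose applied to addresses. */
 size_t P_inv(size_t a, size_t N, size_t M)
 {
     size_t last = N * M - 1;
     return (a == last) ? last : (a * M) % last;
 }
 
 int main(void)
 {
     size_t N = 2, M = 4;
     for (size_t a = 0; a < N * M; ++a)
         assert(P_inv(P(a, N, M), N, M) == a);   /* identity check */
     printf("P and P_inv are mutually inverse for N=%zu, M=%zu\n", N, M);
     return 0;
 }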

As proved by Cate & Twigg (1977), the number of fixed points (cycles of length 1) of the permutation is precisely 1 + gcd(N−1, M−1), where gcd is the greatest common divisor. For example, with N = M the number of fixed points is simply N (the diagonal of the matrix). If N − 1 and M − 1 are coprime, on the other hand, the only two fixed points are the upper-left and lower-right corners of the matrix.

The number of cycles of any length k > 1 is given by (Cate & Twigg, 1977):

:\frac{1}{k} \sum_{d | k} \mu(k/d) \gcd(N^d - 1, MN - 1),

where μ is the Möbius function and the sum is over the divisors d of k.
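Both counts are easy to check by brute force for small matrices. A self-contained C sketch (helper names invented for the demonstration; overflow is avoided by reducing N^d modulo MN − 1 before taking the gcd, which leaves the gcd unchanged):

 #include <stdio.h>
 
 /* Brute-force check of the cycle-count formula for a small N x M
    transposition permutation (illustrative only). */
 
 static long gcd(long a, long b)
 {
     while (b) { long t = a % b; a = b; b = t; }
     return a;
 }
 
 /* Mobius function by trial division (adequate for tiny arguments). */
 static int mobius(long n)
 {
     int k = 0;
     for (long p = 2; p * p <= n; ++p)
         if (n % p == 0) {
             n /= p;
             if (n % p == 0) return 0;     /* squared prime factor */
             ++k;
         }
     if (n > 1) ++k;
     return (k % 2 == 0) ? 1 : -1;
 }
 
 /* (N^d - 1) reduced modulo m, to keep the gcd argument small. */
 static long pow_minus1_mod(long N, long d, long m)
 {
     long r = 1 % m;
     for (long i = 0; i < d; ++i) r = (r * N) % m;
     return (r + m - 1) % m;
 }
 
 int main(void)
 {
     long N = 2, M = 4, n = N * M, m = n - 1;
     int seen[64] = {0};
     long brute[64] = {0};
     /* Walk every cycle of a -> N*a mod (MN-1); address MN-1 is fixed. */
     for (long a = 0; a < m; ++a) {
         if (seen[a]) continue;
         long k = 0, x = a;
         do { seen[x] = 1; ++k; x = (x * N) % m; } while (x != a);
         if (k > 1) ++brute[k];
     }
     for (long k = 2; k < 64; ++k) {
         long s = 0;
         for (long d = 1; d <= k; ++d)
             if (k % d == 0)
                 s += mobius(k / d) * gcd(pow_minus1_mod(N, d, m), m);
         if (brute[k] || s)
             printf("length %ld: brute force %ld, formula %ld\n",
                    k, brute[k], s / k);
     }
     return 0;
 }

For the 2 × 4 example (MN − 1 = 7), both counts agree: two cycles of length 3.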

Furthermore, the cycle containing a = 1 (i.e. the second element of the first row of the matrix) is always a cycle of maximum length L, and the lengths k of all other cycles must be divisors of L (Cate & Twigg, 1977).

For a given cycle C, every element x ∈ C has the same greatest common divisor d = gcd(x, MN − 1). Proof (Brenner, 1973): Let s be the smallest element of the cycle, and let d = gcd(s, MN − 1). From the definition of the permutation P above, every other element x of the cycle is obtained by repeatedly multiplying s by N modulo MN − 1, and therefore every other element is divisible by d. But, since N and MN − 1 are coprime, x cannot be divisible by any factor of MN − 1 larger than d, and hence d = gcd(x, MN − 1). This theorem is useful in searching for cycles of the permutation, since an efficient search can look only at multiples of divisors of MN − 1 (Brenner, 1973).

Laflin & Brebner (1970) pointed out that the cycles often come in pairs, which is exploited by several algorithms that permute pairs of cycles at a time. In particular, let s be the smallest element of some cycle C of length k. It follows that MN − 1 − s is also an element of a cycle of length k (possibly the same cycle). Proof: by the definition of P above, the length k of the cycle containing s is the smallest k > 0 such that sN^k = s (mod MN − 1). Clearly, this is the same as the smallest k > 0 such that (−s)N^k = −s (mod MN − 1), since we are just multiplying both sides by −1, and MN − 1 − s = −s (mod MN − 1).

Algorithms

The following briefly summarizes the published algorithms to perform in-place matrix transposition. Source code implementing some of these algorithms can be found in the references, below.

Square matrices

For a square N × N matrix A_{n,m} = A(n,m), in-place transposition is easy because all of the cycles have length 1 (the diagonal elements A_{n,n}) or length 2 (the upper triangle is swapped with the lower triangle). Pseudocode to accomplish this (assuming zero-based array indices) is:

 for n = 0 to N − 2
     for m = n + 1 to N − 1
         swap A(n,m) with A(m,n)
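In C, assuming a row-major array of doubles (the element type and function name here are arbitrary choices), this loop becomes:

 #include <stddef.h>
 
 /* Swap the strict upper triangle of an N x N row-major matrix with the
    strict lower triangle; diagonal elements stay in place. */
 void transpose_square(double *A, size_t N)
 {
     for (size_t n = 0; n + 1 < N; ++n)
         for (size_t m = n + 1; m < N; ++m) {
             double t = A[n * N + m];
             A[n * N + m] = A[m * N + n];
             A[m * N + n] = t;
         }
 }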

This type of implementation, while simple, can exhibit poor performance due to poor cache-line utilization, especially when N is a power of two (due to cache-line conflicts in a CPU cache with limited associativity). The reason for this is that, as m is incremented in the inner loop, the memory address corresponding to A(n,m) or A(m,n) jumps discontiguously by N in memory (depending on whether the array is in column-major or row-major format, respectively). That is, the algorithm does not exploit the possibility of spatial locality.

One solution to improve the cache utilization is to "block" the algorithm to operate on several numbers at once, in blocks given by the cache-line size; unfortunately, this means that the algorithm depends on the size of the cache line (it is "cache-aware"), and on a modern computer with multiple levels of cache it requires multiple levels of machine-dependent blocking. Instead, it has been suggested (Frigo et al., 1999) that better performance can be obtained by a recursive algorithm: divide the matrix into four submatrices of roughly equal size, transposing the two submatrices along the diagonal recursively and transposing and swapping the two submatrices above and below the diagonal. (When N is sufficiently small, the simple algorithm above is used as a base case, as naively recursing all the way down to N = 1 would have excessive function-call overhead.) This is a cache-oblivious algorithm, in the sense that it can exploit the cache line without the cache-line size being an explicit parameter.
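A minimal sketch of such a recursive transpose in C, under the assumptions that the matrix is row-major and that a fixed small cutoff terminates the recursion (the cutoff of 16 and all names below are illustrative choices, not values from Frigo et al.):

 #include <stddef.h>
 
 enum { BASE = 16 };   /* arbitrary recursion cutoff, not a tuned value */
 
 static void swap2(double *x, double *y) { double t = *x; *x = *y; *y = t; }
 
 /* Swap the rs x cs block at (r,c) with the transpose of the cs x rs
    block at (c,r), recursing on the longer dimension. */
 static void trans_swap(double *A, size_t N, size_t r, size_t c,
                        size_t rs, size_t cs)
 {
     if (rs <= BASE && cs <= BASE) {
         for (size_t i = 0; i < rs; ++i)
             for (size_t j = 0; j < cs; ++j)
                 swap2(&A[(r + i) * N + (c + j)], &A[(c + j) * N + (r + i)]);
     } else if (rs >= cs) {
         trans_swap(A, N, r, c, rs / 2, cs);
         trans_swap(A, N, r + rs / 2, c, rs - rs / 2, cs);
     } else {
         trans_swap(A, N, r, c, rs, cs / 2);
         trans_swap(A, N, r, c + cs / 2, rs, cs - cs / 2);
     }
 }
 
 /* Transpose the size x size diagonal block with corner (p,p). */
 static void trans_diag(double *A, size_t N, size_t p, size_t size)
 {
     if (size <= BASE) {
         for (size_t i = 0; i < size; ++i)
             for (size_t j = i + 1; j < size; ++j)
                 swap2(&A[(p + i) * N + (p + j)], &A[(p + j) * N + (p + i)]);
     } else {
         size_t h = size / 2;
         trans_diag(A, N, p, h);                   /* top-left block */
         trans_diag(A, N, p + h, size - h);        /* bottom-right block */
         trans_swap(A, N, p, p + h, h, size - h);  /* off-diagonal pair */
     }
 }
 
 void transpose_recursive(double *A, size_t N) { trans_diag(A, N, 0, N); }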

Following the cycles

For non-square matrices, the algorithms are more complicated. Many of the algorithms prior to 1980 could be described as "follow-the-cycles" algorithms. That is, they loop over the cycles, moving the data from one location to the next in the cycle. In pseudocode form:

 for each length>1 cycle C of the permutation
     pick a starting address s in C
     let D = data at s
     let x = predecessor of s in the cycle
     while x ≠ s
         move data from x to successor of x
         let x = predecessor of x
     move data from D to successor of s

The differences between the algorithms lie mainly in how they locate the cycles, how they find the starting addresses in each cycle, and how they ensure that each cycle is moved exactly once. Typically, as discussed above, the cycles are moved in pairs, since s and MN − 1 − s are in cycles of the same length (possibly the same cycle). Sometimes, a small scratch array, typically of length M + N (e.g. Brenner, 1973; Cate & Twigg, 1977), is used to keep track of a subset of locations in the array that have been visited, to accelerate the algorithm.

In order to determine whether a given cycle has been moved already, the simplest scheme would be to use O(MN) auxiliary storage, one bit per element, to indicate whether a given element has been moved. To use only O(M+N) or even O(log MN) auxiliary storage, more complicated algorithms are required, and the known algorithms have a worst-case linearithmic computational cost of O(MN log MN) at best, as first proved by Knuth (Fich et al., 1995; Gustavson & Swirszcz, 2007).
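As an illustration of the simplest scheme, the following C sketch follows the cycles using one marker per element (a byte rather than a bit, for brevity; the names and error handling are arbitrary). It rotates each cycle forward along successors rather than backward along predecessors as in the pseudocode above, which is an equivalent formulation:

 #include <stddef.h>
 #include <stdlib.h>
 
 /* Follow the cycles of a rows x cols row-major matrix, using one
    "already moved" marker per element -- the O(MN) bookkeeping scheme
    described above.  Returns 0 on success, -1 on allocation failure. */
 int transpose_cycles(double *A, size_t rows, size_t cols)
 {
     size_t n = rows * cols;
     if (n < 2) return 0;
     unsigned char *moved = calloc(n, 1);
     if (!moved) return -1;
     size_t last = n - 1;               /* addresses 0 and n-1 are fixed */
     for (size_t start = 1; start < last; ++start) {
         if (moved[start]) continue;
         double d = A[start];           /* data in transit around the cycle */
         size_t x = start;
         do {
             size_t next = (x * rows) % last;  /* P(x): where A[x] belongs */
             double t = A[next];
             A[next] = d;               /* deposit d, pick up what was there */
             d = t;
             moved[x] = 1;
             x = next;
         } while (x != start);
     }
     free(moved);
     return 0;
 }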

Such algorithms are designed to move each data element exactly once. However, they also involve a considerable amount of arithmetic to compute the cycles, and require heavily non-consecutive memory accesses, since the adjacent elements of the cycles differ by multiplicative factors of N, as discussed above.

Improving memory locality at the cost of greater total data movement

Several algorithms have been designed to achieve greater memory locality at the cost of greater data movement, as well as slightly greater storage requirements. That is, they may move each data element more than once, but they involve more consecutive memory access (greater spatial locality), which can improve performance on modern CPUs that rely on caches, as well as on SIMD architectures optimized for processing consecutive data blocks. The oldest context in which the spatial locality of transposition seems to have been studied is for out-of-core operation (by Alltop, 1975), where the matrix is too large to fit into main memory ("core").

For example, if d = gcd(N,M) is not small, one can perform the transposition using a small amount (NM/d) of additional storage, with at most three passes over the array (Alltop, 1975; Dow, 1995). Two of the passes involve a sequence of separate, small transpositions (which can be performed efficiently out of place using a small buffer) and one involves an in-place d × d square transposition of NM/d² blocks (which is efficient since the blocks being moved are large and consecutive, and the cycles are of length at most 2). For the case where |N − M| is small, Dow (1995) describes another algorithm requiring |N − M|⋅min(N,M) additional storage, involving a min(N,M) × min(N,M) square transpose preceded or followed by a small out-of-place transpose. Frigo & Johnson (2005) describe the adaptation of these algorithms to use cache-oblivious techniques for general-purpose CPUs relying on cache lines to exploit spatial locality.
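As a concrete instance of the second idea, the following C sketch handles a tall N × M matrix with N > M in the manner Dow describes; the organization and names are one illustration of the idea, not Dow's published routine. It copies the last N − M rows to a buffer, transposes the leading M × M square in place, then spreads the M rows out from stride M to stride N (working backward so no element is overwritten before it is read) while refilling the trailing columns of each row from the buffer:

 #include <stdlib.h>
 #include <string.h>
 
 /* Transpose a tall N x M (N > M) row-major matrix in place using
    (N - M) * M elements of scratch.  Returns 0 on success. */
 int transpose_tall(double *A, size_t N, size_t M)
 {
     size_t extra = N - M;
     double *buf = malloc(extra * M * sizeof *buf);
     if (!buf) return -1;
 
     /* 1. Move the last N - M rows out of the way. */
     memcpy(buf, A + M * M, extra * M * sizeof *buf);
 
     /* 2. Transpose the leading M x M square in place
           (cycles of length at most 2). */
     for (size_t i = 0; i + 1 < M; ++i)
         for (size_t j = i + 1; j < M; ++j) {
             double t = A[i * M + j];
             A[i * M + j] = A[j * M + i];
             A[j * M + i] = t;
         }
 
     /* 3. Spread row m from offset m*M to offset m*N, last row first,
           then fill its trailing N - M entries from the buffer:
           result(m, M + j) = original(M + j, m) = buf[j * M + m]. */
     for (size_t m = M; m-- > 0; ) {
         memmove(A + m * N, A + m * M, M * sizeof *A);
         for (size_t j = 0; j < extra; ++j)
             A[m * N + M + j] = buf[j * M + m];
     }
 
     free(buf);
     return 0;
 }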

Work on out-of-core matrix transposition, where the matrix does not fit in main memory and must be stored largely on a hard disk, has focused largely on the N = M square-matrix case, with some exceptions (e.g. Alltop, 1975). Recent reviews of out-of-core algorithms, especially as applied to parallel computing, can be found in e.g. Suh & Prasanna (2002) and Krishnamoorthy et al. (2004).

References

* P. F. Windley, "Transposing matrices in a digital computer," "Computer Journal" 2, p. 47-48 (1959).
* G. Pall and E. Seiden, "A problem in Abelian Groups, with application to the transposition of a matrix on an electronic computer," "Math. Comp." 14, p. 189-192 (1960).
* J. Boothroyd, "[http://portal.acm.org/citation.cfm?id=363304&dl=GUIDE&coll=GUIDE&CFID=436989&CFTOKEN=18491885 Algorithm 302: Transpose vector stored array]," "Communications of the ACM" 10 (5), p. 292-293 (1967).
* Susan Laflin and M. A. Brebner, "[http://portal.acm.org/citation.cfm?id=362368&dl=GUIDE&coll=GUIDE&CFID=436989&CFTOKEN=18491885 Algorithm 380: in-situ transposition of a rectangular matrix]," "Communications of the ACM" 13 (5), p. 324-326 (1970). [http://www.netlib.org/toms/380 Source code].
* Norman Brenner, "[http://portal.acm.org/citation.cfm?id=362542&dl=GUIDE&coll=GUIDE&CFID=436989&CFTOKEN=18491885 Algorithm 467: matrix transposition in place]," "Communications of the ACM" 16 (11), p. 692-694 (1973). [http://www.netlib.org/toms/467 Source code].
* W. O. Alltop, "A computer algorithm for transposing nonsquare matrices," "IEEE Trans. Comput." 24 (10), p. 1038-1040 (1975).
* Esko G. Cate and David W. Twigg, " [http://portal.acm.org/citation.cfm?id=355719.355729&coll=GUIDE&dl=GUIDE&CFID=436989&CFTOKEN=18491885 Algorithm 513: Analysis of In-Situ Transposition] ," "ACM Transactions on Mathematical Software" 3 (1), p. 104-110 (1977). [http://www.netlib.org/toms/513 Source code] .
* Murray Dow, "Transposing a matrix on a vector computer," "Parallel Computing" 21 (12), p. 1997-2005 (1995).
* Donald E. Knuth, "The Art of Computer Programming Volume 1: Fundamental Algorithms", third edition, section 1.3.3 exercise 12 (Addison-Wesley: New York, 1997).
* M. Frigo, C. E. Leiserson, H. Prokop, and S. Ramachandran, " [http://supertech.lcs.mit.edu/cilk/papers/abstracts/abstract4.html Cache-oblivious algorithms] ," in "Proceedings of the 40th IEEE Symposium on Foundations of Computer Science" (FOCS 99), p. 285-297 (1999). [http://ieeexplore.ieee.org/iel5/6604/17631/00814600.pdf?arnumber=814600 Extended abstract at IEEE] , [http://citeseer.ist.psu.edu/307799.html at Citeseer] .
* J. Suh and V. K. Prasanna, " [http://www.east.isi.edu/~jsuh/publ/tc_113280.pdf An efficient algorithm for out-of-core matrix transposition] ," "IEEE Trans. Computers" 51 (4), p. 420-438 (2002).
* S. Krishnamoorthy, G. Baumgartner, D. Cociorva, C.-C. Lam, and P. Sadayappan, " [http://csc.lsu.edu/~gb/TCE//Publications/ParTranspose2.pdf Efficient parallel out-of-core matrix transposition] ," "International Journal of High Performance Computing and Networking" 2 (2-4), p. 110-119 (2004).
* M. Frigo and S. G. Johnson, " [http://fftw.org/fftw-paper-ieee.pdf The Design and Implementation of FFTW3] ," "Proceedings of the IEEE" 93 (2), 216–231 (2005). [http://www.fftw.org Source code] of the FFTW library, which includes optimized serial and parallel square and non-square transposes, in addition to FFTs.
* Faith E. Fich, J. Ian Munro, and Patricio V. Poblete, "Permuting in place," "SIAM Journal on Computing" 24 (2), p. 266-278 (1995).
* Fred G. Gustavson and Tadeusz Swirszcz, "In-place transposition of rectangular matrices," "Lecture Notes in Computer Science" 4699, p. 560-569 (2007), from the Proceedings of the 2006 Workshop on State-of-the-Art ["sic"] in Scientific and Parallel Computing (PARA 2006) (Umeå, Sweden, June 2006).
* [http://www.research.att.com/~njas/sequences/A093055 Sequence A093055] , number of non-singleton cycles for in-situ transpositions, "The On-Line Encyclopedia of Integer Sequences".
* [http://www.research.att.com/~njas/sequences/A093056 Sequence A093056] , length of the longest cycle for in-situ transpositions, "The On-Line Encyclopedia of Integer Sequences".
* [http://www.research.att.com/~njas/sequences/A093057 Sequence A093057] , number of fixed points − 2 for in-situ transpositions, "The On-Line Encyclopedia of Integer Sequences".

External links

Source code

* [http://romo661.free.fr/offt.html OFFT] - recursive block in-place transpose of square matrices, in Fortran
* [http://groups.google.com/group/sci.math.num-analysis/msg/680211b3fbac30c4?hl=en Jason Stratos Papadopoulos] , blocked in-place transpose of square matrices, in C, "sci.math.num-analysis" newsgroup (April 7, 1998).
* See "Source code" links in the references section above, for additional code to perform in-place transposes of both square and non-square matrices.

