- Exponentiation by squaring
**Exponentiation by squaring** is an algorithm used for the fast computation of large integer powers of a number. It is also known as the **square-and-multiply** algorithm or **binary exponentiation**. In additive groups the appropriate name is the **double-and-add** algorithm. It implicitly uses the binary expansion of the exponent. It is of quite general use, for example in modular arithmetic.

**Squaring algorithm**

The following recursive algorithm computes "x"^{"n"} for a non-negative integer "n":

:$\mbox{Power}(x, n) = \begin{cases} 1, & \mbox{if } n = 0 \\ x \times \mbox{Power}(x, n-1), & \mbox{if } n \mbox{ is odd} \\ \mbox{Power}(x, n/2)^2, & \mbox{if } n \mbox{ is even} \end{cases}$

Compared to the ordinary method of multiplying "x" by itself "n" − 1 times, this algorithm optimizes the "n is even" case according to

:$x^n = x^{n/2} \times x^{n/2}$

so it uses only O(log "n") multiplications and therefore speeds up the computation of "x"^{"n"} tremendously, in much the same way that the long multiplication algorithm speeds up multiplication over the slower method of repeated addition. The benefit appears for "n" greater than or equal to 4. (Note: when measured in terms of the size of the input, i.e. the number of bits of "n" rather than its value, this algorithm runs in linear time, since the size of an integer is its logarithm.)

x^{n} can be calculated as follows, if n is an integer:

# If n < 0, {x = 1 / x; n = - n}

# i = n; y = 1; z = x

# If i is an odd number, y = y * z

# z = z^{2}

# i = i / 2, throwing away the division remainder

# If i ≠ 0, go back to step 3

# The result is y

(This method computes 0^{0} as 1.)

**Further applications**

The same idea allows fast computation of large exponents modulo a number. Especially in cryptography, it is useful to compute powers in a ring of integers modulo "q". It can also be used to compute integer powers in a group, using the rule Power("x", −"n") = (Power("x", "n"))^{−1}. The method works in every semigroup and is often used to compute powers of matrices.

For example, the evaluation of

:13789^{722341} (mod 2345)

would take a very long time and lots of storage space if the naïve method were used: compute 13789^{722341}, then take the remainder when divided by 2345. Even using a more effective method would take a long time: square 13789, take the remainder when divided by 2345, multiply the result by 13789, and so on. This would take 722340 modular multiplications. The square-and-multiply algorithm is based on the observation that 13789^{722341} = 13789 × (13789^{2})^{361170}. So, if we compute 13789^{2}, the full computation takes only 361170 modular multiplications. This is a gain of a factor of two. But since the new problem is of the same type, we can apply the same observation again, once more approximately halving the size.

The repeated application of this algorithm is equivalent to decomposing the exponent (by a base conversion to binary) into a sequence of squares and products. For example:

:"x"^{13} = "x"^{1101bin}
: = "x"^{(1*2^3 + 1*2^2 + 0*2^1 + 1*2^0)}
: = "x"^{1*2^3} * "x"^{1*2^2} * "x"^{0*2^1} * "x"^{1*2^0}
: = "x"^{2^3} * "x"^{2^2} * 1 * "x"^{2^0}
: = "x"^{8} * "x"^{4} * "x"^{1}
: = ("x"^{4})^{2} * ("x"^{2})^{2} * "x"
: = ("x"^{4} * "x"^{2})^{2} * "x"
: = (("x"^{2})^{2} * "x"^{2})^{2} * "x"
: = (("x"^{2} * "x")^{2})^{2} * "x"

so the algorithm needs only 5 multiplications instead of 12 (13 − 1). Some more examples:

* "x"^{10} = (("x"^{2})^{2}*"x")^{2} because 10 = (1,010)_{2} = 2^{3}+2^{1}; the algorithm needs 4 multiplications instead of 9

* "x"^{100} = ((((("x"^{2}*"x")^{2})^{2})^{2}*"x")^{2})^{2} because 100 = (1,100,100)_{2} = 2^{6}+2^{5}+2^{2}; the algorithm needs 8 multiplications instead of 99

* "x"^{1,000} = (((((((("x"^{2}*"x")^{2}*"x")^{2}*"x")^{2}*"x")^{2})^{2}*"x")^{2})^{2})^{2} because 10^{3} = (1,111,101,000)_{2}; the algorithm needs 14 multiplications instead of 999

* "x"^{1,000,000} = (((((((((((((((((("x"^{2}*"x")^{2}*"x")^{2}*"x")^{2})^{2}*"x")^{2})^{2})^{2})^{2})^{2}*"x")^{2})^{2})^{2}*"x")^{2})^{2})^{2})^{2})^{2})^{2} because 10^{6} = (11,110,100,001,001,000,000)_{2}; the algorithm needs 25 multiplications instead of 999,999

* "x"^{1,000,000,000} = (((((((((((((((((((((((((((("x"^{2}*"x")^{2}*"x")^{2})^{2}*"x")^{2}*"x")^{2}*"x")^{2})^{2})^{2}*"x")^{2}*"x")^{2})^{2}*"x")^{2})^{2}*"x")^{2}*"x")^{2})^{2})^{2}*"x")^{2})^{2}*"x")^{2})^{2})^{2})^{2})^{2})^{2})^{2})^{2})^{2} because 10^{9} = (111,011,100,110,101,100,101,000,000,000)_{2}; the algorithm needs 41 multiplications instead of 999,999,999

* [http://www.cs.princeton.edu/courses/archive/spr05/cos126/lectures/22.pdf Worked example (with modulo) for the RSA algorithm.]

**Example implementations**

**Computation by powers of 2**

This section describes a non-recursive implementation of the above algorithm in the Ruby programming language. In most statically typed languages, the initialization `result = 1` must be replaced with code assigning an identity matrix of the same size as `x` to `result` to obtain a matrix exponentiation algorithm. In Ruby, thanks to coercion, `result` is automatically upgraded to the appropriate type, so the function works with matrices as well as with integers and floats. Note that `n = n - 1` is redundant when `n = n / 2` implicitly rounds towards zero, as lower-level languages would do. An equivalent program can be written in the C programming language.

**Runtime example: Compute 3^{10}**

 parameter x = 3
 parameter n = 10
 result := 1

**Iteration 1**
 n = 10 -> n is even
 x := x^{2} = 3^{2} = 9
 n := n / 2 = 5

**Iteration 2**
 n = 5 -> n is odd -> result := result * x = 1 * 3^{2} = 9
 n := n - 1 = 4
 x := x^{2} = 9^{2} = 3^{4} = 81
 n := n / 2 = 2

**Iteration 3**
 n = 2 -> n is even
 x := x^{2} = 81^{2} = 3^{8} = 6561
 n := n / 2 = 1

**Iteration 4**
 n = 1 -> n is odd -> result := result * x = 3^{2} * 3^{8} = 3^{10} = 9 * 6561 = 59049
 n := n - 1 = 0

 return result

**Runtime example: Compute 3^{10} using binary digits**

 result := 3
 bin := "1010"

**Iteration for digit 2:**
 result := result^{2} = 3^{2} = 9
 1**0**10_{bin} - digit equals "0"

**Iteration for digit 3:**
 result := result^{2} = (3^{2})^{2} = 3^{4} = 81
 10**1**0_{bin} - digit equals "1" -> result := result * 3 = (3^{2})^{2}*3 = 3^{5} = 243

**Iteration for digit 4:**
 result := result^{2} = ((3^{2})^{2}*3)^{2} = 3^{10} = 59049
 101**0**_{bin} - digit equals "0"

 return result

JavaScript demonstration: http://home.arcor.de/wzwz.de/wiki/ebs/en.htm
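The Ruby listing referenced above is not reproduced in this copy; the following is a sketch reconstructed from the algorithm steps and runtime examples (the names `result`, `x`, `n` follow the text, and the separate modular variant is an addition for the modular-arithmetic application, not part of the original listing):

```ruby
# Non-recursive exponentiation by squaring, following steps 1-7 above.
def power(x, n)
  if n < 0
    x = 1.0 / x
    n = -n
  end
  result = 1            # for matrices, use an identity matrix instead
  while n > 0
    result *= x if n.odd?
    x *= x              # square
    n /= 2              # integer division discards the remainder
  end
  result
end

# Modular variant: reduce after every multiplication, as in the
# 13789^722341 (mod 2345) example discussed above.
def power_mod(x, n, q)
  result = 1
  x %= q
  while n > 0
    result = (result * x) % q if n.odd?
    x = (x * x) % q
    n /= 2
  end
  result
end

puts power(3, 10)                    # 59049, as in the runtime example
puts power_mod(13789, 722341, 2345)
```

The loop processes the binary digits of `n` from the least significant end, matching the first runtime example above.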

**Generalization with example**

**Generalization**

Let the pair (**S**, *****) be a semigroup; that means **S** is an arbitrary set and ***** is an associative binary operation on **S**:

* For all elements a and b of **S**, **a*b** is also an element of **S**
* For all elements a, b and c of **S**, **(a*b)*c** equals **a*(b*c)**

We may call ***** a "multiplication" and define an "exponentiation" **E** in the following way. For all elements a of **S**:

* **E**( a, 1 ) := a
* For all natural numbers n > 0: **E**( a, n+1 ) := **E**( a, n ) ***** a

Now the algorithm exponentiation by squaring may be used for the fast computation of **E**-values.

**Text application**

Because concatenation **+** is an associative operation on the set of all finite strings over a fixed alphabet (with the empty string "" as its identity element), exponentiation by squaring may be used for fast repetition of strings. Example (the original demonstration was given in JavaScript):
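The demonstration code itself is not reproduced here; the following Ruby sketch defines a `repeat` function with the behavior described below (the function name follows the text; the implementation details are an assumption):

```ruby
# Fast string repetition by "exponentiation" over the concatenation
# semigroup: squaring doubles the string, multiplying appends it.
def repeat(s, n)
  result = ""          # the empty string is the identity element
  while n > 0
    result += s if n.odd?
    s += s             # "square": double the string
    n /= 2
  end
  result
end

puts repeat('Abc', 6)   # AbcAbcAbcAbcAbcAbc
```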

The call **repeat ( 'Abc', 6 )** returns the string **AbcAbcAbcAbcAbcAbc**.

**Calculation of products of powers**

Exponentiation by squaring may also be used to calculate the product of two or more powers. If the underlying group or semigroup is

commutative, then it is often possible to reduce the number of multiplications by computing the product simultaneously.

**Example**

The formula a^{7}×b^{5} may be calculated within 3 steps:

:((a)^{2}×a)^{2}×a (four multiplications for calculating a^{7})
:((b)^{2})^{2}×b (three multiplications for calculating b^{5})
:(a^{7})×(b^{5}) (one multiplication to calculate the product of the two)

so one gets eight multiplications in total.

A faster solution is to calculate both powers simultaneously:

:((a×b)^{2}×a)^{2}×a×b

which needs only 6 multiplications in total. Note that a×b is calculated twice; the result can be stored after the first calculation, which reduces the count of multiplications to 5.

Example with numbers:

:2^{7}×3^{5} = ((2×3)^{2}×2)^{2}×2×3 = (6^{2}×2)^{2}×6 = 72^{2}×6 = 31,104

Calculating the powers simultaneously instead of calculating them separately always reduces the count of multiplications if at least two of the exponents are greater than 1.
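The simultaneous scheme above can be sketched in Ruby with an explicit multiplication counter (the counter is an addition for illustration only):

```ruby
# Compute 2^7 * 3^5 as ((a*b)^2 * a)^2 * a * b, tallying multiplications
# to confirm the count of 6. Storing a*b and reusing it for the final
# two factors would reduce the count to 5, as noted above.
a, b = 2, 3
mults = 0
mul = lambda { |u, v| mults += 1; u * v }

ab  = mul.(a, b)                    # 1: a*b = 6
t   = mul.(mul.(ab, ab), a)         # 2,3: (a*b)^2 * a = 72
res = mul.(mul.(mul.(t, t), a), b)  # 4,5,6: (...)^2 * a * b
puts res     # 31104
puts mults   # 6
```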

**Using transformation**

The example above, a^{7}×b^{5}, may also be calculated with only 5 multiplications if the expression is transformed before calculation:

:a^{7}×b^{5} = a^{2}×(ab)^{5} with ab := a×b

- ab := a×b (one multiplication)
- a^{2}×(ab)^{5} = ((ab)^{2}×a)^{2}×ab (four multiplications)
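The transformed computation can be checked the same way in Ruby (the multiplication counter is again an addition for illustration):

```ruby
# Compute 2^7 * 3^5 via the transformation a^2 * (ab)^5 = ((ab)^2 * a)^2 * ab,
# which needs only 5 multiplications in total.
a, b = 2, 3
mults = 0
mul = lambda { |u, v| mults += 1; u * v }

ab  = mul.(a, b)             # 1: ab := a*b
t   = mul.(mul.(ab, ab), a)  # 2,3: (ab)^2 * a
res = mul.(mul.(t, t), ab)   # 4,5: (...)^2 * ab
puts res     # 31104
puts mults   # 5
```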

Generalizing the transformation yields the following scheme:

For calculating a^{A}×b^{B}×...×m^{M}×n^{N}

1st: define ab := a×b, abc := ab×c, ...

2nd: calculate the transformed expression a^{A-B}×ab^{B-C}×...×abc..m^{M-N}×abc..mn^{N}

Transformation before calculation often reduces the count of multiplications, but in some cases it can also increase the count (see the last of the examples below), so it may be a good idea to check the count of multiplications before using the transformed expression for calculation.

**Examples**

For the following expressions, the count of multiplications is shown for calculating each power separately, calculating them simultaneously without transformation, and calculating them simultaneously after transformation.

Example: a^{7}×b^{5}×c^{3}

 separate: [((a)^{2}×a)^{2}×a] × [((b)^{2})^{2}×b] × [(c)^{2}×c] (**11** multiplications)
 simultaneous: ((a×b)^{2}×a×c)^{2}×a×b×c (**8** multiplications)
 transformation: ab := a×b, abc := ab×c (2 multiplications)
 calculation after that: (a×ab×abc)^{2}×abc (4 multiplications ⇒ **6** in total)

Example: a^{5}×b^{5}×c^{3}

 separate: [((a)^{2})^{2}×a] × [((b)^{2})^{2}×b] × [(c)^{2}×c] (**10** multiplications)
 simultaneous: ((a×b)^{2}×c)^{2}×a×b×c (**7** multiplications)
 transformation: ab := a×b, abc := ab×c (2 multiplications)
 calculation after that: (ab×abc)^{2}×abc (3 multiplications ⇒ **5** in total)

Example: a^{7}×b^{4}×c^{1}

 separate: [((a)^{2}×a)^{2}×a] × [((b)^{2})^{2}] × [c] (**8** multiplications)
 simultaneous: ((a×b)^{2}×a)^{2}×a×c (**6** multiplications)
 transformation: ab := a×b, abc := ab×c (2 multiplications)
 calculation after that: (a×ab)^{2}×a×ab×abc (5 multiplications ⇒ **7** in total)

**Implementation**

**Variation**

From a practical standpoint, the following modification is also useful:

:$\mbox{Power}(x, n) = \begin{cases} 1, & \mbox{if } n = 0 \\ x \times \left(\mbox{Power}\left(x, \frac{n-1}{2}\right)\right)^2, & \mbox{if } n \mbox{ is odd} \\ \left(\mbox{Power}\left(x, \frac{n}{2}\right)\right)^2, & \mbox{if } n \mbox{ is even} \end{cases}$
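A direct Ruby transcription of this variant (a sketch; with truncating integer division, (n−1)/2 and n/2 coincide for odd n, so one halving expression suffices):

```ruby
# Recursive exponentiation by squaring, following the variant above:
# square Power(x, n/2) first, then multiply by x once more if n is odd.
def power(x, n)
  return 1 if n == 0
  half = power(x, n / 2)   # (n-1)/2 == n/2 for odd n (integer division)
  n.odd? ? x * half * half : half * half
end

puts power(3, 10)   # 59049
```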

It is very similar in properties to the aforementioned approach. While recursion is a natural way to express the calculation, it can easily be translated into iterative form. It may also provide some computational advantage (e.g., in the case of small "x" and "n" = 3*2^{"m"}) as well as reduced memory consumption.

**Alternatives and generalizations**

Exponentiation by squaring can be viewed as a suboptimal

addition-chain exponentiation algorithm: it computes the exponent via an addition chain consisting of repeated exponent doublings (squarings) and/or incrementing exponents by "one" (multiplying by "x") only. More generally, if one allows "any" previously computed exponents to be summed (by multiplying those powers of "x"), one can sometimes perform the exponentiation using fewer multiplications (but typically using more memory). The smallest power where this occurs is for "n" = 15:

:$x^{15} = x \times (x \times [x \times x^2]^2)^2$ (squaring, 6 multiplies)
:$x^{15} = x^3 \times ([x^3]^2)^2$ (optimal addition chain, 5 multiplies if "x"^{3} is re-used)

In general, finding the optimal addition chain for a given exponent is a hard problem, for which no efficient algorithms are known, so optimal chains are typically used only for small exponents (e.g. in compilers, where the chains for small powers have been pre-tabulated). However, there are a number of heuristic algorithms that, while not optimal, use fewer multiplications than exponentiation by squaring at the cost of additional bookkeeping work and memory usage. Regardless, the number of multiplications never grows more slowly than Θ(log "n"), so these algorithms improve asymptotically upon exponentiation by squaring by a constant factor at best.
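The two evaluation orders for "x"^{15} can be checked in Ruby, counting multiplications (an illustrative sketch; the counter is not part of either scheme):

```ruby
# x^15 two ways: square-and-multiply (6 multiplications) versus the
# optimal addition chain that re-uses x^3 (5 multiplications).
x = 2
mults = 0
mul = lambda { |u, v| mults += 1; u * v }

# Square-and-multiply: x * (x * [x * x^2]^2)^2
x2  = mul.(x, x)             # 1: x^2
x3  = mul.(x, x2)            # 2: x^3
x7  = mul.(x, mul.(x3, x3))  # 3,4: x * (x^3)^2 = x^7
s_m = mul.(x7, x7)           # 5: x^14
s_m = mul.(x, s_m)           # 6: x^15
square_count = mults

# Optimal addition chain: x^3 * ([x^3]^2)^2, re-using x^3
mults = 0
x2  = mul.(x, x)     # 1: x^2
x3  = mul.(x, x2)    # 2: x^3
x6  = mul.(x3, x3)   # 3: x^6
x12 = mul.(x6, x6)   # 4: x^12
opt = mul.(x12, x3)  # 5: x^15
puts s_m                            # 32768 = 2^15
puts [square_count, mults].inspect  # [6, 5]
```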

*Wikimedia Foundation. 2010.*