Root-finding algorithm

A root-finding algorithm is a numerical method, or algorithm, for finding a value x such that f(x) = 0, for a given function f. Such an x is called a root of the function f.

This article is concerned with finding scalar roots, real or complex, approximated as floating-point numbers. Finding integer roots or exact algebraic roots are separate problems, whose algorithms have little in common with those discussed here (for integer roots, see Diophantine equation).

Finding a root of f(x) − g(x) = 0 is the same as solving the equation f(x) = g(x). Here, x is called the unknown in the equation. Conversely, any equation can be put in the canonical form f(x) = 0, so equation solving is the same thing as computing (or finding) a root of a function.

Numerical root-finding methods use iteration, producing a sequence of numbers that ideally converges towards a limit (a so-called fixed point of the iteration), which is a root. The first values of this sequence are initial guesses; the method computes subsequent values from the previous ones and the function f.

The behaviour of root-finding algorithms is studied in numerical analysis. Algorithms perform best when they take advantage of known characteristics of the given function. Thus an algorithm to find isolated real roots of a low-degree polynomial in one variable may bear little resemblance to an algorithm for complex roots of a "black-box" function which is not even known to be differentiable. Questions include ability to separate close roots, robustness in achieving reliable answers despite inevitable numerical errors, and rate of convergence.

Specific algorithms

Bisection Method

The simplest root-finding algorithm is the bisection method. It works when f is a continuous function and it requires previous knowledge of two initial guesses, a and b, such that f(a) and f(b) have opposite signs. Although it is reliable, it converges slowly, gaining one bit of accuracy with each iteration.
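
For illustration, here is a minimal sketch of the method in Python (the function name, tolerance and iteration cap are illustrative choices, not taken from any particular library):

    def bisect(f, a, b, tol=1e-12, max_iter=200):
        """Find a root of f in [a, b], assuming a < b and f(a), f(b) have opposite signs."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            m = (a + b) / 2              # the midpoint halves the bracketing interval
            fm = f(m)
            if fm == 0 or (b - a) / 2 < tol:
                return m
            if fa * fm < 0:              # the root lies in [a, m]
                b, fb = m, fm
            else:                        # the root lies in [m, b]
                a, fa = m, fm
        return (a + b) / 2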

Newton's Method (and similar derivative-based methods)

Newton's method assumes the function f to have a continuous derivative. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method, and its convergence is usually quadratic. Newton's method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are the Householder methods; the first one after Newton's method is Halley's method, with cubic order of convergence.
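
A minimal sketch of the Newton iteration, under the same illustrative conventions as the bisection sketch above:

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Newton iteration x_{n+1} = x_n - f(x_n) / f'(x_n), starting from the guess x0."""
        x = x0
        for _ in range(max_iter):
            dfx = df(x)
            if dfx == 0:
                raise ZeroDivisionError("derivative vanished; try a different starting point")
            x_next = x - f(x) / dfx
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    # Example: the positive root of x^2 - 2, i.e. the square root of 2.
    root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)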

Secant Method

Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order is approximately 1.6). A generalization of the secant method in higher dimensions is Broyden's method.
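
A corresponding sketch of the secant iteration (again illustrative; it keeps only the last two iterates):

    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        """Secant iteration: Newton's method with the derivative replaced by a finite difference."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 == f0:
                break                                # the secant line is horizontal; give up
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)     # intersection of the secant line with the x-axis
            if abs(x2 - x1) < tol:
                return x2
            x0, f0 = x1, f1
            x1, f1 = x2, f(x2)
        return x1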

False Position (Regula Falsi)

The false position method, also called the regula falsi method, is like the secant method. However, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method is faster than the bisection method and more robust than the secant method, but requires the two starting points to bracket the root. Ridders' method is a variant on the false-position method that also evaluates the function at the midpoint of the interval, giving faster convergence with similar robustness.
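
A sketch of the false position iteration; unlike the secant method, it always keeps one endpoint on each side of the root (illustrative, with the same caveats as above):

    def false_position(f, a, b, tol=1e-12, max_iter=200):
        """Regula falsi: secant-style steps that preserve a sign-changing bracket [a, b]."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        c = a
        for _ in range(max_iter):
            c = b - fb * (b - a) / (fb - fa)   # where the secant line crosses the x-axis
            fc = f(c)
            if abs(fc) < tol:
                return c
            if fa * fc < 0:                    # the root is between a and c: move b
                b, fb = c, fc
            else:                              # the root is between c and b: move a
                a, fa = c, fc
        return c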

Interpolation

The secant method also arises if one approximates the unknown function f by linear interpolation. When quadratic interpolation is used instead, one arrives at Müller's method. It converges faster than the secant method. A particular feature of this method is that the iterates xn may become complex.

Inverse Interpolation

The appearance of complex iterates can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
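
For illustration, a single inverse quadratic interpolation step fits x as a quadratic function of y through three previous iterates and evaluates it at y = 0 (this sketch assumes the three function values are pairwise distinct):

    def inverse_quadratic_step(f, a, b, c):
        """One inverse quadratic interpolation step through the iterates a, b and c.

        The Lagrange parabola x = q(y) through (f(a), a), (f(b), b), (f(c), c)
        is evaluated at y = 0 to obtain the next iterate.
        """
        fa, fb, fc = f(a), f(b), f(c)
        return (a * fb * fc / ((fa - fb) * (fa - fc))
                + b * fa * fc / ((fb - fa) * (fb - fc))
                + c * fa * fb / ((fc - fa) * (fc - fb)))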

Combinations of Methods

Brent's Method

Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
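
In practice Brent's method is rarely re-implemented by hand; for example, SciPy exposes it as scipy.optimize.brentq, which expects a sign-changing bracket:

    import math
    from scipy.optimize import brentq

    # The root of cos(x) - x in [0, 1]; f(0) > 0 and f(1) < 0, so the bracket is valid.
    root = brentq(lambda x: math.cos(x) - x, 0.0, 1.0)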

Finding roots of polynomials

Much attention has been given to the special case that the function f is a polynomial; there exist root-finding algorithms exploiting the polynomial nature of f. For a univariate polynomial of degree less than five, there are closed-form solutions such as the quadratic formula. However, even this degree-two solution should be used with care to ensure numerical stability. The degree-four solution is unwieldy and troublesome. Higher-degree polynomials have no such general solution, according to the Abel–Ruffini theorem (Ruffini 1799, Abel 1824).
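
To illustrate the stability caveat: the textbook quadratic formula suffers catastrophic cancellation when b² is much larger than 4ac, because one root is then the difference of two nearly equal numbers. A common remedy is to compute the larger-magnitude root first and recover the other from the product of the roots (a sketch, assuming real coefficients, a ≠ 0 and a positive discriminant):

    import math

    def quadratic_roots(a, b, c):
        """Roots of a*x**2 + b*x + c = 0, computed to avoid cancellation.

        Assumes real coefficients, a != 0 and a positive discriminant.
        """
        d = math.sqrt(b * b - 4 * a * c)
        q = -0.5 * (b + math.copysign(d, b))   # sign choice avoids subtracting nearly equal numbers
        return q / a, c / q                    # the second root uses the product of the roots, c/a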

Birge-Vieta's method combines Horner's method of polynomial evaluation with Newton-Raphson to provide a computational speed-up.
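
A sketch of the combined step: Horner's scheme evaluates the polynomial and its derivative in a single pass, after which an ordinary Newton update is applied (coefficients are assumed to be listed from the highest degree down to the constant term; the function name is illustrative):

    def birge_vieta_step(coeffs, x):
        """One Newton step for a polynomial, with p(x) and p'(x) from Horner's scheme."""
        p, dp = coeffs[0], 0.0
        for c in coeffs[1:]:
            dp = dp * x + p      # Horner recurrence for the derivative p'(x)
            p = p * x + c        # Horner recurrence for the value p(x)
        return x - p / dp        # ordinary Newton update

    # Example: iterate towards the real root of x^3 - 2x - 5 (Newton's classic example).
    x = 2.0
    for _ in range(10):
        x = birge_vieta_step([1.0, 0.0, -2.0, -5.0], x)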

For real roots, Sturm's theorem and Descartes' rule of signs provide guides to locating and separating roots. This, combined with interval arithmetic and Newton's method, yields robust algorithms. The algorithm for isolating the roots using Descartes' rule of signs has been called Uspensky's algorithm by its inventors Collins and Akritas.[1] It has been improved by Rouillier and Zimmermann.[2] It has the same worst-case complexity as Sturm's algorithm, but is almost always much faster. It is the default algorithm of the Maple function fsolve.

One possibility is to form the companion matrix of the polynomial. Since the eigenvalues of this matrix coincide with the roots of the polynomial, one can use any eigenvalue algorithm to find the roots of the polynomial. For instance, the classical Bernoulli method for finding the root of greatest modulus, if it exists, turns out to be the power method applied to the companion matrix. The inverse power method, which finds the smallest root first, is what drives the Jenkins–Traub method and gives it its numerical stability and fast convergence even in the presence of multiple or clustered roots.
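
A sketch of the companion-matrix approach using NumPy; this is essentially what numpy.roots does internally (the helper name is illustrative):

    import numpy as np

    def roots_via_companion(coeffs):
        """Roots of the polynomial with the given coefficients (highest degree first)."""
        c = np.asarray(coeffs, dtype=float)
        c = c / c[0]                            # normalize to a monic polynomial
        n = len(c) - 1
        companion = np.zeros((n, n))
        companion[1:, :-1] = np.eye(n - 1)      # ones on the subdiagonal
        companion[:, -1] = -c[1:][::-1]         # last column: negated coefficients, constant term first
        return np.linalg.eigvals(companion)     # eigenvalues of the companion matrix = roots

    # Example: x^2 - 3x + 2 has roots 1 and 2.
    print(roots_via_companion([1.0, -3.0, 2.0]))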

Laguerre's method and Halley's method use second-order derivatives and complex arithmetic, including the complex square root, to exhibit cubic convergence for simple roots, dominating the quadratic convergence displayed by Newton's method.

Bairstow's method uses Newton's method to find quadratic factors of a polynomial with real coefficients. It can determine both real and complex roots of a real polynomial using only real arithmetic.

The simple Durand–Kerner method and the slightly more complicated Aberth method find all the roots simultaneously, using only simple complex-number arithmetic.
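
A sketch of the Durand–Kerner iteration for a monic polynomial (the starting values and the fixed iteration count are illustrative choices):

    def durand_kerner(coeffs, iterations=100):
        """All roots of a monic polynomial (coefficients from highest degree to constant)."""
        n = len(coeffs) - 1

        def p(x):                                  # Horner evaluation of the polynomial
            value = 0
            for c in coeffs:
                value = value * x + c
            return value

        # Distinct complex starting values that are unlikely to be roots.
        z = [(0.4 + 0.9j) ** k for k in range(n)]
        for _ in range(iterations):
            for i in range(n):
                denom = 1
                for j in range(n):
                    if j != i:
                        denom *= z[i] - z[j]       # product over the other current approximations
                z[i] = z[i] - p(z[i]) / denom
        return z

    # Example: the three roots of x^3 - 3x^2 + 3x - 5.
    print(durand_kerner([1, -3, 3, -5]))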

The splitting circle method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting. Another method in this style is the Dandelin–Gräffe method (actually due to Lobachevsky), which factors the polynomial.

Wilkinson's polynomial illustrates that high precision may be necessary when computing the roots of a polynomial given its coefficients: the problem of finding the roots from the coefficients is in general ill-conditioned.

Finding multiple roots of polynomials

If p(x) is a polynomial with a multiple root at r, then finding the value of r can be difficult (inefficient or impossible) for many of the standard root-finding algorithms. Fortunately, there is a technique especially for this case, provided that p is given explicitly as a polynomial in one variable with exact coefficients.

Algorithm

  1. First, we need to determine whether p(x) has a multiple root. If p(x) has a multiple root at r, then its derivative p′(x) will also have a root at r, with multiplicity one less than that of p(x). So we calculate the greatest common divisor of the polynomials p(x) and p′(x), adjust its leading coefficient to be one, and call it g(x). (See Sturm's theorem.) If g(x) = 1, then p(x) has no multiple roots and we can safely use the other root-finding algorithms, which work best when there are no multiple roots, and then exit.
  2. Now suppose that there is a multiple root. Notice that g(x) will have a root of the same multiplicity at r that p′(x) has and the degree of the polynomial g(x) will generally be much less than that of p(x). Recursively call this routine, i.e. go back to step #1 above, using g(x) in place of p(x). Now suppose that we have found the roots of g(x), i.e. we have factored it.
  3. Since r has been found, we can factor (x − r) out of p(x) repeatedly until we have removed all of the roots at r. Repeat this for any other multiple roots until there are no more multiple roots. Then the quotient, i.e. the remaining part of p(x), can be factored in the usual way with one of the other root-finding algorithms. Exit.

Example

Suppose p(x) = x³ + x² − 5x + 3 is the function whose roots we want to find. We calculate p′(x) = 3x² + 2x − 5. Now divide p′(x) into p(x) to get p(x) = p′(x)·((1/3)x + (1/9)) + ((−32/9)x + (32/9)). Divide the remainder by −32/9 to get x − 1, which is monic. Divide x − 1 into p′(x) to get p′(x) = (x − 1)·(3x + 5) + 0. Since the remainder is zero, g(x) = x − 1. So the multiple root of p(x) is r = 1. Dividing p(x) by (x − 1)², we get p(x) = (x + 3)(x − 1)², so the other root is −3, a single root.
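
The same computation can be checked with a computer algebra system, for example SymPy (this is just a verification of the worked example, not part of the algorithm):

    from sympy import symbols, gcd, diff, factor

    x = symbols('x')
    p = x**3 + x**2 - 5*x + 3
    g = gcd(p, diff(p, x))   # greatest common divisor of p and p'
    print(g)                 # x - 1, so r = 1 is a multiple root
    print(factor(p))         # (x - 1)**2 * (x + 3)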

Direct algorithm for multiple root elimination

There is a direct method of eliminating multiple (or repeated) roots from polynomials with exact coefficients (integers, rational numbers, Gaussian integers or rational complex numbers).

Suppose a is a root of polynomial P with multiplicity m > 1. Then a will be a root of the formal derivative P′ with multiplicity m − 1. However, P′ may have additional roots that are not roots of P. For example, if P(x) = (x − 1)³(x − 3)³, then P′(x) = 6(x − 1)²(x − 2)(x − 3)². So 2 is a root of P′, but not of P.

Define G to be the greatest common divisor of P and P′. (Here, G(x) = (x − 1)²(x − 3)².)

Finally, G divides P exactly, so form the quotient Q = P/G. (Here, Q(x) = (x − 1)(x − 3).) The roots of Q are the roots of P, with multiple roots reduced down to single roots.

Since P is a polynomial with exact coefficients, if the algorithm is executed using exact arithmetic, then Q will also be a polynomial with exact coefficients.

Obviously degree(Q) < degree(P). If degree(Q) ≤ 4 then the multiple roots of P may be found algebraically. It is then possible to determine the multiplicities of those roots in P algebraically.
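
An illustrative SymPy version of this elimination, using the example polynomial above:

    from sympy import symbols, gcd, quo, diff, factor

    x = symbols('x')
    P = (x - 1)**3 * (x - 3)**3
    G = gcd(P, diff(P, x))        # (x - 1)**2 * (x - 3)**2, returned in expanded form
    Q = quo(P, G)                 # the quotient P / G
    print(factor(Q))              # equals (x - 1)*(x - 3): each root of P now appears exactly once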

Iterative root-finding algorithms perform well in the absence of multiple roots.

References

  1. ^ George E. Collins and Alkiviadis G. Akritas, "Polynomial Real Root Isolation Using Descartes' Rule of Signs", Proceedings of the 1976 ACM Symposium on Symbolic and Algebraic Computation.
  2. ^ F. Rouillier and P. Zimmermann, "Efficient isolation of polynomial's real roots", Journal of Computational and Applied Mathematics 162 (2004).
