# Remez algorithm
The Remez algorithm (sometimes called the Remes algorithm, or the Remez/Remes exchange algorithm), published by Evgeny Yakovlevich Remez in 1934 [E. Ya. Remez, "Sur la détermination des polynômes d'approximation de degré donnée", Comm. Soc. Math. Kharkov 10, 41 (1934);
"Sur un procédé convergent d'approximations successives pour déterminer les polynômes d'approximation", Compt. Rend. Acad. Sc. 198, 2063 (1934);
"Sur le calcul effectif des polynômes d'approximation de Tschebyscheff", Compt. Rend. Acad. Sc. 199, 337 (1934).
] is an iterative algorithm for finding the best approximation in the uniform norm "L"∞ in a Chebyshev space. A typical example of a Chebyshev space is the subspace of polynomials of degree at most "n" in the space of real continuous functions on an interval, "C"["a", "b"].

The polynomial of best approximation of a given degree is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function.

Procedure

The Remez algorithm starts with a set of $n + 2$ sample points $X$ in the approximation interval, usually the Chebyshev nodes linearly mapped to the interval.

1. Set up a system of $n + 2$ equations, one of the form $b_0 + b_1 x_i + \cdots + b_n x_i^n + (-1)^i E = f(x_i)$ for each $x_i \in X$, and solve the system for the unknowns $b_0, b_1, \ldots, b_n$ and $E$.
2. Use the coefficients $b_i$ to form the polynomial $P_n$.
3. Find the set $M$ of points of local maximum error of $|P_n(x) - f(x)|$.
4. If the errors at every $m \in M$ are of equal magnitude and alternate in sign, then $P_n$ is the minimax approximation polynomial. If not, replace $X$ with $M$ and start over.

The result is called the polynomial of best approximation, the Chebyshev approximation, or the minimax approximation.
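The exchange steps above can be sketched in code. The following is a minimal illustration, not a production implementation: it assumes a smooth target function, approximates step 3 by searching a dense grid, and implements the single-point exchange (each reference point moves to the largest-error point between consecutive sign changes of the error). The function names `solve` and `remez` are illustrative choices, not from the source.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def remez(f, n, a=-1.0, b=1.0, iters=10, grid_size=2000):
    """Degree-n minimax approximation of f on [a, b]: (coefficients, levelled error E)."""
    # Initial reference: n + 2 Chebyshev (extrema) nodes mapped to [a, b].
    X = sorted((a + b) / 2 + (b - a) / 2 * math.cos(math.pi * k / (n + 1))
               for k in range(n + 2))
    grid = [a + (b - a) * t / grid_size for t in range(grid_size + 1)]
    for _ in range(iters):
        # Step 1: solve b_0 + b_1 x_i + ... + b_n x_i^n + (-1)^i E = f(x_i).
        A = [[x ** j for j in range(n + 1)] + [(-1.0) ** i]
             for i, x in enumerate(X)]
        sol = solve(A, [f(x) for x in X])
        coef, E = sol[:n + 1], sol[n + 1]
        err = lambda x: sum(c * x ** j for j, c in enumerate(coef)) - f(x)
        # Steps 3-4: split the grid at sign changes of the error, and move
        # each reference point to the largest-|error| point of its segment.
        segments, seg = [], [grid[0]]
        for x in grid[1:]:
            if err(x) * err(seg[-1]) < 0:
                segments.append(seg)
                seg = []
            seg.append(x)
        segments.append(seg)
        if len(segments) != n + 2:  # equioscillation structure lost; stop.
            break
        X = [max(s, key=lambda x: abs(err(x))) for s in segments]
    return coef, E

# Example: degree-5 minimax approximation of exp on [-1, 1].
coef, E = remez(math.exp, 5)
```

On convergence, $|E|$ matches the maximum error over the interval, and the error curve equioscillates at the $n + 2$ reference points.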

A review of technicalities in implementing the Remez algorithm is given by W. Fraser [ [http://doi.acm.org/10.1145/321281.321282 W. Fraser, "A Survey of Methods of Computing Minimax and Near-Minimax Polynomial Approximations for Functions of a Single Independent Variable", J. ACM 12, 295 (1965).] ] .

On the choice of initialization

The reason the Chebyshev nodes are a common choice for the initial approximation is their behavior in the theory of polynomial interpolation.

When the optimization problem for a function "f" is initialized with the Lagrange interpolant "L""n"("f"), it can be shown that this initial approximation is bounded by

:$\lVert f - L_n(f) \rVert_\infty \le \left(1 + \lVert L_n \rVert_\infty\right) \inf_{p \in P_n} \lVert f - p \rVert$

with the norm or Lebesgue constant of the Lagrange interpolation operator "L""n" of the nodes ("t"1, ..., "t""n"+1) being

:$\lVert L_n \rVert_\infty = \overline{\Lambda}_n(T) = \max_{-1 \le x \le 1} \lambda_n(T; x),$

"T" being the zeros of the Chebyshev polynomials, and the Lebesgue functions being

:$\lambda_n(T; x) = \sum_{j = 1}^{n + 1} \left| l_j(x) \right|, \quad l_j(x) = \prod_{\stackrel{i = 1}{i \ne j}}^{n + 1} \frac{x - t_i}{t_j - t_i}.$

Theodore A. Kilgore [ [http://dx.doi.org/10.1016/0021-9045(78)90013-8 T. A. Kilgore, "A characterization of the Lagrange interpolating projection with minimal Tchebycheff norm", J. Approx. Theory 24, 273 (1978).] ] , Carl de Boor, and Allan Pinkus [ [http://dx.doi.org/10.1016/0021-9045(78)90014-X C. de Boor and A. Pinkus, "Proof of the conjectures of Bernstein and Erdös concerning the optimal nodes for polynomial interpolation", J. Approx. Theory 24, 289 (1978).] ] proved that a unique optimal set of nodes "t""i" exists for each "L""n", although it is not known explicitly for (ordinary) polynomials. Similarly, $\underline{\Lambda}_n(T) = \min_{-1 \le x \le 1} \lambda_n(T; x)$, and the optimality of a choice of nodes can be expressed as $\overline{\Lambda}_n - \underline{\Lambda}_n \ge 0.$

For Chebyshev nodes, which provide a suboptimal but analytically explicit choice, the asymptotic behavior is known to be [F. W. Luttmann and T. J. Rivlin, "Some numerical experiments in the theory of polynomial interpolation", IBM J. Res. Develop. 9, 187 (1965).]

:$\overline{\Lambda}_n(T) = \frac{2}{\pi} \log(n + 1) + \frac{2}{\pi} \left( \gamma + \log\frac{8}{\pi} \right) + \alpha_{n + 1}$

("γ" being the Euler–Mascheroni constant) with

:$0 < \alpha_n < \frac{\pi}{72 n^2} \quad \text{for } n \ge 1,$

and upper bound [T. Rivlin, "The Lebesgue constants for polynomial interpolation", in "Proceedings of the Int. Conf. on Functional Analysis and Its Application", edited by H. G. Garnier "et al." (Springer-Verlag, Berlin, 1974), p. 422; "The Chebyshev polynomials" (Wiley-Interscience, New York, 1974).]

:$\overline{\Lambda}_n(T) \le \frac{2}{\pi} \log(n + 1) + 1.$
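These estimates can be checked directly from the definition of the Lebesgue function. The following is a minimal numerical sketch, with the grid size and the function name `lebesgue_constant` as illustrative choices: it evaluates $\lambda_n(T; x)$ on a dense grid for the zeros of the Chebyshev polynomial $T_{n+1}$ and compares the result with the asymptotic formula and the upper bound above.

```python
import math

def lebesgue_constant(nodes, grid):
    # Lambda-bar = max over the grid of lambda(x) = sum_j |l_j(x)|,
    # where l_j are the Lagrange basis polynomials for the given nodes.
    best = 0.0
    for x in grid:
        lam = 0.0
        for j, tj in enumerate(nodes):
            lj = 1.0
            for i, ti in enumerate(nodes):
                if i != j:
                    lj *= (x - ti) / (tj - ti)
            lam += abs(lj)
        best = max(best, lam)
    return best

n = 10
# n + 1 Chebyshev nodes: the zeros of T_{n+1} on [-1, 1].
nodes = [math.cos((2 * k + 1) * math.pi / (2 * (n + 1))) for k in range(n + 1)]
grid = [-1 + 2 * t / 5000 for t in range(5001)]
Lam = lebesgue_constant(nodes, grid)

gamma = 0.5772156649015329  # Euler-Mascheroni constant
asymptotic = 2 / math.pi * math.log(n + 1) + 2 / math.pi * (gamma + math.log(8 / math.pi))
upper = 2 / math.pi * math.log(n + 1) + 1
```

For $n = 10$ the computed constant agrees with the asymptotic value to within $\alpha_{11} < \pi / (72 \cdot 11^2)$ plus grid resolution, and stays below the upper bound.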

Lev Brutman [ [http://dx.doi.org/10.1137/0715046 L. Brutman, "On the Lebesgue Function for Polynomial Interpolation", SIAM J. Numer. Anal. 15, 694 (1978).] ] obtained the bound for $n \ge 3$, with $\hat{T}$ being the zeros of the expanded Chebyshev polynomials:

:$\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < \overline{\Lambda}_3 - \frac{1}{6} \cot \frac{\pi}{8} + \frac{\pi}{64} \frac{1}{\sin^2(3\pi/16)} - \frac{2}{\pi}\left(\gamma - \log \pi\right) \approx 0.201.$

Rüdiger Günttner [ [http://dx.doi.org/10.1137/0717043 R. Günttner, "Evaluation of Lebesgue Constants", SIAM J. Numer. Anal. 17, 512 (1980).] ] obtained, from a sharper estimate, for $n \ge 40$:

:$\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < 0.0196.$

Variants

Sometimes more than one sample point is replaced at the same time with the locations of nearby maximum absolute differences.

Sometimes relative error is used to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer which uses floating-point arithmetic.
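Concretely, the relative-error variant can be obtained by weighting the levelled error term in step 1 of the procedure by "f". This is a sketch of the standard modification, assuming "f" has no zeros on the interval:

:$b_0 + b_1 x_i + \cdots + b_n x_i^n + (-1)^i E\, f(x_i) = f(x_i),$

so that $E$ becomes the levelled relative error $\left( P_n(x_i) - f(x_i) \right) / f(x_i)$, up to sign.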

References

See also

* Approximation theory

* [http://www.bores.com/courses/intro/filters/4_equi.htm Intro to DSP]
* [http://mathworld.wolfram.com/RemezAlgorithm.html Remez Algorithm from MathWorld]
