Numerical differentiation

Numerical differentiation comprises techniques of numerical analysis for estimating the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function.

Finite difference formulae

The simplest method is to use finite difference approximations.

A simple two-point estimation is to compute the slope of a nearby secant line through the points (x, f(x)) and (x + h, f(x + h)).[1] Here h is a small number representing a small change in x; it can be either positive or negative. The slope of this line is

{f(x+h)-f(x)\over h}.

This expression is Newton's difference quotient.

The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to h. As h approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of f at x is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line:

f'(x)=\lim_{h\to 0}{f(x+h)-f(x)\over h}.

Since substituting 0 for h immediately results in division by zero, the derivative cannot be computed by direct substitution; numerically it must be approximated by evaluating the difference quotient at a small nonzero h.
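
For illustration, here is a minimal C sketch (not part of the original article) of Newton's difference quotient applied to a sample function; the function sin, the point x0 = 1 and the sequence of step sizes are arbitrary choices for the example.

    #include <math.h>
    #include <stdio.h>

    /* Sample function; any smooth function could be used here. */
    static double f(double x) { return sin(x); }

    /* Two-point (forward) difference: (f(x+h) - f(x)) / h. */
    static double forward_diff(double (*fn)(double), double x, double h) {
        return (fn(x + h) - fn(x)) / h;
    }

    int main(void) {
        double x0 = 1.0;              /* point of differentiation (arbitrary) */
        double exact = cos(x0);       /* known derivative of sin, for comparison */
        for (double h = 1e-1; h >= 1e-5; h /= 10.0) {
            double approx = forward_diff(f, x0, h);
            printf("h = %.0e  error = %.3e\n", h, fabs(approx - exact));
        }
        return 0;
    }

The error printed by this sketch shrinks roughly in proportion to h, as described above.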

A simple three-point estimation is to compute the slope of a nearby secant line through the points (x-h,f(x-h)) and (x+h,f(x+h)). The slope of this line is

{f(x+h)-f(x-h)\over 2h}.

More generally, three-point estimation uses the secant line through the points (x - h_1, f(x - h_1)) and (x + h_2, f(x + h_2)). The slope of this line is

{f(x+h_2)-f(x-h_1)\over {h_1+h_2}}.

The slopes of these secant lines differ from the slope of the tangent line by an amount that is approximately proportional to h^2, so that three-point estimation is a more accurate approximation to the tangent line than two-point estimation when h is small.

For the three-point estimation method, the estimation error is given by:

R = {{-f^{(3)}(c)}\over {6}}h^2,

where c is some point between x - h and x + h. This error does not include the rounding error.
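
As a companion to the sketch above, here is a minimal C sketch (again not part of the original article) of the symmetric three-point difference; the sample function and step sizes are arbitrary, and the printed error illustrates the roughly h^2 behaviour.

    #include <math.h>
    #include <stdio.h>

    static double f(double x) { return sin(x); }   /* sample function (arbitrary) */

    /* Symmetric three-point difference: (f(x+h) - f(x-h)) / (2h). */
    static double central_diff(double (*fn)(double), double x, double h) {
        return (fn(x + h) - fn(x - h)) / (2.0 * h);
    }

    int main(void) {
        double x0 = 1.0, exact = cos(x0);
        for (double h = 1e-1; h >= 1e-5; h /= 10.0) {
            double approx = central_diff(f, x0, h);
            /* The error shrinks roughly like h^2, versus h for the two-point formula. */
            printf("h = %.0e  error = %.3e\n", h, fabs(approx - exact));
        }
        return 0;
    }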

Practical considerations using floating point arithmetic

[Figure: example showing the difficulty of choosing h due to both rounding error and formula error.]

An important consideration in practice, when the function is evaluated using floating point arithmetic, is the choice of the step size h. If it is chosen too small, the subtraction will yield a large rounding error; in fact, all the finite difference formulae are ill-conditioned[2] and, due to cancellation, will produce a value of zero if h is small enough.[3] If it is too large, the slope of the secant line will be calculated more accurately, but the secant slope may be a poor estimate of the slope of the tangent.

As discussed in Section 5.7 of Numerical Recipes in C (http://www.nrbook.com/a/bookcpdf/c5-7.pdf), a suitable choice for h is \sqrt{\varepsilon}\,x, where the machine epsilon ε is typically of the order of 2.2×10^-16 for double precision. Another important consideration is to make sure that h and x + h are representable in floating point precision, so that the difference between x + h and x is exactly h. This can be accomplished by placing their values into and out of memory as follows: h \leftarrow \sqrt{\varepsilon}\,x, \xi \leftarrow x+h and h \leftarrow \xi-x. It may be necessary to declare ξ as a volatile variable so that the steps are not undone by compiler optimization.
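
A minimal C sketch of this recipe (assuming IEEE double precision and a nonzero x; for x near zero an absolute step would be needed instead, and the central difference formula is used here just for concreteness):

    #include <float.h>
    #include <math.h>

    /* Central-difference derivative with the step-size adjustment described
       above; DBL_EPSILON is about 2.2e-16 for IEEE doubles. */
    double derivative(double (*f)(double), double x) {
        double h = sqrt(DBL_EPSILON) * x;    /* suggested step h = sqrt(eps) * x */
        volatile double xi = x + h;          /* volatile keeps the round trip through memory */
        h = xi - x;                          /* now x + h and x differ by exactly h */
        return (f(x + h) - f(x - h)) / (2.0 * h);
    }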

Higher order methods

Higher-order methods for approximating the derivative exist, as do methods for higher derivatives.

Given below is the five-point method for the first derivative (five-point stencil in one dimension):

f'(x) = \frac{-f(x+2 h)+8 f(x+h)-8 f(x-h)+f(x-2h)}{12 h}+\frac{h^4}{30}f^{(5)}(c)

where c\in[x-2h,x+2h].
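
As a sketch (not part of the original article), the formula above without its error term translates directly into C; the caller supplies the function, the point and a suitable step size:

    /* Five-point stencil for the first derivative; the truncation error is O(h^4). */
    double five_point_diff(double (*f)(double), double x, double h) {
        return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h);
    }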

More information about those methods can be found in the article about Finite difference methods, while a large set of numerical coefficients for those methods is tabulated in the article on Finite difference coefficient.

Differential quadrature

Differential quadrature is the approximation of derivatives by using weighted sums of function values.[4][5] The name is in analogy with quadrature, meaning numerical integration, where weighted sums are used in methods such as Simpson's rule or the trapezium rule. There are various methods for determining the weight coefficients. Differential quadrature is used to solve partial differential equations.
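
As an illustrative sketch (not part of the original article), one common way to obtain the weights is from Lagrange interpolation on the grid points; the grid size N and the weight formula a_ij = M'(x_i) / ((x_i - x_j) M'(x_j)) for i != j, with M'(x_i) = prod_{k != i}(x_i - x_k) and a_ii = -sum_{j != i} a_ij, are the assumptions made here.

    #define N 5   /* number of grid points (arbitrary for the sketch) */

    /* Build the first-derivative weight matrix a[i][j] for the grid x[0..N-1]
       from Lagrange-interpolation-based weights:
       a_ij = M'(x_i) / ((x_i - x_j) * M'(x_j)) for i != j,
       where M'(x_i) = prod_{k != i} (x_i - x_k), and a_ii = -sum_{j != i} a_ij. */
    void dq_weights(const double x[N], double a[N][N]) {
        double mp[N];
        for (int i = 0; i < N; ++i) {
            mp[i] = 1.0;
            for (int k = 0; k < N; ++k)
                if (k != i) mp[i] *= x[i] - x[k];
        }
        for (int i = 0; i < N; ++i) {
            double row_sum = 0.0;
            for (int j = 0; j < N; ++j) {
                if (j == i) continue;
                a[i][j] = mp[i] / ((x[i] - x[j]) * mp[j]);
                row_sum += a[i][j];
            }
            a[i][i] = -row_sum;
        }
    }

The derivative at grid point i is then approximated by f'(x_i) ≈ sum_j a[i][j] f(x_j).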

Complex variable methods

The classical finite difference approximations for numerical differentiation are ill-conditioned. However, if f is a holomorphic function, real-valued on the real line, which can be evaluated at points in the complex plane near x, then there are stable methods. For example,[3] the first derivative can be calculated by the complex-step derivative formula:[6]

f'(x)\approx Im(f(x + ih))/h.
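
A minimal C99 sketch of the complex-step formula (the sample function, the point and the step size are arbitrary choices; note that h can be made extremely small because no subtraction of nearly equal values occurs):

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    /* Holomorphic sample function, real-valued on the real line (arbitrary). */
    static double complex f(double complex z) { return cexp(z) / csqrt(z*z + 1.0); }

    int main(void) {
        double x = 1.5;
        double h = 1e-200;                       /* tiny step: no cancellation error */
        double deriv = cimag(f(x + I*h)) / h;    /* complex-step derivative */
        printf("f'(%.1f) ~= %.15g\n", x, deriv);
        return 0;
    }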

In general, derivatives of any order can be calculated by means of Cauchy's integral formula:

f^{(n)}(a) = {n! \over 2\pi i} \oint_\gamma {f(z) \over (z-a)^{n+1}}\, \mathrm{d}z,

where the integration is done numerically.
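
One straightforward way to carry out the integration numerically is the trapezoidal rule on a circle of radius r around a, which is highly accurate for periodic integrands. A C99 sketch (the sample function exp, the radius r and the number of nodes M are arbitrary choices for the example):

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    static double complex f(double complex z) { return cexp(z); }  /* sample analytic function */

    /* Approximate f^(n)(a) by the trapezoidal rule applied to Cauchy's formula:
       f^(n)(a) ~= n! / (M r^n) * sum_k f(a + r e^{i t_k}) e^{-i n t_k},
       with M equispaced angles t_k = 2*pi*k / M on a circle of radius r. */
    double complex cauchy_derivative(double complex (*fn)(double complex),
                                     double complex a, int n, double r, int M) {
        const double pi = acos(-1.0);
        double complex sum = 0.0;
        for (int k = 0; k < M; ++k) {
            double t = 2.0 * pi * k / M;
            sum += fn(a + r * cexp(I * t)) * cexp(-I * n * t);
        }
        double fact = 1.0;
        for (int j = 2; j <= n; ++j) fact *= j;
        return fact * sum / (M * pow(r, n));
    }

    int main(void) {
        /* Third derivative of exp at 0 is exactly 1. */
        printf("%.12f\n", creal(cauchy_derivative(f, 0.0, 3, 0.5, 64)));
        return 0;
    }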

Using complex variables for numerical differentiation was initiated by Lyness and Moler in 1967.[7] A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner.[8] An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg.[2]

Notes

  1. Richard L. Burden and J. Douglas Faires (2000), Numerical Analysis, 7th ed., Brooks/Cole. ISBN 0-534-38216-9.
  2. B. Fornberg (1981), "Numerical Differentiation of Analytic Functions", ACM Transactions on Mathematical Software (TOMS).
  3. W. Squire and G. Trapp (1998), "Using Complex Variables to Estimate Derivatives of Real Functions", SIAM Review.
  4. Chang Shu (2000), Differential Quadrature and Its Application in Engineering: Engineering Applications, Springer. ISBN 978-1-85233-209-9.
  5. Yingyan Zhang (2009), Advanced Differential Quadrature Methods, CRC Press. ISBN 978-1-4200-8248-7.
  6. J. R. R. A. Martins, P. Sturdza and J. J. Alonso (2003), "The Complex-Step Derivative Approximation", ACM Transactions on Mathematical Software 29(3):245–262.
  7. J. N. Lyness and C. B. Moler (1967), "Numerical differentiation of analytic functions", SIAM J. Numer. Anal., 4, pp. 202–210.
  8. J. Abate and H. Dubner (1968), "A New Method for Generating Power Series Expansions of Functions", SIAM J. Numer. Anal., 5(1), pp. 102–112.
