Numerical smoothing and differentiation


An experimental datum can be conceptually described as the sum of a signal and some noise, but in practice the two contributions cannot be separated. The purpose of smoothing is to increase the signal-to-noise ratio without greatly distorting the signal, that is, to suppress the noise. One way to achieve this is to fit successive sets of m data points to a polynomial of degree less than m by the method of linear least squares. Once the coefficients of the smoothing polynomial have been calculated, they can be used to give estimates of the signal or its derivatives.


Convolution coefficients

When the data points are equally spaced a relatively simple analytical solution to the least-squares equations can be found. This solution forms the basis of the convolution method of numerical smoothing and differentiation.

Suppose that the data consist of a set of n points {x_i, y_i} (i = 1, ..., n), where x_i is the independent variable and y_i is the observed value. A polynomial will be fitted to each set of m (an odd number) adjacent data points, whose x values are separated by a constant interval h. First, a change of variable is made,

z = \frac{x - \bar x}{h}

where \bar x is the x value of the central point. z takes the values (1 − m)/2, ..., 0, ..., (m − 1)/2 (e.g. for m = 5, z = −2, −1, 0, 1, 2). The polynomial of degree k is defined as

Y = a_0 + a_1 z + a_2 z^2 + \cdots + a_k z^k.

The coefficients a_0, a_1, etc. are obtained by solving the normal equations

\mathbf{a} = \left( \mathbf{J}^\mathsf{T} \mathbf{J} \right)^{-1} \mathbf{J}^\mathsf{T} \mathbf{y}

where the ith row of the Jacobian matrix J has the values 1, z_i, z_i^2, ..., z_i^k. For example, for a quadratic polynomial fitted to 5 points,


\mathbf{J}^\mathsf{T} =
\begin{pmatrix}
 1 &  1 & 1 & 1 & 1 \\
-2 & -1 & 0 & 1 & 2 \\
 4 &  1 & 0 & 1 & 4
\end{pmatrix}

\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} =
\begin{pmatrix} 5 & 0 & 10 \\ 0 & 10 & 0 \\ 10 & 0 & 34 \end{pmatrix}^{-1}
\begin{pmatrix}
 1 &  1 & 1 & 1 & 1 \\
-2 & -1 & 0 & 1 & 2 \\
 4 &  1 & 0 & 1 & 4
\end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix}

\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} =
\begin{pmatrix}
-3/35 & 12/35 & 17/35 & 12/35 & -3/35 \\
-2/10 & -1/10 & 0 & 1/10 & 2/10 \\
 2/14 & -1/14 & -2/14 & -1/14 & 2/14
\end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix}.
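
These numbers can be reproduced directly from the normal equations. The following sketch (mine, not part of the original article; it assumes a numpy environment and the 5-point quadratic case above) builds J from the z values and evaluates (J^T J)^{-1} J^T, whose rows are the convolution coefficients for a_0, a_1 and a_2:

import numpy as np

m, degree = 5, 2
z = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)    # [-2, -1, 0, 1, 2]
J = np.vander(z, degree + 1, increasing=True)     # columns: 1, z, z^2

C = np.linalg.inv(J.T @ J) @ J.T                  # rows: coefficients of a0, a1, a2
print(C[0] * 35)   # [-3. 12. 17. 12. -3.]
print(C[1] * 10)   # [-2. -1.  0.  1.  2.]
print(C[2] * 14)   # [ 2. -1. -2. -1.  2.]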

In this example, a_0 = (−3y_1 + 12y_2 + 17y_3 + 12y_4 − 3y_5)/35. This is the smoothed value at the central point (z = 0) of the five data points used in the calculation. The coefficients (−3, 12, 17, 12, −3)/35 are known as convolution coefficients because they are applied in succession to sets of m points at a time:

Y_j = \sum_{i=-(m-1)/2}^{(m-1)/2} C_i \, y_{j+i}.
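
In code, the running sum above is simply a correlation of the data with the coefficient vector. A minimal sketch (again assuming numpy; the signal and noise level are invented for illustration) applies the 5-point quadratic smooth, leaving the (m − 1)/2 points at each edge unsmoothed:

import numpy as np

coeffs = np.array([-3, 12, 17, 12, -3]) / 35.0       # 5-point quadratic smoothing coefficients

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 101)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)   # signal + noise

smoothed = y.copy()
# the kernel is symmetric, so convolution and correlation coincide;
# 'valid' keeps only windows that lie entirely inside the data
smoothed[2:-2] = np.convolve(y, coeffs, mode='valid')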

Tables of convolution coefficients were published by Savitzky and Golay in 1964, though the procedure for calculating them was known in the 19th century (see E. T. Whittaker and G. Robinson, The Calculus of Observations).

The numerical derivatives are obtained by differentiating Y. For a cubic polynomial


\frac{dY}{dx} = \frac{1}{h}\left( a_1 + 2a_2 z + 3a_3 z^2 \right) = \frac{1}{h} a_1 \text{ at } z = 0

\frac{d^2 Y}{dx^2} = \frac{1}{h^2}\left( 2a_2 + 6a_3 z \right) = \frac{2}{h^2} a_2 \text{ at } z = 0

\frac{d^3 Y}{dx^3} = \frac{6}{h^3} a_3.
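
To make the scale factor 1/h concrete, here is a sketch (mine; the test function and spacing are arbitrary) that estimates dY/dx at the central points using the a_1 row of the 5-point quadratic example above. Because these coefficients are antisymmetric, the vector must be reversed before being passed to np.convolve, which flips its kernel:

import numpy as np

h = 0.1
x = np.arange(0.0, 5.0, h)
y = np.exp(-x)                                    # noiseless test function

d1 = np.array([-2, -1, 0, 1, 2]) / 10.0           # a1 row of the 5-point quadratic fit

dy = np.full_like(y, np.nan)
dy[2:-2] = np.convolve(y, d1[::-1], mode='valid') / h
# dy[2:-2] approximates the exact derivative -exp(-x[2:-2])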

It is not always necessary to use the Savitzky–Golay tables, as algebraic formulae can be derived for the convolution coefficients. For a cubic polynomial the expressions are


C_{0j} = \frac{\left( 3m^2 - 7 - 20j^2 \right)/4}{m\left( m^2 - 4 \right)/3}

C_{1j} = \frac{5\left( 3m^4 - 18m^2 + 31 \right)j - 28\left( 3m^2 - 7 \right)j^3}{m\left( m^2 - 1 \right)\left( 3m^4 - 39m^2 + 108 \right)/15}

C_{2j} = \frac{12mj^2 - m\left( m^2 - 1 \right)}{m^2\left( m^2 - 1 \right)\left( m^2 - 4 \right)/15}

C_{3j} = \frac{-\left( 3m^2 - 7 \right)j + 20j^3}{m\left( m^2 - 1 \right)\left( 3m^4 - 39m^2 + 108 \right)/420}.
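
These expressions translate directly into code. The sketch below (mine; it assumes numpy and an odd m) evaluates C_{0j}..C_{3j} for all j in the window and, as a check, reproduces the 5-point smoothing coefficients, which are the same for the quadratic and cubic fits:

import numpy as np

def cubic_conv_coeffs(m):
    # j runs over the points in the window: -(m-1)/2 ... (m-1)/2
    j = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)
    c0 = ((3 * m**2 - 7 - 20 * j**2) / 4) / (m * (m**2 - 4) / 3)
    c1 = (5 * (3 * m**4 - 18 * m**2 + 31) * j - 28 * (3 * m**2 - 7) * j**3) \
         / (m * (m**2 - 1) * (3 * m**4 - 39 * m**2 + 108) / 15)
    c2 = (12 * m * j**2 - m * (m**2 - 1)) / (m**2 * (m**2 - 1) * (m**2 - 4) / 15)
    c3 = (-(3 * m**2 - 7) * j + 20 * j**3) \
         / (m * (m**2 - 1) * (3 * m**4 - 39 * m**2 + 108) / 420)
    return c0, c1, c2, c3

c0, c1, c2, c3 = cubic_conv_coeffs(5)
print(c0 * 35)    # [-3. 12. 17. 12. -3.]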

Signal distortion and noise reduction

It is inevitable that the signal will be distorted in the convolution process. Both the extent of the distortion and the improvement in signal-to-noise ratio:

  • decrease as the degree of the polynomial increases
  • increase as the width, m, of the convolution function increases

For example, if the noise in all data points has a constant standard deviation σ, smoothing with an m-point linear polynomial reduces the standard deviation of the noise to \sqrt{1/m}\,\sigma, whereas with a quadratic polynomial it is only reduced to approximately \sqrt{9/(4m)}\,\sigma. Thus, for a 9-point quadratic smooth only half of the noise is removed.
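
The quoted reduction factors are easy to verify numerically. A Monte Carlo sketch (mine; it assumes numpy and builds the 9-point quadratic coefficients with the same normal-equation construction used earlier):

import numpy as np

rng = np.random.default_rng(1)
m = 9
z = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)
J = np.vander(z, 3, increasing=True)              # quadratic fit
c0 = (np.linalg.inv(J.T @ J) @ J.T)[0]            # 9-point smoothing coefficients

noise = rng.normal(size=(100_000, m))             # pure noise, sigma = 1
print((noise @ c0).std())                         # ~0.5, i.e. sqrt(9/(4*9))
print(np.sqrt(1.0 / m))                           # ~0.33 for the 9-point moving average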

Frequency characteristics of convolution filters

Convolution maps to multiplication in the Fourier domain (see the code sketch below). The (finite) Fourier transform of a convolution filter shows that it is most efficient at removing high-frequency noise and can therefore be described as a low-pass filter: the noise that is not removed is primarily low-frequency noise.

# Smooth the vector x with an exponentially damped kernel.  The result is a
# vector "smooth" with indeterminate (NaN) values at the edges and smoothed
# values in between.
import numpy as np

def exp_smooth(x, alpha=1.8, cutoff=0.05):
    # span: half-width at which the weight alpha**(-|d|) falls below cutoff
    span = int(np.floor(-np.log(cutoff) / np.log(alpha)))
    weights = alpha ** (-np.abs(np.arange(-span, span + 1)))
    kernel = weights / weights.sum()              # normalise so the weights sum to 1

    nx, nk = len(x), 2 * span + 1                 # nk == len(kernel)
    assert nx > nk
    # zero-pad both sequences to length nx + nk - 1 so that the circular FFT
    # convolution reproduces the ordinary (linear) convolution
    n = nx + nk - 1
    s1 = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(kernel, n), n)

    smooth = np.full(nx, np.nan)
    smooth[span:nx - span] = s1[2 * span:nx]      # keep only fully overlapped windows
    return smooth
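
As a usage example (again assuming numpy), exp_smooth(np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.default_rng(0).normal(size=200)) returns the smoothed trace, with NaN in the first and last span points where the kernel does not fully overlap the data.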

Applications

  • Smoothing by convolution is performed primarily for aesthetic reasons. Fitting statistical models to smoothed data is generally a mistake, since the smoothing process alters the distribution of noise.
  • Location of maxima and minima in experimental data curves. The first derivative of a function is zero at a maximum or minimum, so extrema can be located by finding the zero crossings of the smoothed first derivative (see the sketch after this list).
  • Location of an end-point in a titration curve. An end-point is an inflection point, where the second derivative of the function is zero.
  • Resolution enhancement in spectroscopy. Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum: they have reduced half-width. This allows partially overlapping bands to be "resolved" into separate peaks.
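
A sketch of the maximum-location idea (mine; the peak shape, noise level and variable names are invented for illustration) uses the 5-point quadratic derivative coefficients and keeps the positive-to-negative zero crossing with the largest signal value:

import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 201)
h = x[1] - x[0]
y = np.exp(-0.5 * (x - 4.2) ** 2) + rng.normal(scale=0.02, size=x.size)

d1 = np.array([-2, -1, 0, 1, 2]) / 10.0
dy = np.convolve(y, d1[::-1], mode='valid') / h        # derivative at x[2:-2]

crossings = np.where(np.diff(np.sign(dy)) < 0)[0]      # + to - crossings: candidate maxima
best = crossings[np.argmax(y[2:-2][crossings])]        # keep the one with the largest signal
print(x[2:-2][best])                                   # close to the true peak at x = 4.2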


