Gauss–Newton algorithm

The Gauss–Newton algorithm is a method used to solve non-linear least squares problems. It can be seen as a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.

Non-linear least squares problems arise for instance in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.

The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton.

Description

Given m functions r1, …, rm of n variables β = (β1, …, βn), with m ≥ n, the Gauss–Newton algorithm finds the minimum of the sum of squares[1]

 S(\boldsymbol \beta)= \sum_{i=1}^m r_i^2(\boldsymbol \beta).

Starting with an initial guess \boldsymbol \beta^{(0)} for the minimum, the method proceeds by the iterations

 \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \Delta,

where Δ is a small step. Expanding S to second order about \boldsymbol \beta^{(s)}, we have

S(\boldsymbol \beta^{(s)} + \Delta) \approx S(\boldsymbol \beta^{(s)}) + \left[\frac{\partial S}{\partial \beta_i}\right] \Delta + \frac{1}{2} \Delta^\top \left[\frac{\partial^2 S}{\partial \beta_i\partial \beta_j}\right] \Delta.

If we define the Jacobian matrix

 \mathbf{J_r}(\boldsymbol \beta) = \left.\frac{\partial r_i}{\partial \beta_j}\right|_{\boldsymbol \beta},

we can replace the gradient by

\left[\frac{\partial S}{\partial \beta_i}\right] = 2\,\mathbf{r}^\top \mathbf{J_r},

and the Hessian matrix on the right can be approximated by 2\,\mathbf{J_r}^\top \mathbf{J_r} (a good approximation when the residuals are small), giving:

S(\boldsymbol \beta^{(s)} + \Delta) \approx S(\boldsymbol \beta^{(s)}) + 2\,\mathbf{r}^\top \mathbf{J_r}\,\Delta + \Delta^\top \mathbf{J_r}^\top \mathbf{J_r}\,\Delta.

We then take the derivative with respect to Δ and set it equal to zero to find a solution:

S'(\boldsymbol \beta^{(s)} + \Delta) \approx 2\,\mathbf{J_r}^\top \mathbf{r} + 2\,\mathbf{J_r}^\top \mathbf{J_r}\,\Delta = 0.

This can be rearranged (the factors of 2 cancel) to give the normal equations, which can be solved for Δ:

\left(\mathbf{J_r}^\top \mathbf{J_r} \right)\Delta = - \mathbf{ J_r} ^\top \mathbf{r}.
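
As a minimal sketch, the iteration can be written in Python/NumPy as follows, assuming user-supplied callables r and jac (hypothetical names) that return the residual vector and its Jacobian; each step solves the normal equations above for Δ:

import numpy as np

def gauss_newton(r, jac, beta0, n_iter=10):
    # Basic Gauss-Newton iteration: beta <- beta + Delta, where Delta solves
    # the normal equations (J^T J) Delta = -J^T r.
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        res = r(beta)    # residual vector r(beta), shape (m,)
        J = jac(beta)    # Jacobian with entries dr_i/dbeta_j, shape (m, n)
        delta = np.linalg.solve(J.T @ J, -J.T @ res)
        beta = beta + delta
    return beta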

In data fitting, where the goal is to find the parameters β such that a given model function y = f(x, β) best fits some data points (xi, yi), the functions ri are the residuals

r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).

Then, since the residuals depend on β only through f, the Jacobian of the residuals is \mathbf{J_r} = -\mathbf{J_f}, and the increment Δ can be expressed in terms of the Jacobian of the function f as

\left( \mathbf{ J_f}^\top \mathbf{J_f} \right)\Delta = \mathbf{J_f}^\top \mathbf{r}.

Notes

The assumption m ≥ n in the algorithm statement is necessary, as otherwise the matrix \mathbf{J_r}^\top \mathbf{J_r} is not invertible and the normal equations cannot be solved (at least uniquely).

The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions ri. Using Taylor's theorem, we can write at every iteration:

\mathbf{r}(\boldsymbol \beta)\approx \mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta

with \Delta=\boldsymbol \beta-\boldsymbol \beta^s. The task of finding Δ minimizing the sum of squares of the right-hand side, i.e.,

\min_\Delta \left\|\mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta\right\|_2^2,

is a linear least squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.

The normal equations are n simultaneous linear equations in the unknown increments Δ. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of \mathbf{J_r}. For large systems, an iterative method, such as the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of \mathbf{J_r}, the iterations will fail, as \mathbf{J_r}^\top \mathbf{J_r} becomes singular.
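
A sketch of a single increment computed this way in Python/NumPy: handing the linearized problem to numpy.linalg.lstsq (an SVD-based least-squares solver) avoids forming \mathbf{J_r}^\top \mathbf{J_r} explicitly and thereby avoids squaring the condition number of \mathbf{J_r}:

import numpy as np

def gauss_newton_increment(res, J):
    # Solve the linearized problem  min_Delta || J @ Delta + res ||_2  directly,
    # instead of forming the normal equations with J.T @ J.
    delta, *_ = np.linalg.lstsq(J, -res, rcond=None)
    return delta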

Example

Figure: calculated curve obtained with \hat\beta_1=0.362 and \hat\beta_2=0.556 (in blue) versus the observed data (in red).

In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.

In a biology experiment studying the relation between substrate concentration [S] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.

i      1      2      3      4      5      6      7
[S]    0.038  0.194  0.425  0.626  1.253  2.500  3.740
rate   0.050  0.127  0.094  0.2122 0.2729 0.2665 0.3317

It is desired to find a curve (model function) of the form

\text{rate}=\frac{V_\text{max}[S]}{K_M+[S]}

that fits best the data in the least squares sense, with the parameters Vmax and KM to be determined.

Denote by xi and yi the value of [S] and the rate from the table, i=1, \dots, 7. Let β1 = Vmax and β2 = KM. We will find β1 and β2 such that the sum of squares of the residuals

r_i = y_i - \frac{\beta_1x_i}{\beta_2+x_i}   (i=1,\dots, 7)

is minimized.

The Jacobian \mathbf{J_r} of the vector of residuals ri with respect to the unknowns βj is a 7\times 2 matrix whose i-th row has the entries

\frac{\partial r_i}{\partial \beta_1}= -\frac{x_i}{\beta_2+x_i},\  \frac{\partial r_i}{\partial \beta_2}= \frac{\beta_1x_i}{\left(\beta_2+x_i\right)^2}.

Starting with the initial estimates of β1=0.9 and β2=0.2, after five iterations of the Gauss–Newton algorithm the optimal values \hat\beta_1=0.362 and \hat\beta_2=0.556 are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters versus the observed data.
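
The computation above can be reproduced with a short Python/NumPy script (a sketch; exact digits depend on floating-point rounding):

import numpy as np

# Substrate concentration [S] and observed rate from the table above.
x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])
y = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

beta = np.array([0.9, 0.2])   # initial estimates for (Vmax, KM)

for _ in range(5):            # five Gauss-Newton iterations
    r = y - beta[0] * x / (beta[1] + x)                    # residuals r_i
    J = np.column_stack([-x / (beta[1] + x),               # dr_i/dbeta_1
                         beta[0] * x / (beta[1] + x)**2])  # dr_i/dbeta_2
    beta = beta + np.linalg.solve(J.T @ J, -J.T @ r)

r = y - beta[0] * x / (beta[1] + x)
print(beta)          # approximately [0.362, 0.556]
print(np.sum(r**2))  # approximately 0.00784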

Convergence properties

It can be shown[2] that the increment Δ is a descent direction for S, and, if the algorithm converges, then the limit is a stationary point of S. However, convergence is not guaranteed, not even local convergence as in Newton's method.

The rate of convergence of the Gauss–Newton algorithm can approach quadratic.[3] The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix \mathbf{J_r^T  J_r} is ill-conditioned. For example, consider the problem with m = 2 equations and n = 1 variable, given by

 \begin{align}
r_1(\beta) &= \beta + 1 \\
r_2(\beta) &= \lambda \beta^2 + \beta - 1.
\end{align}

The optimum is at β = 0. If λ = 0 then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1 then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.[4]
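
This behaviour can be checked numerically with a small sketch: starting close to β = 0, the error is multiplied by roughly λ at each Gauss–Newton step, so it shrinks for |λ| < 1 and grows for |λ| > 1, while the λ = 0 case converges in a single step:

import numpy as np

def gn_step(beta, lam):
    # One Gauss-Newton step for r1 = beta + 1, r2 = lam*beta^2 + beta - 1.
    r = np.array([beta + 1.0, lam * beta**2 + beta - 1.0])
    J = np.array([1.0, 2.0 * lam * beta + 1.0])   # dr_i/dbeta (single column)
    return beta - (J @ r) / (J @ J)               # normal equations, one unknown

for lam in (0.0, 0.5, 2.0):
    beta, iterates = 1e-3, []
    for _ in range(4):
        beta = gn_step(beta, lam)
        iterates.append(beta)
    print(lam, iterates)
# lam = 0.0: the problem is linear; beta jumps to 0 in one step.
# lam = 0.5: |beta| shrinks by roughly a factor 0.5 per step.
# lam = 2.0: |beta| grows by roughly a factor 2 per step (no local convergence).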

Derivation from Newton's method

In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm is at most quadratic.

The recurrence relation for Newton's method for minimizing a function S of parameters, β, is

 \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g \,

where g denotes the gradient vector of S and H denotes the Hessian matrix of S. Since  S = \sum_{i=1}^m r_i^2, the gradient is given by

g_j=2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial \beta_j}.

Elements of the Hessian are calculated by differentiating the gradient elements, gj, with respect to βk

H_{jk}=2\sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}+r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).

The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by

H_{jk}\approx 2\sum_{i=1}^m J_{ij}J_{ik}

where J_{ij}=\frac{\partial r_i}{\partial \beta_j} are entries of the Jacobian Jr. The gradient and the approximate Hessian can be written in matrix notation as

\mathbf g=2\mathbf{J_r}^\top \mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J_r}^\top \mathbf{J_r}.\,

These expressions are substituted into the recurrence relation above to obtain the operational equations

 \boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)}+\Delta;\quad \Delta = -\left( \mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{J_r}^\top \mathbf{r}.
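
As a small numerical illustration of the dropped term (a sketch, reusing the one-parameter example r1 = β + 1, r2 = λβ² + β − 1 from the convergence section), the exact Hessian and the Gauss–Newton approximation differ exactly by 2 Σ_i r_i ∂²r_i/∂β²:

import numpy as np

def hessians(beta, lam):
    # Exact Hessian of S = r1^2 + r2^2 versus the Gauss-Newton approximation,
    # for the one-parameter residuals r1 = beta + 1, r2 = lam*beta^2 + beta - 1.
    r = np.array([beta + 1.0, lam * beta**2 + beta - 1.0])
    dr = np.array([1.0, 2.0 * lam * beta + 1.0])    # first derivatives dr_i/dbeta
    d2r = np.array([0.0, 2.0 * lam])                # second derivatives d2r_i/dbeta^2
    h_exact = 2.0 * np.sum(dr * dr + r * d2r)
    h_gauss_newton = 2.0 * np.sum(dr * dr)
    return h_exact, h_gauss_newton

print(hessians(beta=0.0, lam=0.1))   # mild curvature: the two Hessians are close
print(hessians(beta=0.0, lam=5.0))   # strong curvature, nonzero residual: they differ badly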

Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation

\left|r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}\right|

that must hold in order for the second-order derivative terms to be ignored may be valid in two cases, for which convergence is to be expected:[5]

  1. The function values ri are small in magnitude, at least around the minimum.
  2. The functions are only "mildly" non-linear, so that \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} is relatively small in magnitude.

Improved versions

With the Gauss–Newton method the sum of squares S may not decrease at every iteration. However, since Δ is a descent direction, unless S(\boldsymbol \beta^s) is a stationary point, it holds that S(\boldsymbol \beta^s+\alpha\Delta) < S(\boldsymbol \beta^s) for all sufficiently small α > 0. Thus, if divergence occurs, one solution is to employ a fraction α of the increment vector Δ in the updating formula

 \boldsymbol \beta^{s+1} = \boldsymbol \beta^s+\alpha\  \Delta.

In other words, the increment vector is too long, but it points "downhill", so going just part of the way will decrease the objective function S. An optimal value for α can be found by using a line search algorithm: the magnitude of α is determined by finding the value that minimizes S, usually using a direct search method on the interval 0 < α < 1.
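
A sketch of this damped update in Python/NumPy, again assuming residual and Jacobian callables r and jac, with a simple backtracking search standing in for a full line search for α:

import numpy as np

def damped_gauss_newton_step(r, jac, beta):
    # Compute the Gauss-Newton increment, then shrink the fraction alpha
    # until the sum of squares S actually decreases along Delta.
    res = r(beta)
    J = jac(beta)
    delta = np.linalg.solve(J.T @ J, -J.T @ res)
    s_old = np.sum(res**2)
    alpha = 1.0
    while np.sum(r(beta + alpha * delta)**2) >= s_old and alpha > 1e-8:
        alpha *= 0.5          # take a smaller fraction of the increment vector
    return beta + alpha * delta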

In cases where the direction of the shift vector is such that the optimal fraction α is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, a trust region method.[1] The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent,

\left(\mathbf{J}^\top \mathbf{J} + \lambda \mathbf{D}\right)\Delta = \mathbf{J}^\top \mathbf{r},

where D is a positive diagonal matrix. Note that when D is the identity matrix and \lambda\to+\infty, then \lambda\Delta\to \mathbf{J}^\top \mathbf{r}, therefore the direction of Δ approaches the direction of steepest descent \mathbf{J}^\top \mathbf{r}.

The so-called Marquardt parameter, λ, may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time λ is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until S decreases; then retain the value from one iteration to the next, but decrease it when possible, until a cut-off value is reached below which the Marquardt parameter can be set to zero, at which point the minimization of S becomes a standard Gauss–Newton minimization.
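
A sketch of this strategy in Python/NumPy for the data-fitting setting above, with D taken as the identity matrix and hypothetical callables f and jac_f for the model and its Jacobian; the Marquardt parameter is increased on divergence and relaxed otherwise:

import numpy as np

def levenberg_marquardt(f, jac_f, x, y, beta0, lam=1e-3, n_iter=50):
    # Levenberg-Marquardt sketch for fitting y ~ f(x, beta).
    # r = y - f(x, beta), J is the Jacobian of f, and each accepted step solves
    # (J^T J + lam * I) Delta = J^T r, as in the modified normal equations above.
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x, beta)
        J = jac_f(x, beta)
        s_old = np.sum(r**2)
        while True:
            delta = np.linalg.solve(J.T @ J + lam * np.eye(len(beta)), J.T @ r)
            if np.sum((y - f(x, beta + delta))**2) < s_old:
                beta = beta + delta
                lam = lam / 3.0          # progress: relax the Marquardt parameter
                break
            lam = lam * 3.0              # divergence: increase the Marquardt parameter
            if lam > 1e12:               # no further improvement possible
                return beta
    return beta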

Related algorithms

In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, an estimate of the full Hessian, \frac{\partial^2 S}{\partial \beta_j \partial\beta_k}, is built up numerically using first derivatives \frac{\partial r_i}{\partial\beta_j} only, so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to non-linear least-squares problems.

Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.

Notes

  1. ^ a b Björck (1996)
  2. ^ Björck (1996), p. 260
  3. ^ Björck (1996), pp. 341–342
  4. ^ Fletcher (1987), p. 113
  5. ^ Nocedal & Wright (1999)[page needed]

References

  • Björck, A. (1996). Numerical methods for least squares problems. Philadelphia: SIAM. ISBN 0-89871-360-9.
  • Fletcher, Roger (1987). Practical methods of optimization (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-91547-8.
  • Nocedal, Jorge; Wright, Stephen (1999). Numerical optimization. New York: Springer. ISBN 0-387-98793-2.


