Nonlinear regression

[Figure: fitted curve; see Michaelis–Menten kinetics for details.]

In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations.

General

The data consist of error-free independent variables (explanatory variables), x, and their associated observed dependent variables (response variables), y. Each y is modeled as a random variable with a mean given by a nonlinear function f(x,β). Systematic error may be present but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an errors-in-variables model, also outside this scope.

For example, the Michaelis–Menten model for enzyme kinetics

 v = \frac{V_\max[\mathrm{S}]}{K_m + [\mathrm{S}]}

can be written as

 f(x,\boldsymbol\beta)= \frac{\beta_1 x}{\beta_2 + x}

where β1 is the parameter Vmax, β2 is the parameter Km, and [S] is the independent variable, x. This function is nonlinear because it cannot be expressed as a linear combination of the βs.
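
As a concrete illustration, the following sketch fits this model by nonlinear least squares with SciPy's curve_fit, which performs the successive-approximation fitting described above. The substrate and rate data are hypothetical, invented only to make the example runnable.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        """f(x, beta) = beta_1 * x / (beta_2 + x)."""
        return vmax * s / (km + s)

    # Hypothetical observations: substrate concentrations [S] and rates v.
    s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
    v = np.array([0.9, 1.6, 3.0, 4.2, 5.1, 5.9, 6.2])

    # Starting values for (Vmax, Km) seed the iterative optimizer.
    beta_hat, cov = curve_fit(michaelis_menten, s, v, p0=[6.0, 1.0])
    print("Vmax =", beta_hat[0], "Km =", beta_hat[1])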

Other examples of nonlinear functions include exponential functions, logarithmic functions, trigonometric functions, power functions, Gaussian functions, and Lorenz curves. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution. See Linearization, below, for more details.

In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression. Usually numerical optimization algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many local minima of the function to be optimized, and even the global minimum may produce a biased estimate. In practice, starting values for the parameters are supplied to the optimization algorithm, which then attempts to find the global minimum of the sum of squares.
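
Because of the possibility of local minima, a common safeguard is a multi-start strategy: repeat the fit from several starting values and keep the solution with the smallest residual sum of squares. A minimal sketch, reusing the hypothetical Michaelis–Menten data above; the starting-value ranges are arbitrary assumptions:

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(beta, x, y):
        # Residuals of the Michaelis-Menten model, y - f(x, beta).
        return y - beta[0] * x / (beta[1] + x)

    s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
    v = np.array([0.9, 1.6, 3.0, 4.2, 5.1, 5.9, 6.2])

    rng = np.random.default_rng(0)
    best = None
    for _ in range(20):
        start = rng.uniform([1.0, 0.1], [10.0, 5.0])  # random (Vmax, Km) starts
        fit = least_squares(residuals, start, args=(s, v))
        if best is None or fit.cost < best.cost:      # cost = 0.5 * sum(res**2)
            best = fit
    print("best parameters:", best.x)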

For details concerning nonlinear data modeling see least squares and non-linear least squares.

Regression statistics

The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order Taylor expansion in the parameters:

 f(x_i,\boldsymbol\beta)\approx f^0+\sum_j J_{ij}\beta_j

where J_{ij}=\frac{\partial f(x_i,\boldsymbol\beta)}{\partial \beta_j}. It follows from this that the least squares estimators are given by

\hat{\boldsymbol{\beta}} \approx \mathbf{(J^{T}J)^{-1}J^{T}y}.

The nonlinear regression statistics are computed and used as in linear regression statistics, but using J in place of X in the formulas. The linear approximation introduces bias into the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.
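
The following sketch illustrates these formulas on the hypothetical Michaelis–Menten data used earlier: it approximates J by central finite differences at assumed fitted parameter values (illustrative numbers only) and forms the approximate parameter covariance s^2 (J^T J)^{-1}.

    import numpy as np

    def model(x, beta):
        # Michaelis-Menten example model f(x, beta).
        return beta[0] * x / (beta[1] + x)

    def jacobian(x, beta, h=1e-6):
        # J[i, j] = d f(x_i, beta) / d beta_j, by central differences.
        J = np.empty((x.size, beta.size))
        for j in range(beta.size):
            step = np.zeros_like(beta)
            step[j] = h
            J[:, j] = (model(x, beta + step) - model(x, beta - step)) / (2 * h)
        return J

    s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
    v = np.array([0.9, 1.6, 3.0, 4.2, 5.1, 5.9, 6.2])
    beta_hat = np.array([6.6, 0.6])   # assumed fitted values, for illustration

    J = jacobian(s, beta_hat)
    resid = v - model(s, beta_hat)
    s2 = resid @ resid / (s.size - beta_hat.size)   # residual variance estimate
    cov = s2 * np.linalg.inv(J.T @ J)               # approximate covariance
    print("standard errors:", np.sqrt(np.diag(cov)))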

Ordinary and weighted least squares

The best-fit curve is often assumed to be that which minimizes the sum of squared residuals. This is the ordinary least squares (OLS) approach. However, in cases where the dependent variable does not have constant variance, a sum of weighted squared residuals may be minimized instead; see weighted least squares. Ideally, each weight should equal the reciprocal of the variance of the observation; weights may also be recomputed on each iteration, in an iteratively reweighted least squares algorithm.
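
A minimal sketch of a weighted fit, again on the hypothetical Michaelis–Menten data: curve_fit's sigma argument takes each observation's standard deviation, so each point is weighted by the reciprocal of its variance as described above. The error model used to build sigma is an invented assumption.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        return vmax * s / (km + s)

    s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
    v = np.array([0.9, 1.6, 3.0, 4.2, 5.1, 5.9, 6.2])
    sigma = 0.05 * v + 0.02   # assumed error model: noise grows with the rate

    beta_hat, cov = curve_fit(michaelis_menten, s, v, p0=[6.0, 1.0],
                              sigma=sigma, absolute_sigma=True)
    print("weighted estimates:", beta_hat)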

Linearization

Transformation

Some nonlinear regression problems can be converted to linear problems by a suitable transformation of the model formulation.

For example, consider the nonlinear regression problem (ignoring the error):

 y = a e^{b x}.

If we take the logarithm of both sides, it becomes

 \ln(y) = \ln(a) + b x,

suggesting estimation of the unknown parameters by a linear regression of ln(y) on x, a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results; these may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in an approximately normal fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations.
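
A minimal sketch of this transformation on invented data: regressing ln(y) on x recovers ln(a) and b by ordinary linear least squares, with no iteration required.

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 5.4, 15.2, 40.1, 110.3])   # roughly y = 2 * exp(x)

    # polyfit returns the slope first, then the intercept of ln(y) ~ x.
    b, ln_a = np.polyfit(x, np.log(y), 1)
    print("a =", np.exp(ln_a), "b =", b)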

For Michaelis–Menten kinetics, the linear Lineweaver–Burk plot

 \frac{1}{v} = \frac{1}{V_\max} + \frac{K_m}{V_\max[\mathrm{S}]}

of 1/v against 1/[S] has been much used. However, because it is very sensitive to error in the data and strongly biases the fit toward one particular range of the independent variable, [S], its use is strongly discouraged.
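
For illustration only, given the caveat above, a sketch of the double-reciprocal fit on the hypothetical data used earlier: the intercept of the regression of 1/v on 1/[S] estimates 1/Vmax, and the slope estimates Km/Vmax.

    import numpy as np

    s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
    v = np.array([0.9, 1.6, 3.0, 4.2, 5.1, 5.9, 6.2])

    # Straight-line fit of 1/v against 1/[S] (Lineweaver-Burk).
    slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    print("Vmax =", vmax, "Km =", km)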

Segmentation

[Figure: yield of mustard vs. soil salinity, fitted by segmented regression.]
Main article: Segmented regression

The independent or explanatory variable (say X) can be split up into classes or segments and linear regression can be performed per segment. Segmented regression with confidence analysis may yield the result that the dependent or response variable (say Y) behaves differently in the various segments.[1]

The figure shows that the soil salinity (X) initially exerts no influence on the crop yield (Y) of mustard (colza), until a critical or threshold value (breakpoint), after which the yield is affected negatively.[2]
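
A minimal sketch of one simple approach, assuming a single breakpoint: fit a separate straight line on each side of every candidate breakpoint and keep the split with the smallest total residual sum of squares. The data are hypothetical, shaped like the mustard-yield example (flat, then declining); dedicated tools such as the SegReg program cited below add confidence analysis.

    import numpy as np

    x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)  # salinity
    y = np.array([1.0, 1.1, 0.9, 1.0, 1.05, 0.9, 0.7, 0.5, 0.35, 0.1])  # yield

    def sse(x, y):
        # Residual sum of squares of a straight-line fit.
        coef = np.polyfit(x, y, 1)
        r = y - np.polyval(coef, x)
        return r @ r

    best_bp, best_sse = None, np.inf
    for i in range(3, x.size - 2):    # keep at least three points per segment
        total = sse(x[:i], y[:i]) + sse(x[i:], y[i:])
        if total < best_sse:
            best_bp, best_sse = x[i], total
    print("estimated breakpoint near x =", best_bp)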


References

  1. ^ R.J. Oosterbaan (1994). Frequency and Regression Analysis. In: H.P. Ritzema (ed.), Drainage Principles and Applications, Publication 16, pp. 175–224. International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands. ISBN 90 70754 3 39. Download as PDF: [1]
  2. ^ R.J. Oosterbaan (2002). Drainage research in farmers' fields: analysis of data. Part of project "Liquid Gold" of the International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands. Download as PDF: [2]. The figure was made with the SegReg program, which can be downloaded freely from [3].

Further reading

  • G.A.F. Seber and C.J. Wild. Nonlinear Regression. New York: John Wiley and Sons, 1989.
  • Meade, N. and T. Islam (1995). "Prediction Intervals for Growth Curve Forecasts". Journal of Forecasting 14:413–430.
  • K. Schittkowski. Data Fitting in Dynamical Systems. Kluwer, 2002.
  • R.M. Bethea, B.S. Duran and T.L. Boullion. Statistical Methods for Engineers and Scientists. New York: Marcel Dekker, Inc., 1985. ISBN 0-8247-7227-X.
