# Euler–Lagrange equation


In the calculus of variations, the Euler–Lagrange equation, or Lagrange's equation, is a differential equation whose solutions are the functions for which a given functional is stationary. It was developed by the Swiss mathematician Leonhard Euler and the Italian-born mathematician Joseph-Louis Lagrange in the 1750s.

Because a differentiable functional is stationary at its local maxima and minima, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing (or maximizing) it. This is analogous to Fermat's theorem in calculus, which states that a differentiable function attains its local extrema only at points where its derivative is zero.

In Lagrangian mechanics, because of Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler–Lagrange equation for the action of the system. In classical mechanics, it is equivalent to Newton's laws of motion, but it has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations (see, for example, the "Field theory" section below).

Statement

The Euler–Lagrange equation is an equation satisfied by a function "q" of a real argument "t" which is a stationary point of the functional

: $S(q) = \int_a^b L(t, q(t), q'(t)) \, \mathrm{d}t$

where:
*"q" is the function to be found, such that "q" is differentiable, "q"("a") = "x""a", and "q"("b") = "x""b";
*"q"′ is the derivative of "q", with "TX" being the tangent bundle of "X" (the space of possible values of derivatives of functions with values in "X");
*"L" is a real-valued function with continuous first partial derivatives.

The Euler–Lagrange equation, then, is the ordinary differential equation

: $L_x(t, q(t), q'(t)) - \frac{\mathrm{d}}{\mathrm{d}t} L_v(t, q(t), q'(t)) = 0,$

where "L""x" and "L""v" denote the partial derivatives of "L" with respect to the second and third arguments, respectively.

If the dimension of the space "X" is greater than 1, this is a system of differential equations, one for each component:

: $\frac{\partial L(t, q(t), q'(t))}{\partial x_i} - \frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial L(t, q(t), q'(t))}{\partial v_i} = 0 \quad \text{for } i = 1, \dots, n.$
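In a computer algebra system, the statement above can be checked mechanically. The following sketch (using sympy, with a harmonic-oscillator Lagrangian chosen purely as an illustration, not taken from the text) forms the Euler–Lagrange expression term by term:

```python
import sympy as sp

# Form L_x(t, q, q') - d/dt L_v(t, q, q') for a sample Lagrangian.
t = sp.symbols('t')
q = sp.Function('q')

T = q(t).diff(t)**2 / 2        # term depending only on q' (the third slot)
V = q(t)**2 / 2                # term depending only on q (the second slot)
L = T - V                      # illustrative harmonic-oscillator Lagrangian

dL_dq = -sp.diff(V, q(t))              # partial derivative in the second slot
dL_dv = sp.diff(T, q(t).diff(t))       # partial derivative in the third slot

el = dL_dq - sp.diff(dL_dv, t)         # the Euler-Lagrange expression
print(el)                              # el = 0 recovers q'' = -q
```

Setting the printed expression to zero gives the expected equation of motion for this Lagrangian.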

Examples

A standard example is finding the real-valued function "f" on the interval ["a", "b"], such that "f"("a") = "c" and "f"("b") = "d", the length of whose graph is as short as possible. The length of the graph of "f" is

: $\ell(f) = \int_{a}^{b} \sqrt{1 + f'(x)^2} \, \mathrm{d}x,$

the integrand function being "L"("x", "y", "y"′) = √(1 + "y"′²), evaluated at ("x", "y", "y"′) = ("x", "f"("x"), "f"′("x")).

The partial derivatives of "L" are

: $\frac{\partial L(x, y, y')}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}} \quad \text{and} \quad \frac{\partial L(x, y, y')}{\partial y} = 0.$

By substituting these into the Euler–Lagrange equation, we obtain

: $\frac{\mathrm{d}}{\mathrm{d}x} \frac{f'(x)}{\sqrt{1 + f'(x)^2}} = 0 \quad \Rightarrow \quad \frac{f'(x)}{\sqrt{1 + f'(x)^2}} = \text{constant} \quad \Rightarrow \quad f'(x) = \text{constant};$

that is, the function must have constant first derivative, and thus its graph is a segment of a straight line.
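This conclusion can be checked symbolically. The sketch below (using sympy; the symbols c and k for the line's slope and intercept are introduced only for this check) evaluates ∂L/∂y′ along a straight line and confirms that its derivative in x vanishes, so the Euler–Lagrange equation holds:

```python
import sympy as sp

x, c, k = sp.symbols('x c k')
f = c * x + k                                 # a straight line, the claimed extremal
Lv = f.diff(x) / sp.sqrt(1 + f.diff(x)**2)    # dL/dy' evaluated along f

# d/dx of dL/dy' is identically zero for the line, as the equation requires
print(sp.diff(Lv, x))                         # 0
```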

Classical mechanics

Particle in a conservative force field

The motion of a single particle in a conservative force field (for example, the gravitational force) can be determined by requiring the action to be stationary, by Hamilton's principle. The action for this system is

: $S = \int_{t_0}^{t_1} L(t, \mathbf{x}(t), \dot{\mathbf{x}}(t)) \, \mathrm{d}t,$

where x("t") is the position of the particle at time "t". The dot above is Newton's notation for the time derivative: thus ẋ("t") is the particle velocity, v("t"). In the equation above, "L" is the Lagrangian (the kinetic energy minus the potential energy):

: $L(t, \mathbf{x}, \mathbf{v}) = \frac{1}{2} m \sum_{i=1}^{3} v_i^2 - U(\mathbf{x}),$

where:
*"m" is the mass of the particle (assumed to be constant in classical physics);
*"v""i" is the "i"-th component of the vector v in a Cartesian coordinate system (the same notation will be used for other vectors);
*"U" is the potential of the conservative force.

In this case, the Lagrangian does not vary with its first argument "t". (By Noether's theorem, such symmetries of the system correspond to conservation laws. In particular, the invariance of the Lagrangian with respect to time implies the conservation of energy.)

By partial differentiation of the above Lagrangian, we find

: $\frac{\partial L(t, \mathbf{x}, \mathbf{v})}{\partial x_i} = -\frac{\partial U(\mathbf{x})}{\partial x_i} = F_i(\mathbf{x}) \quad \text{and} \quad \frac{\partial L(t, \mathbf{x}, \mathbf{v})}{\partial v_i} = m v_i = p_i,$

where the force is F = −∇"U" (the negative gradient of the potential, by definition of conservative force), and p is the momentum. By substituting these into the Euler–Lagrange equation, we obtain a system of second-order differential equations for the coordinates on the particle's trajectory,

: $F_i(\mathbf{x}(t)) = \frac{\mathrm{d}}{\mathrm{d}t} m \dot{x}_i(t) = m \ddot{x}_i(t),$

which can be solved on the interval ["t"0, "t"1], given the boundary values "x""i"("t"0) and "x""i"("t"1). In vector notation, this system reads

: $\mathbf{F}(\mathbf{x}(t)) = m \ddot{\mathbf{x}}(t)$

or, using the momentum,

: $\mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t},$

which is Newton's second law.
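The derivation above can be reproduced symbolically for one coordinate. In this sketch (using sympy, with a harmonic potential U = kx²/2 as an assumed illustrative form), setting the Euler–Lagrange expression to zero yields Newton's second law m ẍ = F:

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')

U = k * x(t)**2 / 2                        # illustrative potential energy
L = m * x(t).diff(t)**2 / 2 - U            # kinetic minus potential

dL_dx = -sp.diff(U, x(t))                  # = F_i, the force component
dL_dv = sp.diff(m * x(t).diff(t)**2 / 2, x(t).diff(t))   # = p_i, the momentum

el = dL_dx - sp.diff(dL_dv, t)             # Euler-Lagrange expression
# el = 0 reads  -k x - m x'' = 0,  i.e.  m x'' = F = -dU/dx
print(sp.simplify(el))
```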

Field theory

Field theories, both classical field theory and quantum field theory, deal with continuous coordinates and, like classical mechanics, have their own Euler–Lagrange equation of motion for a field,

: $\partial_\mu \left( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \psi)} \right) - \frac{\partial \mathcal{L}}{\partial \psi} = 0,$

where $\psi$ is the field and $\partial_\mu$ is a vector differential operator:

: $\partial_\mu = \left( \frac{1}{c} \frac{\partial}{\partial t}, \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right).$

Note: not all classical fields are assumed to be commuting/bosonic variables. Some of them (like the Dirac field, the Weyl field, and the Rarita–Schwinger field) are fermionic, so when deriving the field equations from the Lagrangian density, one must choose whether to use the right or the left derivative of the Lagrangian density (which is a boson) with respect to the fields and their first spacetime derivatives, which are fermionic/anticommuting objects.

There are several examples of applying the Euler–Lagrange equation to various Lagrangians.
*Dirac equation
*Electromagnetic tensor
*Korteweg–de Vries equation
*Quantum electrodynamics
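As a minimal concrete instance of the field equation above (a sketch using sympy, with c = 1 and a free scalar-field Lagrangian density chosen as an assumed example, distinct from the Lagrangians listed), the Euler–Lagrange equation yields the Klein–Gordon equation:

```python
import sympy as sp

t, x, m = sp.symbols('t x m', positive=True)
psi = sp.Function('psi')

pt = psi(t, x).diff(t)    # partial_t psi
px = psi(t, x).diff(x)    # partial_x psi

# Free scalar-field Lagrangian density in 1+1 dimensions (c = 1):
Ldens = pt**2 / 2 - px**2 / 2 - m**2 * psi(t, x)**2 / 2

# partial_mu (dL/d(partial_mu psi)) - dL/dpsi, where dL/dpsi = -m^2 psi
el = sp.diff(sp.diff(Ldens, pt), t) + sp.diff(sp.diff(Ldens, px), x) \
    - (-m**2 * psi(t, x))
print(sp.expand(el))      # psi_tt - psi_xx + m^2 psi: the Klein-Gordon operator
```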

Variations for functions in several variables

A multi-dimensional generalization comes from considering a function of "n" variables. If &Omega; is some surface, then

: $S = \int_{\Omega} L(f, x_1, \dots, x_n, f_{x_1}, \dots, f_{x_n}) \, \mathrm{d}\Omega$

is extremized only if "f" satisfies the partial differential equation

: $\frac{\partial L}{\partial f} - \sum_{i=1}^{n} \frac{\partial}{\partial x_i} \frac{\partial L}{\partial f_{x_i}} = 0.$

When "n" = 2 and "L" is the energy functional, this leads to the soap-film minimal surface problem.
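As a quick instance of this partial differential equation (a sketch using sympy, with the Dirichlet energy density L = (f_x² + f_y²)/2 as an illustrative integrand, the small-slope version of the soap-film energy), the Euler–Lagrange equation reduces to Laplace's equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

fx = f(x, y).diff(x)
fy = f(x, y).diff(y)
L = (fx**2 + fy**2) / 2       # Dirichlet energy density (illustrative choice)

# dL/df = 0 here, since L depends only on the first derivatives of f
el = -sp.diff(sp.diff(L, fx), x) - sp.diff(sp.diff(L, fy), y)
print(sp.simplify(el))        # -(f_xx + f_yy): el = 0 is Laplace's equation
```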

History

The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.

Lagrange solved this problem in 1755 and sent the solution to Euler. The two further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766. [http://numericalmethods.eng.usf.edu/anecdotes/lagrange.pdf A short biography of Lagrange]

Proof

The derivation of the one-dimensional Euler–Lagrange equation is one of the classic proofs in mathematics. It relies on the fundamental lemma of calculus of variations.

We wish to find a function $f$ which satisfies the boundary conditions "f"("a") = "c", "f"("b") = "d", and which extremizes the cost functional

: $J = \int_a^b F(x, f(x), f'(x)) \, \mathrm{d}x.$

We assume that "F" has continuous first partial derivatives. A weaker assumption can be used, but the proof becomes more difficult.

If "f" extremizes the cost functional subject to the boundary conditions, then any slight perturbation of "f" that preserves the boundary values must either increase "J" (if "f" is a minimizer) or decrease "J" (if "f" is a maximizer).

Let "g"ε("x") = "f"("x") + εη("x") be such a perturbation of "f", where η("x") is a differentiable function satisfying η("a") = η("b") = 0. Then define

: $J(\varepsilon) = \int_a^b F(x, g_\varepsilon(x), g_\varepsilon'(x)) \, \mathrm{d}x.$

We now wish to calculate the total derivative of "J" with respect to ε:

: $\frac{\mathrm{d}J}{\mathrm{d}\varepsilon} = \int_a^b \frac{\mathrm{d}F}{\mathrm{d}\varepsilon}(x, g_\varepsilon(x), g_\varepsilon'(x)) \, \mathrm{d}x.$

It follows from the definition of the total derivative that

: $\frac{\mathrm{d}F}{\mathrm{d}\varepsilon} = \frac{\partial F}{\partial x}\frac{\partial x}{\partial \varepsilon} + \frac{\partial F}{\partial g_\varepsilon}\frac{\partial g_\varepsilon}{\partial \varepsilon} + \frac{\partial F}{\partial g_\varepsilon'}\frac{\partial g_\varepsilon'}{\partial \varepsilon} = \eta(x) \frac{\partial F}{\partial g_\varepsilon} + \eta'(x) \frac{\partial F}{\partial g_\varepsilon'}.$

So

: $\frac{\mathrm{d}J}{\mathrm{d}\varepsilon} = \int_a^b \left[ \eta(x) \frac{\partial F}{\partial g_\varepsilon} + \eta'(x) \frac{\partial F}{\partial g_\varepsilon'} \right] \mathrm{d}x.$

When ε = 0 we have "g"ε = "f", and since "f" is an extremal of "J" it follows that "J"′(0) = 0, i.e.

: $J'(0) = \int_a^b \left[ \eta(x) \frac{\partial F}{\partial f} + \eta'(x) \frac{\partial F}{\partial f'} \right] \mathrm{d}x = 0.$

The next crucial step is to use integration by parts on the second term, yielding

: $0 = \int_a^b \left[ \frac{\partial F}{\partial f} - \frac{\mathrm{d}}{\mathrm{d}x} \frac{\partial F}{\partial f'} \right] \eta(x) \, \mathrm{d}x + \left[ \eta(x) \frac{\partial F}{\partial f'} \right]_a^b.$

Using the boundary conditions on "&eta;", we get that

: $0 = \int_a^b \left[ \frac{\partial F}{\partial f} - \frac{\mathrm{d}}{\mathrm{d}x} \frac{\partial F}{\partial f'} \right] \eta(x) \, \mathrm{d}x.$

Applying the fundamental lemma of calculus of variations now yields the Euler–Lagrange equation

: $0 = \frac{\partial F}{\partial f} - \frac{\mathrm{d}}{\mathrm{d}x} \frac{\partial F}{\partial f'}.$
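The argument can also be illustrated numerically. The sketch below is a toy example under explicit assumptions: the arc-length integrand on [0, 1] with boundary values 0 and 1, so the extremal is f(x) = x, and η(x) = sin(πx) as the perturbation. A central difference then estimates J′(0) and finds it vanishes:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 2001)

def trap(vals):
    # simple trapezoidal rule on the grid xs
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(xs) / 2.0))

def J(eps):
    g = xs + eps * np.sin(np.pi * xs)   # g_eps = f + eps * eta, eta(0) = eta(1) = 0
    gp = np.gradient(g, xs)             # numerical derivative g_eps'
    return trap(np.sqrt(1.0 + gp**2))   # arc length of the perturbed graph

h = 1e-4
dJ = (J(h) - J(-h)) / (2.0 * h)         # central-difference estimate of J'(0)
print(abs(dJ) < 1e-4)                   # True: the first variation vanishes
print(J(0.1) > J(0.0))                  # True: perturbing lengthens the graph
```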

Alternate proof

Given a functional

:$J = \int_a^b F(t, y(t), y'(t)) \, \mathrm{d}t$

on $C^1\left( \left[a, b\right] \right)$ with the boundary conditions $y\left(a\right) = A$ and $y\left(b\right) = B$, we proceed by approximating the extremal curve by a polygonal line with $n$ segments and passing to the limit as the number of segments grows arbitrarily large.

Divide the interval $[a, b]$ into $n + 1$ equal segments with endpoints $t_0 = a, t_1, t_2, \ldots, t_n, t_{n+1} = b$, and let $\Delta t = t_k - t_{k-1}$. Rather than a smooth function $y(t)$, we consider the polygonal line with vertices $(t_0, y_0), \ldots, (t_{n+1}, y_{n+1})$, where $y_0 = A$ and $y_{n+1} = B$. Accordingly, our functional becomes a real function of $n$ variables given by

:$J(y_1, \ldots, y_n) = \sum_{k=0}^{n} F\left(t_k, y_k, \frac{y_{k+1} - y_k}{\Delta t}\right) \Delta t.$

Extremals of this new functional defined on the discrete points $t_0, \ldots, t_{n+1}$ correspond to points where

:$\frac{\partial J(y_1, \ldots, y_n)}{\partial y_m} = 0.$

Evaluating this partial derivative gives that

:$\frac{\partial J}{\partial y_m} = F_y\left(t_m, y_m, \frac{y_{m+1} - y_m}{\Delta t}\right) \Delta t + F_{y'}\left(t_{m-1}, y_{m-1}, \frac{y_m - y_{m-1}}{\Delta t}\right) - F_{y'}\left(t_m, y_m, \frac{y_{m+1} - y_m}{\Delta t}\right).$

Dividing the above equation by $\Delta t$ gives

:$\frac{\partial J}{\partial y_m \, \Delta t} = F_y\left(t_m, y_m, \frac{y_{m+1} - y_m}{\Delta t}\right) - \frac{1}{\Delta t}\left[ F_{y'}\left(t_m, y_m, \frac{y_{m+1} - y_m}{\Delta t}\right) - F_{y'}\left(t_{m-1}, y_{m-1}, \frac{y_m - y_{m-1}}{\Delta t}\right) \right],$

and taking the limit as $\Delta t \to 0$ of the right-hand side of this expression yields

:$\frac{\delta J}{\delta y} = F_y - \frac{\mathrm{d}}{\mathrm{d}t} F_{y'}.$

The term $\frac{\delta J}{\delta y}$ denotes the variational derivative of the functional $J$, and a necessary condition for a differentiable functional to have an extremum on some function is that its variational derivative at that function vanishes.
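The discretization above can be carried out directly on a computer. This numerical sketch (with the arc-length integrand F(t, y, v) = √(1 + v²) on [0, 1] chosen for illustration) checks that the straight line makes every partial derivative ∂J/∂y_m of the discretized functional vanish:

```python
import numpy as np

# Discretized functional J(y_1, ..., y_n) with y(0) = 0, y(1) = 1.
n = 50
t = np.linspace(0.0, 1.0, n + 2)          # t_0 = a, ..., t_{n+1} = b
dt = t[1] - t[0]

def J(y_interior):
    y = np.concatenate(([0.0], y_interior, [1.0]))   # pin the boundary values
    v = np.diff(y) / dt                              # (y_{k+1} - y_k) / dt
    return float(np.sum(np.sqrt(1.0 + v**2) * dt))

y_line = t[1:-1]                          # interior points of the line y = t

# finite-difference gradient dJ/dy_m of the discretized functional
grad = np.empty(n)
for m in range(n):
    e = np.zeros(n)
    e[m] = 1e-6
    grad[m] = (J(y_line + e) - J(y_line - e)) / 2e-6

print(np.max(np.abs(grad)) < 1e-8)        # True: the line extremizes discrete J
```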

