Hamilton-Jacobi-Bellman equation

The Hamilton-Jacobi-Bellman (HJB) equation is a partial differential equation which is central to optimal control theory.

The solution of the HJB equation is the 'value function', which gives the optimal cost-to-go for a given dynamical system with an associated cost function. Classical variational problems, such as the brachistochrone problem, can also be solved by this method.

The equation is a result of the theory of dynamic programming, which was pioneered in the 1950s by Richard Bellman and coworkers (Bellman 1957). The corresponding discrete-time equation is usually referred to as the Bellman equation. In continuous time, the result can be seen as an extension of earlier work in classical physics on the Hamilton-Jacobi equation by William Rowan Hamilton and Carl Gustav Jacob Jacobi.

Consider the following problem in deterministic optimal control

: \min_u \int_0^T C[x(t),u(t)]\,dt + D[x(T)]

subject to

: \dot{x}(t) = F[x(t),u(t)]

where x(t) is the system state, x(0) is assumed given, and u(t) for 0 ≤ t ≤ T is the control that we are trying to find. For this simple system, the Hamilton-Jacobi-Bellman partial differential equation is

: \frac{\partial}{\partial t} V(x,t) + \min_u \left\{ \left\langle \frac{\partial}{\partial x} V(x,t), F(x,u) \right\rangle + C(x,u) \right\} = 0

subject to the terminal condition

: V(x,T) = D(x).

The unknown V(x, t) in the above PDE is the Bellman 'value function', which represents the cost incurred from starting in state x at time t and controlling the system optimally from then until time T. The HJB equation must be solved backwards in time, starting from t = T and ending at t = 0. (The notation ⟨a, b⟩ denotes the inner product of the vectors a and b.)
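For illustration, the backward-in-time character of the equation can be seen in a small numerical sketch. The dynamics F(x,u) = u, the running cost C(x,u) = x^2 + u^2, the terminal cost D(x) = x^2, and the grids below are all assumptions chosen for brevity, and the crude explicit finite-difference scheme is only a sketch (a careful solver would use an upwind or otherwise monotone discretization).

    import numpy as np

    T, dt = 1.0, 0.01                         # horizon and (small) explicit time step
    xs = np.linspace(-2.0, 2.0, 81)           # state grid
    us = np.linspace(-2.0, 2.0, 41)           # candidate control values

    F = lambda x, u: u                        # assumed dynamics  x_dot = F(x, u)
    C = lambda x, u: x**2 + u**2              # assumed running cost
    D = lambda x: x**2                        # assumed terminal cost

    V = D(xs)                                 # terminal condition V(x, T) = D(x)
    for _ in range(int(round(T / dt))):       # march backwards from t = T to t = 0
        Vx = np.gradient(V, xs)               # finite-difference estimate of dV/dx
        # Hamiltonian: minimum over the control grid of <dV/dx, F(x,u)> + C(x,u)
        H = np.min(Vx[:, None] * F(xs[:, None], us[None, :])
                   + C(xs[:, None], us[None, :]), axis=1)
        V = V + dt * H                        # V(x, t - dt) ≈ V(x, t) + dt * min_u {...}

    print(V[len(xs) // 2])                    # approximate cost-to-go from x = 0 at t = 0

The printed value approximates V(0, 0) on this coarse grid; boundary effects at the edges of the state grid are ignored in this sketch.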

The HJB equation is a sufficient condition for an optimum: if we can solve it for V, then we can recover from V a control u that achieves the minimum cost.
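Explicitly, assuming V is differentiable and the minimum is attained, the optimal feedback control is recovered pointwise as the minimizer inside the HJB equation,

: u^{*}(x,t) = \arg\min_u \left\{ \left\langle \frac{\partial}{\partial x} V(x,t), F(x,u) \right\rangle + C(x,u) \right\}.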

The HJB method can be generalized to stochastic systems as well.
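For illustration, in the standard textbook form of this generalization, the cost in the objective is taken in expectation and the dynamics are driven by a Wiener process W(t) with an assumed diffusion coefficient \sigma,

: dx(t) = F[x(t),u(t)]\,dt + \sigma[x(t),u(t)]\,dW(t),

and the value function then satisfies a second-order HJB equation

: \frac{\partial}{\partial t} V(x,t) + \min_u \left\{ \left\langle \frac{\partial}{\partial x} V(x,t), F(x,u) \right\rangle + C(x,u) + \frac{1}{2}\operatorname{tr}\!\left[ \sigma(x,u)\,\sigma(x,u)^{\top}\, \frac{\partial^2}{\partial x^2} V(x,t) \right] \right\} = 0,

with the same terminal condition V(x,T) = D(x).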

In the general case, the HJB equation does not have a classical (smooth) solution. Several notions of generalized solutions have been developed to cover such situations, including viscosity solutions (Pierre-Louis Lions and Michael Crandall), minimax solutions (Andrei Izmailovich Subbotin), and others.

References

R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, 1957.

