 Constraint algorithm

In mechanics, a constraint algorithm is a method for satisfying constraints for bodies that obey Newton's equations of motion. There are three basic approaches to satisfying such constraints: choosing novel unconstrained coordinates ("internal coordinates"), introducing explicit constraint forces, and minimizing constraint forces implicitly by the technique of Lagrange multipliers or projection methods.
Constraint algorithms are often applied to molecular dynamics simulations. Although such simulations are sometimes carried out in internal coordinates that automatically satisfy the bond-length and bond-angle constraints, they may also be carried out with explicit or implicit constraint forces for the bond lengths and bond angles. Explicit constraint forces typically shorten the timestep significantly, making the simulation less efficient computationally; in other words, more computer power is required to compute a trajectory of a given length. Therefore, internal coordinates and implicit-force constraint solvers are generally preferred.
Mathematical background
The motion of a set of N particles can be described by a set of second-order ordinary differential equations, Newton's second law, which can be written in matrix form

    \mathbf{M} \, \ddot{\mathbf{q}} = -\frac{\partial V}{\partial \mathbf{q}} = \mathbf{f}

where M is a mass matrix and q is the vector of generalized coordinates that describe the particles' positions. For example, the vector q may be the 3N Cartesian coordinates of the particle positions r_{k}, where k runs from 1 to N; in the absence of constraints, M would be the 3N×3N diagonal matrix of the particle masses. The vector f represents the generalized forces and the scalar V(q) represents the potential energy, both of which are functions of the generalized coordinates q.
If M constraints are present, the coordinates must also satisfy M time-independent algebraic equations

    g_j(\mathbf{q}) = 0

where the index j runs from 1 to M. For brevity, these functions g_{j} are grouped into an M-dimensional vector g below. The task is to solve the combined set of differential-algebraic equations (DAE), instead of just the ordinary differential equations (ODE) of Newton's second law.
This problem was studied in detail by Joseph Louis Lagrange, who laid out most of the methods for solving it.^{[1]} The simplest approach is to define new generalized coordinates that are unconstrained; this approach eliminates the algebraic equations and reduces the problem once again to solving an ordinary differential equation. Such an approach is used, for example, in describing the motion of a rigid body; the position and orientation of a rigid body can be described by six independent, unconstrained coordinates, rather than describing the positions of the particles that make it up and the constraints among them that maintain their relative distances. The drawback of this approach is that the equations may become unwieldy and complex; for example, the mass matrix M may become nondiagonal and depend on the generalized coordinates.
A second approach is to introduce explicit forces that work to maintain the constraint; for example, one could introduce strong spring forces that enforce the distances among mass points within a "rigid" body. The two difficulties of this approach are that the constraints are not satisfied exactly, and the strong forces may require very short timesteps, making simulations inefficient computationally.
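The timestep penalty of such explicit spring forces can be seen already on a single stiff bond: for velocity-Verlet integration of a harmonic spring with frequency ω = √(k/m), the integrator is stable only for ω·Δt < 2, so stiffening the spring by a factor of 100 forces a tenfold smaller timestep. A minimal sketch (the spring constants and timesteps below are illustrative choices, not values from the text):

```python
def stays_bounded(k, dt, m=1.0, steps=2000):
    """Integrate x'' = -(k/m) x with velocity Verlet and report whether
    the trajectory stays bounded, i.e. whether the timestep is stable."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -(k / m) * x
        x += v * dt + 0.5 * a * dt * dt
        a_new = -(k / m) * x
        v += 0.5 * (a + a_new) * dt
        if abs(x) > 100.0:          # blew up: timestep too large
            return False
    return True

# soft spring (k = 1, omega = 1): dt = 1.9 is still below the stability
# limit omega*dt = 2; the 100x stiffer spring (omega = 10) needs a
# timestep roughly 10x smaller
print(stays_bounded(1.0, 1.9))      # stable
print(stays_bounded(100.0, 1.9))    # unstable
print(stays_bounded(100.0, 0.19))   # stable again at dt/10
```

The same scaling is why replacing rigid bonds by stiff springs makes a simulation pay for the fastest vibration in the system, even when that vibration is of no physical interest.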
A third approach is to use a method such as Lagrange multipliers or projection to the constraint manifold to determine the coordinate adjustments necessary to satisfy the constraints. Finally, there are various hybrid approaches in which different sets of constraints are satisfied by different methods, e.g., internal coordinates, explicit forces and implicit-force solutions.
Internal coordinate methods
The simplest approach to satisfying constraints in energy minimization and molecular dynamics is to represent the mechanical system in so-called internal coordinates corresponding to unconstrained independent degrees of freedom of the system. For example, the dihedral angles of a protein are an independent set of coordinates that specify the positions of all the atoms without requiring any constraints. The difficulty of such internal-coordinate approaches is twofold: the Newtonian equations of motion become much more complex and the internal coordinates may be difficult to define for cyclic systems of constraints, e.g., in ring puckering or when a protein has a disulfide bond.
The original methods for efficient recursive energy minimization in internal coordinates were developed by Gō and coworkers.^{[2]}^{[3]}
Efficient recursive, internal-coordinate constraint solvers were extended to molecular dynamics.^{[4]}^{[5]} Analogous methods were applied later to other systems.^{[6]}^{[7]}^{[8]}
Lagrange multiplier-based methods
In most molecular dynamics simulations, constraints are enforced using the method of Lagrange multipliers. Given a set of n holonomic constraints at the time t,

    \sigma_k(t) := \left\| \mathbf{x}_{k\alpha}(t) - \mathbf{x}_{k\beta}(t) \right\|^2 - d_k^2 = 0, \qquad k = 1, \ldots, n

where \mathbf{x}_{k\alpha}(t) and \mathbf{x}_{k\beta}(t) are the positions of the two particles involved in the kth constraint at the time t and d_k is the prescribed inter-particle distance.
These constraint equations are added to the potential energy function in the equations of motion, resulting in, for each of the N particles in the system,

    m_i \, \frac{\mathrm{d}^2 \mathbf{x}_i}{\mathrm{d}t^2} = -\frac{\partial}{\partial \mathbf{x}_i} \left[ V\big(\mathbf{x}(t)\big) + \sum_{k=1}^{n} \lambda_k \, \sigma_k(t) \right]

Adding the constraint equations to the potential does not change it, since all σ_k(t) should, ideally, be zero.
Integrating both sides of the equations of motion twice in time yields the constrained particle positions at the time t + Δt:

    \mathbf{x}_i(t + \Delta t) = \hat{\mathbf{x}}_i(t + \Delta t) - \sum_{k=1}^{n} \lambda_k \, \frac{\partial \sigma_k(t)}{\partial \mathbf{x}_i} \, \frac{\Delta t^2}{m_i}

where \hat{\mathbf{x}}_i(t + \Delta t) is the unconstrained (or uncorrected) position of the ith particle after integrating the unconstrained equations of motion.
To satisfy the constraints σ_k(t + Δt) in the next timestep, the Lagrange multipliers must be chosen such that

    \sigma_k(t + \Delta t) = \left\| \mathbf{x}_{k\alpha}(t + \Delta t) - \mathbf{x}_{k\beta}(t + \Delta t) \right\|^2 - d_k^2 = 0

This implies solving the system of n nonlinear equations

    \sigma_j(t + \Delta t) = 0, \qquad j = 1, \ldots, n

simultaneously for the n unknown Lagrange multipliers λ_k.
This system of n nonlinear equations in n unknowns is best solved using Newton's method, where the solution vector \underline{\lambda} is updated using

    \underline{\lambda}^{(i+1)} = \underline{\lambda}^{(i)} - \mathbf{J}_\sigma^{-1} \, \underline{\sigma}(t + \Delta t)

where \mathbf{J}_\sigma is the Jacobian of the equations σ_k with respect to the multipliers λ_l:

    \left( \mathbf{J}_\sigma \right)_{kl} = \frac{\partial \sigma_k}{\partial \lambda_l}
Since not all particles are involved in all constraints, \mathbf{J}_\sigma is block-diagonal and can be solved block by block, i.e., molecule by molecule.
Furthermore, instead of constantly updating the vector \underline{\lambda}, the iteration is started with \underline{\lambda} = \underline{0}, resulting in simpler expressions for σ_k(t) and its derivatives. After each iteration, the unconstrained particle positions are updated using

    \hat{\mathbf{x}}_i(t + \Delta t) := \hat{\mathbf{x}}_i(t + \Delta t) - \sum_{k=1}^{n} \lambda_k \, \frac{\partial \sigma_k(t)}{\partial \mathbf{x}_i} \, \frac{\Delta t^2}{m_i}

The vector \underline{\lambda} is then reset to \underline{0}.
This is repeated until the constraint equations σ_{k}(t + Δt) are satisfied up to a prescribed tolerance.
Although there are a number of algorithms to compute the Lagrange multipliers, they differ only in how they solve the system of equations, usually using quasi-Newton methods.
The SETTLE algorithm
The SETTLE algorithm^{[9]} solves the system of nonlinear equations analytically for n = 3 constraints in constant time. Although it does not scale to larger numbers of constraints, it is very often used to constrain rigid water molecules, which are present in almost all biological simulations and are usually modelled using three constraints (e.g. SPC/E and TIP3P water models).
The SHAKE algorithm
The SHAKE algorithm was the first algorithm developed to satisfy bond geometry constraints during molecular dynamics simulations.^{[10]}
It solves the system of nonlinear constraint equations using the Gauss–Seidel method to approximate the solution of the linear system of equations

    \underline{\lambda} = -\mathbf{J}_\sigma^{-1} \, \underline{\sigma}(t + \Delta t)

in the Newton iteration. This amounts to assuming that \mathbf{J}_\sigma is diagonally dominant and solving the kth equation only for the kth unknown. In practice, we compute

    \lambda_k := \frac{\sigma_k(t + \Delta t)}{\partial \sigma_k(t + \Delta t) / \partial \lambda_k}

for all k = 1, …, n iteratively until the constraint equations σ_k(t + Δt) are solved to a given tolerance.
Each iteration of the SHAKE algorithm costs O(n) operations and the iterations themselves converge linearly.
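The per-constraint update above can be sketched in a few lines. The following is a minimal illustration of a SHAKE-style Gauss–Seidel sweep over pairwise distance constraints; the variable names and the convergence test are ours, not from the original paper. As in the original scheme, each violated constraint is corrected along the bond vector evaluated at the previous timestep:

```python
def shake(pos, ref, masses, bonds, tol=1e-10, max_iter=500):
    """pos:    proposed (unconstrained) positions, list of [x, y, z]
    ref:    positions at the previous step t, where the constraint
            gradients are evaluated
    bonds:  constraints as (i, j, d) triples with target distance d"""
    for _ in range(max_iter):
        converged = True
        for i, j, d in bonds:
            rij = [pos[i][a] - pos[j][a] for a in range(3)]
            sigma = sum(c * c for c in rij) - d * d   # constraint violation
            if abs(sigma) > tol:
                converged = False
                sij = [ref[i][a] - ref[j][a] for a in range(3)]
                inv_mu = 1.0 / masses[i] + 1.0 / masses[j]
                # Gauss-Seidel step: solve the k-th equation for the
                # k-th multiplier only, keeping all the others fixed
                g = sigma / (2.0 * inv_mu * sum(r * s for r, s in zip(rij, sij)))
                for a in range(3):
                    pos[i][a] -= g * sij[a] / masses[i]
                    pos[j][a] += g * sij[a] / masses[j]
        if converged:
            return pos
    raise RuntimeError("SHAKE did not converge")
```

Applied to a single stretched bond, the sweep pulls both particles back onto the constraint within a few iterations; for chains of bonds the corrections couple across constraints, and repeated sweeps converge linearly, as noted above.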
A non-iterative form of SHAKE was developed later.^{[11]}
Several variants of the SHAKE algorithm exist. Although they differ in how they compute or apply the constraints themselves, the constraints are still modelled using Lagrange multipliers which are computed using the Gauss–Seidel method.
The original SHAKE algorithm is limited to mechanical systems with a tree structure, i.e., no closed loops of constraints. A later extension of the method, QSHAKE (Quaternion SHAKE) was developed to amend this.^{[12]} It works satisfactorily for rigid loops such as aromatic ring systems but fails for flexible loops, such as when a protein has a disulfide bond.^{[13]}
Further extensions include RATTLE,^{[14]} WIGGLE^{[15]} and M-SHAKE.^{[16]} RATTLE works the same way as SHAKE^{[17]} but uses the velocity Verlet time-integration scheme. WIGGLE extends SHAKE and RATTLE by using an initial estimate for the Lagrange multipliers λ_{k} based on the particle velocities. Finally, M-SHAKE computes corrections on the constraint forces, achieving better convergence.
A final modification is the P-SHAKE algorithm^{[18]} for rigid or semi-rigid molecules. P-SHAKE computes and updates a pre-conditioner which is applied to the constraint gradients before the SHAKE iteration, causing the Jacobian to become diagonal or strongly diagonally dominant. The decoupled constraints then converge much faster (quadratically as opposed to linearly), at a cost of O(n²).
The M-SHAKE algorithm
The M-SHAKE algorithm^{[19]} solves the nonlinear system of equations using Newton's method directly. In each iteration, the linear system of equations

    \underline{\lambda} = -\mathbf{J}_\sigma^{-1} \, \underline{\sigma}(t + \Delta t)

is solved exactly using an LU decomposition. Each iteration costs O(n³) operations, yet the solution converges quadratically, requiring fewer iterations than SHAKE.
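For a concrete picture, the Newton step can be sketched for the smallest coupled case: n = 2 distance constraints sharing a particle. The 2×2 system J·δλ = −σ is solved exactly in each iteration; for brevity this sketch uses Cramer's rule in place of an LU decomposition, a finite-difference Jacobian, and unit masses and timestep — all simplifications of ours, not details of the published method:

```python
def positions(xhat, ref, bonds, lam):
    """Corrected positions x_i = xhat_i -/+ lam_k * 2*(ref_i - ref_j),
    i.e. the multipliers act along constraint gradients evaluated at the
    reference step (unit masses and unit timestep for brevity)."""
    x = [p[:] for p in xhat]
    for k, (i, j, _) in enumerate(bonds):
        s = [ref[i][a] - ref[j][a] for a in range(3)]
        for a in range(3):
            x[i][a] -= lam[k] * 2.0 * s[a]
            x[j][a] += lam[k] * 2.0 * s[a]
    return x

def sigma(x, bonds):
    """Constraint residuals |x_i - x_j|^2 - d^2."""
    return [sum((x[i][a] - x[j][a]) ** 2 for a in range(3)) - d * d
            for i, j, d in bonds]

def newton_constraints(xhat, ref, bonds, tol=1e-10, max_iter=50, h=1e-7):
    lam = [0.0, 0.0]                       # the two unknown multipliers
    for _ in range(max_iter):
        s = sigma(positions(xhat, ref, bonds, lam), bonds)
        if max(abs(v) for v in s) < tol:
            return positions(xhat, ref, bonds, lam)
        # finite-difference Jacobian J[j][k] = d sigma_j / d lam_k
        J = [[0.0, 0.0], [0.0, 0.0]]
        for k in range(2):
            lp = list(lam)
            lp[k] += h
            sp = sigma(positions(xhat, ref, bonds, lp), bonds)
            for j in range(2):
                J[j][k] = (sp[j] - s[j]) / h
        # solve J * delta = -s exactly (Cramer's rule for the 2x2 case)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        lam[0] += (-s[0] * J[1][1] + s[1] * J[0][1]) / det
        lam[1] += (s[0] * J[1][0] - s[1] * J[0][0]) / det
    raise RuntimeError("Newton iteration did not converge")
```

Because both multipliers are updated simultaneously against the full Jacobian, the coupled constraints are satisfied after only a handful of iterations, in contrast to the one-constraint-at-a-time Gauss–Seidel sweeps of SHAKE.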
This solution was first proposed in 1986 by Ciccotti and Ryckaert^{[20]} under the title "the matrix method", yet differed in the solution of the linear system of equations. Ciccotti and Ryckaert suggest inverting the matrix directly, yet doing so only once, in the first iteration. The first iteration then costs O(n³) operations, whereas the following iterations cost only O(n²) operations (for the matrix-vector multiplication). This improvement comes at a cost, though: since the Jacobian is no longer updated, convergence is only linear, albeit at a much faster rate than for the SHAKE algorithm.
Several variants of this approach based on sparse matrix techniques were studied by Barth et al.^{[21]}
The LINCS algorithm
An alternative constraint method, LINCS (Linear Constraint Solver) was developed in 1997 by Hess, Bekker, Berendsen and Fraaije,^{[22]} and was based on the 1986 method of Edberg, Evans and Morriss (EEM),^{[23]} and a modification thereof by Baranyai and Evans (BE).^{[24]}
LINCS applies Lagrange multipliers to the constraint forces and solves for the multipliers by using a series expansion to approximate the inverse of the Jacobian \mathbf{J}_\sigma:

    (\mathbf{J}\mathbf{J}^{\mathrm{T}})^{-1} = (\mathbf{I} - \mathbf{A})^{-1} = \mathbf{I} + \mathbf{A} + \mathbf{A}^2 + \mathbf{A}^3 + \cdots

in each step of the Newton iteration. This approximation works only for matrices whose eigenvalues are all smaller than 1, making the LINCS algorithm suitable only for molecules with low connectivity.
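The series expansion can be illustrated on a small matrix: a truncated geometric series I + A + A² + … is compared against the exact inverse of (I − A) for a 2×2 matrix whose eigenvalues are well below 1. The matrix below is an arbitrary illustration, not one arising from an actual constraint topology:

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def series_inverse(A, terms):
    """Approximate (I - A)^(-1) by the truncated series I + A + ... + A^terms.
    Converges only if every eigenvalue of A has magnitude below 1."""
    acc = [[1.0, 0.0], [0.0, 1.0]]      # running sum, starts at I
    power = [[1.0, 0.0], [0.0, 1.0]]    # current power of A
    for _ in range(terms):
        power = matmul(power, A)
        acc = [[acc[i][j] + power[i][j] for j in range(2)] for i in range(2)]
    return acc

A = [[0.2, 0.1], [0.0, 0.3]]            # eigenvalues 0.2 and 0.3
I_minus_A = [[0.8, -0.1], [0.0, 0.7]]
approx = series_inverse(A, 30)
residual = matmul(approx, I_minus_A)    # should be very close to the identity
```

With eigenvalues near 1 — i.e., highly connected constraint networks — the series converges slowly or not at all, which is precisely the limitation noted above.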
LINCS has been reported to be 3–4 times faster than SHAKE.^{[22]}
Hybrid methods
Hybrid methods have also been introduced in which the constraints are divided into two groups; the constraints of the first group are solved using internal coordinates whereas those of the second group are solved using constraint forces, e.g., by a Lagrange multiplier or projection method.^{[25]}^{[26]}^{[27]} This approach was pioneered by Lagrange,^{[1]} and results in Lagrange equations of the mixed type.^{[28]}
See also
 Molecular dynamics
 Software for molecular mechanics modeling
References and footnotes
 ^ ^{a} ^{b} Lagrange, JL (1788). Mécanique analytique.
 ^ Noguti, T; Gō N (1983). "A Method of Rapid Calculation of a 2nd Derivative Matrix of Conformational Energy for Large Molecules". Journal of the Physical Society of Japan 52 (10): 3685–3690. doi:10.1143/JPSJ.52.3685.
 ^ Abe, H; Braun W, Noguti T, Gō N (1984). "Rapid Calculation of 1st and 2nd Derivatives of Conformational Energy with respect to Dihedral Angles for Proteins: General Recurrent Equations". Computers and Chemistry 8 (4): 239–247. doi:10.1016/0097-8485(84)85015-9.
 ^ Bae, DS; Haug EJ (1988). "A Recursive Formulation for Constrained Mechanical System Dynamics: Part I. Open Loop Systems". Mechanics of Structures and Machines 15: 359–382.
 ^ Jain, A; Vaidehi N, Rodriguez G (1993). "A Fast Recursive Algorithm for Molecular Dynamics Simulation". Journal of Computational Physics 106 (2): 258–268. Bibcode 1993JCoPh.106..258J. doi:10.1006/jcph.1993.1106.
 ^ Rice, LM; Brünger AT (1994). "Torsion Angle Dynamics: Reduced Variable Conformational Sampling Enhances Crystallographic Structure Refinement". Proteins: Structure, Function, and Genetics 19 (4): 277–290. doi:10.1002/prot.340190403. PMID 7984624.
 ^ Mathiowetz, AM; Jain A, Karasawa N, Goddard III, WA (1994). "Protein Simulations Using Techniques Suitable for Very Large Systems: The Cell Multipole Method for Nonbond Interactions and the NewtonEuler Inverse Mass Operator Method for Internal Coordinate Dynamics". Proteins: Structure, Function, and Genetics 20 (3): 227–247. doi:10.1002/prot.340200304. PMID 7892172.
 ^ Mazur, AK (1997). "Quasi-Hamiltonian Equations of Motion for Internal Coordinate Molecular Dynamics of Polymers". Journal of Computational Chemistry 18 (11): 1354–1364. doi:10.1002/(SICI)1096-987X(199708)18:11<1354::AID-JCC3>3.0.CO;2-K.
 ^ Miyamoto, S; Kollman PA (1992). "SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithm for Rigid Water Models". Journal of Computational Chemistry 13 (8): 952–962. doi:10.1002/jcc.540130805.
 ^ Ryckaert, JP; Ciccotti G, Berendsen HJC (1977). "Numerical Integration of the Cartesian Equations of Motion of a System with Constraints: Molecular Dynamics of n-Alkanes". Journal of Computational Physics 23 (3): 327–341. Bibcode 1977JCoPh..23..327R. doi:10.1016/0021-9991(77)90098-5.
 ^ Yoneya, M; Berendsen HJC, Hirasawa K (1994). "A Noniterative Matrix Method for Constraint MolecularDynamics Simulations". Molecular Simulations 13 (6): 395–405. doi:10.1080/08927029408022001.
 ^ Forester, TR; Smith W (1998). "SHAKE, Rattle, and Roll: Efficient Constraint Algorithms for Linked Rigid Bodies". Journal of Computational Chemistry 19: 102–111. doi:10.1002/(SICI)1096-987X(19980115)19:1<102::AID-JCC9>3.0.CO;2-T.
 ^ McBride, C; Wilson MR, Howard JAK (1998). "Molecular dynamics simulations of liquid crystal phases using atomistic potentials". Molecular Physics 93 (6): 955–964. doi:10.1080/002689798168655.
 ^ Andersen, Hans C. (1983). "RATTLE: A "Velocity" Version of the SHAKE Algorithm for Molecular Dynamics Calculations". Journal of Computational Physics 52: 24–34. doi:10.1016/0021-9991(83)90014-1.
 ^ Lee, SangHo; Kim Palmo, Samuel Krimm (2005). "WIGGLE: A new constrained molecular dynamics algorithm in Cartesian coordinates". Journal of Computational Physics 210: 171–182. doi:10.1016/j.jcp.2005.04.006.
 ^ Lambrakos, S. G.; J. P. Boris, E. S. Oran, I. Chandrasekhar, M. Nagumo (1989). "A Modified SHAKE algorithm for Maintaining Rigid Bonds in Molecular Dynamics Simulations of Large Molecules". Journal of Computational Physics 85 (2): 473–486. doi:10.1016/0021-9991(89)90160-5.
 ^ Leimkuhler, Benedict; Robert Skeel (1994). "Symplectic numerical integrators in constrained Hamiltonian systems". Journal of Computational Physics 112: 117–125. doi:10.1006/jcph.1994.1085.
 ^ Gonnet, Pedro (2007). "P-SHAKE: A quadratically convergent SHAKE in O(n²)". Journal of Computational Physics 220 (2): 740–750. doi:10.1016/j.jcp.2006.05.032.
 ^ Kräutler, Vincent; W. F. van Gunsteren, P. H. Hünenberger (2001). "A Fast SHAKE Algorithm to Solve Distance Constraint Equations for Small Molecules in Molecular Dynamics Simulations". Journal of Computational Chemistry 22 (5): 501–508. doi:10.1002/1096-987X(20010415)22:5<501::AID-JCC1021>3.0.CO;2-V.
 ^ Ciccotti, G.; J. P. Ryckaert (1986). "Molecular Dynamics Simulation of Rigid Molecules". Computer Physics Reports 4 (6): 345–392. doi:10.1016/0167-7977(86)90022-5.
 ^ Barth, Eric; K. Kuczera, B. Leimkuhler, R. Skeel (1995). "Algorithms for constrained molecular dynamics". Journal of Computational Chemistry 16 (10): 1192–1209. doi:10.1002/jcc.540161003.
 ^ ^{a} ^{b} Hess, B; Bekker H, Berendsen HJC, Fraaije JGEM (1997). "LINCS: A Linear Constraint Solver for Molecular Simulations". Journal of Computational Chemistry 18 (12): 1463–1472. doi:10.1002/(SICI)1096-987X(199709)18:12<1463::AID-JCC4>3.0.CO;2-H.
 ^ Edberg, R; Evans DJ, Morriss GP (1986). "Constrained MolecularDynamics Simulations of Liquid Alkanes with a New Algorithm". Journal of Chemical Physics 84 (12): 6933–6939. doi:10.1063/1.450613.
 ^ Baranyai, A; Evans DJ (1990). "New Algorithm for Constrained MolecularDynamics Simulation of Liquid Benzene and Naphthalene". Molecular Physics 70: 53–63. doi:10.1080/00268979000100841.
 ^ Mazur, AK (1999). "Symplectic integration of closed chain rigid body dynamics with internal coordinate equations of motion". Journal of Chemical Physics 111 (4): 1407–1414. doi:10.1063/1.479399.
 ^ Bae, DS; Haug EJ (1988). "A Recursive Formulation for Constrained Mechanical System Dynamics: Part II. Closed Loop Systems". Mechanics of Structures and Machines 15: 481–506.
 ^ Rodriguez, G; Jain A, KreutzDelgado K (1991). "A Spatial Operator Algebra for Manipulator Modeling and Control". The International Journal for Robotics Research 10 (4): 371–381. doi:10.1177/027836499101000406.
 ^ Sommerfeld, Arnold (1952). Lectures on Theoretical Physics, Vol. I: Mechanics. New York: Academic Press. ISBN 0126546703.