The atoms in an MD simulation interact through a set of potentials that make up a forcefield. These potentials are usually parameterized to reproduce experiment or more accurate theory. Summing the potentials as a function of atomic positions produces an overall energy for the system. The negative gradient of this energy provides the force on each atom, which can be used to find the acceleration, velocity, and finally the new position of each atom.

Valence terms typically include bond stretch, angle bend, torsional rotation, and inversion center contributions. Less frequently used terms include various combinations of cross terms, pi-twist representations of torsional barriers, etc. Different forcefields may use different functional forms for the various valence terms, but most share the common characteristic that the energy is a function only of the atomic positions, together with constants depending on the types of the atoms involved in the interaction. A common set of valence terms would include the following:

E_{bond} = \frac{1}{2} K_b (R - R_0)^2    (1)

E_{angle} = \frac{1}{2} K_\theta (\theta - \theta_0)^2    (2)

E_{torsion} = \frac{1}{2} K_\phi [1 - \cos n(\phi - \phi_0)]    (3)

E_{inversion} = \frac{1}{2} K_\psi (\psi - \psi_0)^2    (4)
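As a concrete sketch of how one such valence term is evaluated, the following computes the energy and analytic forces for a single harmonic bond-stretch term; the function name and parameter values are illustrative, not taken from any particular forcefield.

```python
import numpy as np

def bond_stretch(ri, rj, kb, r0):
    """Energy and forces for one harmonic bond, E = 0.5*kb*(R - r0)^2.

    R = |ri - rj|.  The force is the negative energy gradient, so the
    forces on the two atoms are equal and opposite.
    """
    d = ri - rj
    R = np.linalg.norm(d)
    energy = 0.5 * kb * (R - r0) ** 2
    fi = -kb * (R - r0) * d / R   # -dE/dri
    return energy, fi, -fi

# Example: a bond stretched 0.1 beyond its rest length of 1.0
e, fi, fj = bond_stretch(np.array([1.1, 0.0, 0.0]), np.zeros(3), kb=700.0, r0=1.0)
```

The angle, torsion, and inversion terms follow the same pattern: compute the internal coordinate from the atomic positions, then the energy and its negative gradient with respect to each position.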

Nonbonded terms typically include the Coulomb and van der Waals interactions.

E_{Coulomb} = \frac{q_i q_j}{\epsilon R_{ij}}    (5)

E_{vdW} = \frac{A_{ij}}{R_{ij}^{12}} - \frac{B_{ij}}{R_{ij}^{6}}    (6)

Some forcefields use special nonbonded terms to handle hydrogen bonds. These nonbonded components of the energy also depend only on the atomic positions and types.
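To make the nonbonded sum concrete, here is a minimal pairwise loop over Coulomb and Lennard-Jones terms. The uniform A and B parameters are simplifying placeholders; production codes use per-pair parameters, cutoffs, neighbor lists, and bonded-pair exclusions.

```python
import numpy as np

def nonbonded_energy(pos, q, A, B):
    """Coulomb (q_i*q_j/R_ij) plus Lennard-Jones (A/R^12 - B/R^6)
    energy summed over all unique atom pairs.  The dielectric
    constant is folded into the charges for brevity.
    """
    n = len(pos)
    e_coul = e_vdw = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            e_coul += q[i] * q[j] / r
            e_vdw += A / r**12 - B / r**6
    return e_coul, e_vdw
```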

Computing the energy and its gradient, the force, for a given system is thus a matter of identifying all the atoms participating in valence or nonbonded interactions, passing their positions and types to the appropriate functions, and summing the results.

Most widely-used forcefields for atomistic simulations can be placed within this general framework.

Using the Verlet formulation for integrating the equations of motion [1,2], we obtain a procedure for updating the atomic velocities and coordinates that conserves the total energy (kinetic plus potential) in the system. This is known as microcanonical dynamics.

If we start from a minimum of the potential energy with a kinetic energy corresponding to twice the desired temperature, then by equipartition between kinetic and potential energy we should eventually reach equilibrium at the desired temperature. Since the potential surface may be complex, however, and since we cannot always start at a minimum, it may be impossible to maintain a fixed temperature within the dynamics. To overcome this limitation, we can periodically rescale the temperature of the system by appropriately modifying the velocities of the atoms. This allows us to simulate a physically reasonable temperature, at the cost of discontinuities in properties at the rescalings. Typically, the amount of rescaling needed decreases as the system approaches equilibrium, so it is possible to perform a long run without scaling to obtain thermodynamic averages after an initial equilibration period.
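A minimal sketch of such a velocity rescaling, assuming the instantaneous temperature is defined through equipartition ((3/2) N k_B T equals the kinetic energy) and arbitrary units with k_B = 1:

```python
import numpy as np

def rescale_velocities(v, m, t_target, kb=1.0):
    """Scale all velocities by a common factor so the instantaneous
    kinetic temperature matches t_target.

    T is defined by equipartition: (3/2) * N * kb * T = sum of
    0.5 * m * |v|^2 over atoms.  v has shape (N, 3), m shape (N,).
    """
    n = len(v)
    ke = 0.5 * np.sum(m[:, None] * v**2)
    t_now = 2.0 * ke / (3.0 * n * kb)
    return v * np.sqrt(t_target / t_now)
```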

The equations of motion used are as follows, where r, v, F, and m are the atomic positions, velocities, forces, and masses, respectively, and \Delta t is the timestep:

v_n = v_{n-1/2} + \frac{\Delta t}{2} \frac{F_n}{m}, \qquad v_{n+1/2} = v_n + \frac{\Delta t}{2} \frac{F_n}{m}    (7)

r_{n+1} = r_n + \Delta t \, v_{n+1/2}    (8)

We use a double-half-step integration method. A half-step is taken to
determine the velocity at timestep *n* from the velocity at timestep
*n*-1/2, and another half-step is taken to go from the velocity at time
*n* to the velocity at time *n*+1/2, which is needed for the next
step. This method allows the velocity at each half-step to be determined,
which is essential for computing the system's kinetic energy at both time
*n* and *n*+1/2, while at the same time not requiring additional
storage for velocities. The kinetic energy at time *n* is reported and
contributes to the Hamiltonian, while the kinetic energy at time *n*+1/2
is used to compute the instantaneous temperature at time *n*+1/2.
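The double-half-step update described above can be sketched as follows; the `force` callback, array shapes, and units are placeholders:

```python
import numpy as np

def md_step(r, v_half, m, force, dt):
    """One timestep in the double-half-step form.

    v_half holds the velocity at time n-1/2 on entry and at n+1/2 on
    return, so no extra velocity storage is needed; the full-step
    velocity v_n is also returned so the kinetic energy at time n can
    be reported.  `force` maps positions (N, 3) to forces (N, 3).
    """
    a = force(r) / m[:, None]
    v_n = v_half + 0.5 * dt * a    # half-step: n-1/2 -> n
    v_new = v_n + 0.5 * dt * a     # half-step: n -> n+1/2
    r_new = r + dt * v_new         # positions advance a full step
    return r_new, v_new, v_n
```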

The equations of motion used are as follows, where \zeta is the friction coefficient that controls heat exchange between the system and the bath, \sigma is its integral, and \tau_s is a free parameter that is related to the time scale for heat transfer to the bath:

\dot{r} = v    (9)

\dot{v} = \frac{F}{m} - \zeta v    (10)

\dot{\zeta} = \frac{1}{\tau_s^2} \left( \frac{T}{T_B} - 1 \right)    (11)

\dot{\sigma} = \zeta    (12)

Here T is the instantaneous kinetic temperature and T_B is the bath temperature. Discretizing with the double-half-step scheme gives:

v_n = \left[ v_{n-1/2} + \frac{\Delta t}{2} \frac{F_n}{m} \right] \Big/ \left[ 1 + \frac{\Delta t}{2} \zeta_n \right]    (13)

\zeta_{n+1/2} = \zeta_n + \frac{\Delta t}{2 \tau_s^2} \left( \frac{T_n}{T_B} - 1 \right)    (14)

v_{n+1/2} = v_n + \frac{\Delta t}{2} \left( \frac{F_n}{m} - \zeta_{n+1/2} v_n \right)    (15)

r_{n+1} = r_n + \Delta t \, v_{n+1/2}    (16)
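As an illustration of the coupling to the bath, a single explicit-Euler update of the friction coefficient \zeta and the resulting frictional drag on the velocities might look like this; the function and variable names are hypothetical, and a production integrator would interleave these updates with the half-step scheme:

```python
import numpy as np

def thermostat_step(v, zeta, t_inst, t_bath, tau_s, dt):
    """One explicit-Euler update of a Nose-Hoover-style friction
    coefficient and its drag on the velocities:

        dzeta/dt = (1/tau_s**2) * (T/T_bath - 1)
        dv/dt   += -zeta * v

    A minimal sketch of the coupling, not the full integrator.
    """
    zeta_new = zeta + dt / tau_s**2 * (t_inst / t_bath - 1.0)
    v_new = v * (1.0 - dt * zeta_new)
    return v_new, zeta_new
```

When the system is hotter than the bath, \zeta grows and damps the velocities; when it is colder, \zeta shrinks (eventually going negative) and accelerates them.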

First, we can run dynamics at zero temperature. In this method, the atoms in the system will always move along the direction of their force vector, which is the negative gradient of the energy. This method uses a fixed step size.

Improved convergence can be obtained by varying the step size. In particular, projecting the location of the nearest minimum along the negative gradient direction, i.e., performing a line search, can be very useful. This method is known as steepest descent.
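A minimal sketch of steepest descent with a variable step size; rather than projecting the exact line minimum, this version uses simple backtracking (halve the step until the energy drops), a common stand-in for a full line search. The `energy` and `grad` callbacks are placeholders.

```python
import numpy as np

def steepest_descent(x, grad, energy, step0=1.0, tol=1e-8, max_iter=1000):
    """Minimize `energy` by moving along the negative gradient,
    halving the trial step until the energy decreases."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break                        # gradient is (nearly) zero
        step, e0 = step0, energy(x)
        while energy(x - step * g) >= e0 and step > 1e-16:
            step *= 0.5                  # backtrack until we descend
        x = x - step * g
    return x
```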

Even better convergence properties can be obtained if the past history of the minimization is used. Successive search directions are constructed so that they form a set of mutually conjugate vectors with respect to the (positive-definite) Hessian of a general convex quadratic function. Convergence can be quadratic, rather than linear as for steepest descent, and is especially improved near the minimum. This method is known as conjugate gradient minimization.

Both steepest descent and conjugate gradient minimization require additional storage to enable the directed search procedure, but this additional storage is linear in the number of atoms.
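For the quadratic case the construction of mutually conjugate directions can be sketched with the classic linear conjugate-gradient algorithm, shown below minimizing f(x) = ½ xᵀAx − bᵀx; an energy minimizer applies the same idea to a general surface via line searches. This is a sketch, not tied to any particular implementation, and it illustrates the storage claim: only the current residual and direction are kept, so the overhead is linear in the problem size.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric
    positive-definite A; equivalently, solve A x = b.
    Successive directions are A-conjugate."""
    x = x0.copy()
    r = b - A @ x                       # residual = -gradient of f
    d = r.copy()
    rr = r @ r
    for _ in range(len(b)):
        Ad = A @ d
        alpha = rr / (d @ Ad)           # exact line minimization along d
        x = x + alpha * d
        r = r - alpha * Ad
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        d = r + (rr_new / rr) * d       # keeps directions A-conjugate
        rr = rr_new
    return x
```

For an n-dimensional quadratic, the method terminates in at most n steps in exact arithmetic.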


Kian-Tat Lim