Basics of Using LAMMPS



Distribution

When you unzip/untar the LAMMPS distribution you should have 6 directories: src (F90 and C source files and Makefiles), doc (this documentation), examples (sample problems), msi2lmp (a converter for MSI Discover files), tools (setup and pre-processing tools), and STUBS (an MPI stub library for serial builds).


Making LAMMPS

The src directory contains the F90 and C source files for LAMMPS as well as several sample Makefiles for different machines. To make LAMMPS for a specific machine, you simply type

make machine

from within the src directory, e.g. "make sgi" or "make t3e". This creates an executable such as lmp_sgi or lmp_t3e.

In the src directory, there is one top-level Makefile and several low-level machine-specific files named Makefile.xxx, where xxx = the machine name. If a low-level Makefile exists for your platform, you do not need to edit the top-level Makefile. However, you should check the system-specific section of the low-level Makefile to ensure the various paths are correct for your environment. If a low-level Makefile does not exist for your platform, you will need to add a suitable target to the top-level Makefile and create a new low-level Makefile using one of the existing ones as a template.

If you wish to make LAMMPS for a single-processor workstation that doesn't have an installed MPI library, you can specify the "serial" target, which links against a directory of MPI stubs - e.g. "make serial". You will need to build the stub library first (type "make" in the STUBS directory).
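For example, a serial build might look like this (assuming the STUBS directory sits at the top level of the distribution, alongside src):

cd STUBS
make
cd ../src
make serial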

Note that the two-level Makefile system allows you to make LAMMPS for multiple platforms. Each target creates its own object directory for separate storage of its *.o files.

There are a few compiler switches of interest that can be specified in the low-level Makefiles. If you add -DSYNC to the F90FLAGS setting, synchronization calls will be made before the timing routines in integrate.f. This may slow down the code slightly, but it makes the timings reported at the end of a run more accurate. Adding -DSENDRECV to F90FLAGS makes LAMMPS use MPI_Sendrecv calls for data exchange between processors instead of MPI_Irecv, MPI_Send, MPI_Wait. MPI_Sendrecv is often slower, but on some platforms it can be faster, so it is worth trying, particularly if your communication timings seem slow.
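For instance, the relevant line in a low-level Makefile might read as follows (the -O flag is just a placeholder for your platform's usual optimization options):

F90FLAGS = -O -DSYNC -DSENDRECV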

The CCFLAGS setting in the low-level Makefiles requires an FFT switch, for example -DFFT_SGI or -DFFT_T3E, which selects the appropriate machine-specific native 1-d FFT library on each platform. Currently, the supported switches (used in fft_3d.c) are FFT_SGI, FFT_DEC, FFT_INTEL, FFT_T3E, and FFT_FFTW. The last of these refers to FFTW, a publicly available, portable FFT library that you can install on any machine. If none of these options is suitable for your machine, please contact me, and we'll discuss how to add the capability to call your machine's native FFT library.

For Linux and T3E compilation, there is also a CCFLAGS setting for KLUDGE (see Makefile.linux and Makefile.t3e). It enables F90 to call C by appending the appropriate underscores to C function names.
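As an illustration, the C compiler settings for a Linux box might look something like this (a sketch only - take the exact macro spellings from the distributed Makefile.linux):

CCFLAGS = -O -DFFT_FFTW -DKLUDGE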


Running LAMMPS

LAMMPS is run by redirecting a text file (script) of input commands into it, for example:

lmp_sgi < in.lj

lmp_t3e < in.lj

The script file contains commands that specify the parameters for the simulation, as well as commands that read other necessary files, such as a data file describing the initial atom positions, molecular topology, and force-field parameters. The input_commands page describes all the commands that can be used. The data_format page describes the format of the data file.

LAMMPS can be run on any number of processors, including a single processor. In principle you should get identical answers on any number of processors and on any machine. In practice, numerical round-off can cause slight differences and eventual divergence of dynamical trajectories.
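For example, with an MPI implementation that provides mpirun, a parallel run on 8 processors might be launched as follows (the launcher name and its options vary between MPI installations):

mpirun -np 8 lmp_sgi < in.lj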

When LAMMPS runs, it estimates the array sizes it needs to allocate based on the problem you are simulating and the number of processors you are running on. If you run out of physical memory, you will get an F90 allocation error and the code will likely hang or crash; the only remedies are to run on more processors or to run a smaller problem. If you get an error message on the screen about "boosting" something, it means LAMMPS under-estimated the size needed for one (or more) data arrays. The "extra memory" command can be used in the input script to augment these sizes at run time. A few arrays are hard-wired to sizes that should be sufficient for most users; these are specified with parameter settings in the global.f file. If you get a message to "boost" one of these parameters, you will have to change it and re-compile LAMMPS.

Some LAMMPS errors are detected at setup; others, like neighbor list overflow, may not occur until the middle of a run. When LAMMPS encounters a fatal error it should always print a message to the screen and exit gracefully. The exception is F90 allocation errors, which may cause the code to hang (after printing an error message), since only one processor may incur the error. If the code ever crashes or hangs without printing an error message first, it's probably a bug, so let me know about it. Of course this applies to problems due to algorithmic or parallelism issues, not to physics mistakes, like specifying too big a timestep or putting 2 atoms on top of each other! Another exception is that different MPI implementations handle buffering of messages differently; if the code hangs without an error message, you may need to specify an MPI setting or two (usually via an environment variable) to enable buffering or to boost the sizes of messages that can be buffered.
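As a hypothetical example, on SGI systems running csh, the number of per-process message buffers can be raised with a setting like the one below (the variable name and a sensible value are platform-specific, so consult your MPI documentation):

setenv MPI_BUFS_PER_PROC 64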


Examples

There are several sample problems in the examples directory. All of them use an input file (in.*) of commands and a data file (data.*) of initial atomic coordinates and produce one or more output files. The *.xxx.P files are outputs on P processors on a particular machine which you can compare your answers to.

(1) lj

Simple atomic simulations of Lennard-Jones atoms of 1 or 3 species with various ensembles -- NVE, NVT, NPT.

(2) charge

A few-timestep simulation of a box of charged atoms for testing the Coulombic options -- cutoff, Ewald, and particle-particle particle-mesh (PPPM).

(3) class2

A simple test run of phenylalanine using DISCOVER cff95 class II force fields.

(4) min

An energy minimization of a transcription protein.

(5) lc

Small (250 atom) and large (6750 atom) simulations of liquid crystal molecules with various Coulombic options and periodicity settings. The large-system data file was created by using the "replicate" tool on the small-system data file.

(6) flow

2-d flow of Lennard-Jones atoms in a channel using various constraint options.

(7) polymer

Simulations of bead-spring polymer models with one chain type and two chain types (different size monomers). The two-chain system also has freely diffusing monomers. This illustrates use of the setup_chain program in the tools directory and also how to use soft potentials to untangle the initial configurations.


Other Tools

The msi2lmp directory has source code for a tool that converts MSI Discover files to LAMMPS input data files. This tool requires you to have the Discover force-field description files in order to convert those parameters to LAMMPS parameters. See the README file in the msi2lmp directory for additional information.

The tools directory has a C file called replicate.c which is useful for generating new LAMMPS data files from existing ones - e.g. scaling the atom coordinates, replicating the system to make a larger one, etc. See the comments at the top of replicate.c for instructions on how to use it.

The tools directory has an F77 program called setup_lj (compile and link with print.c) which can be used to generate a 3-d box of Lennard-Jones atoms (one or more atom types) like those used in examples/lj.

The tools directory also has an F77 program called setup_chain.f (compile and link with print.c) which can be used to generate random initial polymer configurations for bead-spring models like those used in examples/polymer. It uses an input polymer definition file (see examples/polymer for two sample def files) that specifies how many chains of what length to create, a random number seed, etc.
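For example, on a generic Unix system the two setup programs might be built like this (the compiler names, the absence of flags, and the setup_lj.f file name are assumptions; use whatever C and F77 compilers your platform provides):

cc -c print.c
f77 -o setup_lj setup_lj.f print.o
f77 -o setup_chain setup_chain.f print.o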