
Wang and Landau algorithm


The Wang and Landau algorithm is an extension of Metropolis Monte Carlo sampling.

Initially designed for physical systems, the Metropolis Monte Carlo method was based on Boltzmann's insight that at a given temperature molecules are distributed between high-energy (unfavourable) states and low-energy (favourable) states with a probability determined by both the energy difference and the density of states. Thus, if the number of less-favoured states is extremely large, those states will dominate whenever the temperature is high enough to overcome the energy difference. This is seen, for instance, when wax melts: at low temperature only the favourable, solid states are reachable, but as the temperature rises a huge number of less-favoured states becomes accessible and the solid turns into a liquid.

The Wang and Landau algorithm is designed to calculate the density of states of a computer-simulated system, such as an Ising model of spin glasses, or model atoms in a molecular force field.

Firstly, the minimum and maximum possible energies of the system are determined. For instance, in a ferromagnetic Ising model the most favourable (lowest-energy) state has all cells spinning in the same direction, and the least favourable (highest-energy) state is a chequerboard pattern. This energy range is then divided into a given number of discrete histogram bins.
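
As an illustration, the following Python sketch sets up the energy range and bins for a small two-dimensional Ising model with nearest-neighbour coupling J = 1 and periodic boundaries; the lattice size, the number of bins and the equal-width binning are illustrative choices, not part of the algorithm itself.

import numpy as np

L = 8                       # lattice is L x L; an arbitrary choice
J = 1.0                     # ferromagnetic nearest-neighbour coupling

def ising_energy(spins):
    """Energy of a +/-1 spin lattice with periodic boundaries."""
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * (right + down))

# Extreme configurations: all spins aligned (most favourable) and a
# chequerboard pattern (least favourable for a ferromagnet).
i, j = np.indices((L, L))
aligned = np.ones((L, L), dtype=int)
chequerboard = 1 - 2 * ((i + j) % 2)

E_min = ising_energy(aligned)        # -2 J L^2
E_max = ising_energy(chequerboard)   # +2 J L^2

n_bins = 100                         # number of histogram entries

def energy_bin(E):
    """Index of the histogram bin containing energy E."""
    return min(int((E - E_min) / (E_max - E_min) * n_bins), n_bins - 1)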

Initially the density of states is unknown, so every entry of g is set to one; in practice densities range over such a huge interval that the logarithm of g(E) is stored, with all log-densities starting at zero. A visits histogram is also maintained, with every bin starting at zero visits. The algorithm then runs a Monte Carlo-like simulation, filling the visits histogram. However, instead of using the Metropolis criterion for acceptance, the probability of acceptance is given by the density of states. Call the density-of-states histogram g. A proposed move is accepted with probability

p = min[ 1, g(E)/g(E') ]

where E is the energy of the current state and E' the energy of the proposed state. In practice a random number drawn uniformly on [0, 1) is compared with p, and the move is accepted if the random number is less than p.

After each move, whether it is accepted or rejected, the visits histogram bin corresponding to the system's current energy is incremented by 1, and the corresponding density-of-states entry g(E) is multiplied by a constant modification factor, initially usually e ≈ 2.72 (Euler's number). When the visits histogram, call it H, is flat, g(E) contains an accurate estimate of the density of states, within the limits set by the modification factor.
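
As a concrete illustration of a single step, the sketch below works in log space, so that multiplying g(E) by the modification factor f becomes adding log f; the names propose_move, energy, energy_bin, log_g and H are illustrative assumptions rather than part of the original description.

import math
import random

def wang_landau_step(state, E, log_g, H, log_f,
                     propose_move, energy, energy_bin):
    """One Wang-Landau move: accept with p = min(1, g(E)/g(E')), then
    update the visits histogram H and the log density of states log_g."""
    new_state = propose_move(state)
    E_new = energy(new_state)
    # log( g(E)/g(E') ) = log g(E) - log g(E')
    log_ratio = log_g[energy_bin(E)] - log_g[energy_bin(E_new)]
    if log_ratio >= 0.0 or random.random() < math.exp(log_ratio):
        state, E = new_state, E_new          # move accepted
    # Whether accepted or rejected, record a visit at the current energy
    # and multiply its density-of-states entry by the modification factor.
    b = energy_bin(E)
    H[b] += 1
    log_g[b] += log_f
    return state, E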

The visits histogram H(E) is then reset to zero and the modification factor is reduced, typically to the square root of the previous factor, to produce a finer estimate of g(E). This is repeated until the modification factor is sufficiently close to one.

The algorithm works because the move generator is, by construction, sampling states in proportion to the true density of states. So if acceptance weighted by the reciprocal of the estimated density of states produces an even (flat) visits histogram, the estimate must be accurate. In practice H(E) can never reach perfect flatness, so some flatness criterion must be used, typically a difference of 10% or less between the highest and lowest entries.
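
Putting the pieces together, the outer loop, with the histogram reset, the reduction of the modification factor and the flatness test described above, might be sketched as follows, reusing the wang_landau_step sketch shown earlier; the number of steps between flatness checks and the stopping value of log f are illustrative choices.

import math

def is_flat(H, tolerance=0.1):
    """Flatness test: highest and lowest visited bins differ by at most ~10%."""
    visited = [h for h in H if h > 0]
    if not visited:
        return False
    return (max(visited) - min(visited)) <= tolerance * max(visited)

def wang_landau(state, energy, propose_move, energy_bin, n_bins,
                log_f_final=1e-8, steps_per_check=10000):
    """Returns an estimate of log g(E), one entry per energy bin."""
    log_g = [0.0] * n_bins        # g(E) = 1 for every bin initially
    log_f = 1.0                   # modification factor f = e initially
    E = energy(state)
    while log_f > log_f_final:
        H = [0] * n_bins          # reset the visits histogram
        while not is_flat(H):
            for _ in range(steps_per_check):
                state, E = wang_landau_step(state, E, log_g, H, log_f,
                                            propose_move, energy, energy_bin)
        log_f /= 2.0              # f -> sqrt(f), i.e. halve log f
    return log_g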

The density of states is independent of temperature; once it is known, however, it can be used to determine the equilibrium behaviour of the system at any given temperature.
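
For example, canonical averages at any temperature T follow from Boltzmann weighting of the density of states, since Z(T) = sum over E of g(E) exp(-E/kT). The sketch below computes the mean energy from the estimated log g(E), taking k = 1 and a representative energy per bin as assumptions.

import math

def average_energy(log_g, bin_energies, T):
    """Canonical mean energy <E> at temperature T (k = 1) from log g(E).
    bin_energies holds a representative energy for each histogram bin."""
    # Work with log weights log g(E) - E/T, shifted by their maximum to
    # avoid overflow; the unknown additive constant in log g cancels.
    log_w = [lg - E / T for lg, E in zip(log_g, bin_energies)]
    m = max(log_w)
    weights = [math.exp(lw - m) for lw in log_w]
    Z = sum(weights)
    return sum(E * w for E, w in zip(bin_energies, weights)) / Z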


F. Wang and D. P. Landau, "Efficient, Multiple-Range Random Walk Algorithm to Calculate the Density of States", Phys. Rev. Lett. 86, 2050 (2001).