
Statistical Mechanics and Molecular Simulation

M. P. Allen
H. H. Wills Physics Laboratory
University of Bristol

© M. P. Allen 1999

    Preface

This course is an overview of computer simulation methods and their relationship with microscopic theories of liquids and experimental studies of liquids, covering both dynamical and structural properties.

    Synopsis

    Subjects to be covered include

Statistical ensembles and finite-size effects

    Study of phase transitions

    Structural distribution functions

    The Ornstein-Zernike equation and integral equations

    Thermodynamic perturbation theory

    Time correlation functions, linear response theory

Transport coefficients and hydrodynamics

Links with spectroscopy and scattering experiments will be mentioned. The intention is not to discuss in great detail theoretical methods applied to these problems, except to give essential background to the course. Rather, we shall investigate the essential statistical mechanical background, explore the use of the computer as a research tool, and approach these problems from the practical simulation viewpoint.

The course will mix together elements of computer programming with static and time-dependent statistical mechanics. Examples will be given using Fortran-90 programs, for numerical applications, and, to an extent, Maple for symbolic algebra. The Adobe Acrobat form of this course will remain available on this home page: it will grow and probably change in form as the course progresses. The document will include material that could not be covered in depth during the lectures.

These notes are intended only for students taking the course: please do not distribute them elsewhere. Printing the notes is not recommended; they are best viewed electronically.

Suggestions for improvements to these notes are always welcome.

    References

The bibliography will grow as the course progresses. However, it is worth mentioning some standard references at the outset. Most of this course will be directed at the properties of simple liquids, and the standard text on the theory of these systems is Hansen and McDonald [1986]. Extensive material, particularly relevant for molecular rather than atomic systems, may be found in Gray and Gubbins [1984]. There are many good books on statistical mechanics: Chandler [1987] is an excellent, and fairly short, all-round book; Friedman [1985] is good on many aspects. I am also fond of two older books, McQuarrie [1976] and Berne and Pecora [1976], and the classic review Barker and Henderson [1976]. Many more books on statistical mechanics can be recommended: Hill [1960], Huang [1963], Kubo [1965], Becker [1967], Ma [1985], Privman [1990]. For a more general introduction to the area of liquids, including experimental background, start with Rowlinson and Swinton [1982].

For background information on simulation, I cannot resist recommending Allen and Tildesley [1987], also Frenkel and Smit [1996], and the annotated reprint collection Ciccotti et al. [1987]. There are many Summer School proceedings, of which Binder and Ciccotti [1996] is probably the most comprehensive. Monte Carlo simulations are described in Binder and Stauffer [1987], Binder and Heermann [1988], Binder [1984, 1986, 1992]. The following references may also be useful: Kalos and Whitlock [1986], Allen and Tildesley [1993].

    Other books and articles will be mentioned during the course.


    1 Introduction

    1.1 Computer Simulation

    Here we discuss the why and how of computer simulation.

    1.1.1 Why Simulate?

We carry out computer simulations in the hope of understanding bulk, macroscopic properties in terms of the microscopic details of molecular structure and interactions. However, there is no point in trying to compete with conventional experiments; turning on a tap is cheaper and more realistic than simulating water. We must be aiming to learn something new, something that cannot be found out in other ways.

    1.1.2 Microscopic and macroscopic

Computer simulations act as a bridge between microscopic length and time scales and the macroscopic world of the laboratory.

    We provide a guess at the interactions between molecules.

    We obtain exact predictions of bulk properties.

The predictions are exact in the sense that they can be made as accurate as we like, subject to the limitations imposed by our computer budget. At the same time, the hidden detail behind bulk measurements can be revealed. Examples are the link between the diffusion coefficient and the velocity autocorrelation function (the former easy to measure experimentally, the latter much harder); and the connection between equations of state and structural correlation functions.
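As a reminder, the diffusion coefficient is obtained from the velocity autocorrelation function through the standard Green-Kubo relation (quoted here for reference, not derived at this point):

$$ D = \frac{1}{3}\int_0^\infty \mathrm{d}t\, \big\langle \mathbf{v}_i(0)\cdot\mathbf{v}_i(t)\big\rangle . $$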


Figure 1.1 Simulations as a bridge between microscopic and macroscopic. [Figure: an intermolecular pair potential $v(r)$ is fed into the simulation, which yields structure $g(r)$, dynamics $c(t)$, and $P$-$T$ phase diagrams.]

    1.1.3 Theory and experiment

Simulations act as a bridge in another sense: between theory and experiment.

    We can test a theory using idealized models.

    We can conduct thought experiments.

    We can perform ab initio computer simulations.

A well-focused simulation can help us understand what we measure in the laboratory, and test a postulated explanation at a fundamental level. Further clarification may result from carrying out thought experiments on the computer that are difficult or impossible in the laboratory (for example, working at extremes of temperature or pressure).

Figure 1.2 Simulation as a bridge between theory and experiment. [Figure: from the real complex fluid we make a model and perform experiments; for the model system we perform simulations and construct theories; the simulation results test the model against the experimental results, and test the theory against the theoretical predictions.]

Ultimately we may want to make direct comparisons with particular experimental measurements made on real materials, in which case a good model of molecular interactions is essential. The ultimate aim of so-called ab initio molecular dynamics is to reduce the amount of fitting and guesswork in this process to a minimum [for an introduction see Galli and Pasquarello, 1993]. On the other hand, we may be interested in phenomena of a rather generic nature, or we may simply want to discriminate between good and bad theories. When it comes to aims of this kind, it is not necessary to have a perfectly realistic molecular model; one that contains the essential physics may be quite suitable.

    1.1.4 Simulation techniques

The two main families of simulation technique are molecular dynamics (MD) and Monte Carlo (MC). Additionally, there is a whole range of hybrid techniques which combine features from both MC and MD.

    1.1.5 Molecular dynamics

Molecular dynamics consists of the brute-force solution of Newton's equations of motion. It is necessary to encode in the program the potential energy and force law of interaction between molecules; the equations of motion are solved step-by-step. Advantages of the technique:

It corresponds closely to what happens in real life.

We may calculate dynamical properties, as well as thermodynamic and structural functions.

The technique allows efficient relaxation of collective modes.

For a range of molecular models, packaged routines are available.

    1.1.6 Monte Carlo

Monte Carlo can be thought of as a prescription for sampling configurations from a statistical ensemble. The interaction potential energy is coded into the program, and a procedure adopted to go from one state of the system to the next, as will be described shortly. Advantages of the technique:

It is a robust technique (easy to program reliably).

We may calculate thermodynamic and structural properties, but not dynamics.

It is relatively simple to specify external conditions (constant temperature, pressure etc.).

Many tricks may be devised to improve the efficiency of the sampling.
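To make the idea of "a procedure to go from one state to the next" concrete before the detailed discussion later in the course, here is a minimal sketch of the standard Metropolis prescription, applied to a single particle in a one-dimensional harmonic well. This is an illustrative toy program written for these notes, not the course's own code: a trial displacement is accepted with probability min[1, exp(-dV/kT)].

program metropolis_well
  implicit none
  integer, parameter :: nstep = 100000
  real :: x, xtrial, dv, beta, dxmax, xi, x2sum
  integer :: istep, naccept

  beta = 1.0      ! inverse temperature 1/kBT (reduced units)
  dxmax = 0.5     ! maximum trial displacement
  x = 0.0
  x2sum = 0.0
  naccept = 0
  call random_seed ()

  do istep = 1, nstep
     call random_number ( xi )
     xtrial = x + dxmax * ( 2.0*xi - 1.0 )        ! trial move
     dv = 0.5*xtrial**2 - 0.5*x**2                ! change in potential energy
     call random_number ( xi )
     if ( xi < exp ( -beta*dv ) ) then            ! Metropolis acceptance
        x = xtrial
        naccept = naccept + 1
     end if
     x2sum = x2sum + x**2                         ! accumulate <x**2>
  end do

  print *, 'acceptance ratio ', real(naccept)/real(nstep)
  print *, '<x**2> estimate  ', x2sum/real(nstep), '  exact ', 1.0/beta
end program metropolis_well

For this potential the exact answer is <x**2> = kT, so the run gives an immediate check that the sampling is correct.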


    1.1.7 Common features

These simulation techniques have some important features in common, governing the typical timescales and length scales that can be investigated, and the methods we adopt to avoid unwanted surface effects.

    1.1.8 Simulation time scales

Simulation runs are typically short ($t \approx 10^3$-$10^6$ MD steps or MC sweeps, corresponding to perhaps a few nanoseconds of real time) compared with the time allowed in laboratory experiments. This means that we need to test whether or not a simulation has reached equilibrium before we can trust the averages calculated in it. Moreover, there is a clear need to subject the simulation averages to a statistical analysis, designed to make a realistic assessment of the error bars.

How long should we run? This depends on the system and the physical properties of interest. Suppose you are interested in a variable $a$, defined such that $\langle a\rangle = 0$. Define a time correlation function $\langle a(0)a(t)\rangle$ relating values calculated at different times $t$ apart.

For $t \to 0$, $\langle a(0)a(t)\rangle \to \langle a^2\rangle$.

For $t \to \infty$, $\langle a(0)a(t)\rangle \to \langle a(0)\rangle\langle a(t)\rangle \to 0$. This decay of correlation occurs over a characteristic time $\tau_a$.

The simulation run should be significantly longer than $\tau_a$.

Figure 1.3 Correlations in time. [Figure: the time correlation function $c(t) = \langle a(0)a(t)\rangle$ decays to zero over a characteristic time $\tau_a$.]

The correlation time is formally defined

$$ \tau_a \equiv \int_0^\infty \mathrm{d}t\, \langle a(0)a(t)\rangle \big/ \langle a^2\rangle . $$

Alternatively, if time correlations decay exponentially at long time, $\tau_a$ may be identified from the limiting form

$$ \langle a(0)a(t)\rangle \propto \exp\{-t/\tau_a\} . $$
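As a small illustration of how $\tau_a$ might be estimated in practice, the following sketch (illustrative code written for these notes, not part of the original course programs) generates a synthetic correlated series, computes the normalized autocorrelation function, and integrates it; in a real simulation the series would be the values of $a$ at successive timesteps or sweeps.

program correlation_time
  implicit none
  integer, parameter :: n = 200000, tmax = 200
  real, dimension(0:n-1) :: a
  real, dimension(0:tmax) :: c
  real :: xi, eta, phi, avg, var, tau
  integer :: i, t

  ! Synthetic data: an AR(1) process a(i) = phi*a(i-1) + Gaussian noise,
  ! whose exact correlation time is -1/ln(phi), about 9.5 steps here.
  phi = 0.9
  a(0) = 0.0
  call random_seed ()
  do i = 1, n-1
     call random_number ( xi )
     call random_number ( eta )
     ! Box-Muller transform for a Gaussian random number
     a(i) = phi * a(i-1) + sqrt ( -2.0*log(1.0-xi) ) * cos ( 6.2831853*eta )
  end do

  avg = sum ( a ) / real ( n )
  a = a - avg                               ! work with zero-mean fluctuations
  var = sum ( a**2 ) / real ( n )

  do t = 0, tmax                            ! normalized autocorrelation function
     c(t) = sum ( a(0:n-1-t) * a(t:n-1) ) / real ( n - t ) / var
  end do

  tau = 0.5 + sum ( c(1:tmax) )             ! trapezoidal estimate of the integral
  print *, 'estimated correlation time (steps) ', tau
end program correlation_time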


    1.1.9 Simulation length scales

The samples involved are typically quite small on the laboratory scale: most fall in the range $N \approx 10^3$-$10^6$ particles, thus imposing a restriction on the length scales of the phenomena that may be investigated: the nanometre to submicron range. Indeed, in many cases, there is an overriding need to do a system-size analysis of simulation results, to quantify these effects.

How large a simulation do we need? Once more this depends on the system and properties of interest. Define a spatial correlation function $\langle a(0)a(r)\rangle$ relating values computed at different points a distance $r$ apart.

For $r \to 0$, $\langle a(0)a(r)\rangle \to \langle a^2\rangle$.

For $r \to \infty$, $\langle a(0)a(r)\rangle \to \langle a(0)\rangle\langle a(r)\rangle \to 0$. This decay occurs over a characteristic distance $\xi_a$.

The simulation box should be significantly larger than $\xi_a$.

Figure 1.4 Correlations in space. [Figure: the spatial correlation function $c(r) = \langle a(0)a(r)\rangle$ decays to zero over a characteristic distance $\xi_a$.]

The correlation length is formally defined

$$ \xi_a \equiv \int_0^\infty \mathrm{d}r\, \langle a(0)a(r)\rangle \big/ \langle a^2\rangle . $$

Alternatively, if spatial correlations decay exponentially at large distance, $\xi_a$ may be identified from the limiting form

$$ \langle a(0)a(r)\rangle \propto \exp\{-r/\xi_a\} . $$

(Actually, in 3D, there are also some weakly $r$-dependent prefactors, but this is not crucial here.)

It is almost essential for simulation box sizes $L$ to be large compared with $\xi_a$ (for all properties of interest $a$), and for simulation run lengths $t_{\rm run}$ to be large compared with $\tau_a$. Only then can we guarantee that reliably-sampled statistical properties are obtained. Near critical points, special care must be taken, in that these inequalities will almost certainly not be satisfied, and indeed one may see the onset of non-exponential decay of the correlation functions. In these circumstances a quantitative investigation of finite size effects and correlation times, with some consideration of the appropriate scaling laws, must be undertaken.

    1.1.10 Periodic boundary conditions

Small sample size means that, unless surface effects are of particular interest, periodic boundary conditions need to be used. Consider 1000 atoms arranged in a $10\times10\times10$ cube. Nearly half the atoms are on the outer faces!

Surrounding the cube with replicas of itself takes care of this problem. Provided the potential range is not too long, we can adopt the minimum image convention: each atom interacts with the nearest atom or image in the periodic array. In the course of the simulation, if an atom leaves the basic simulation box, attention can be switched to the incoming image.
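As a small code illustration (a sketch written for these notes, assuming a cubic box of side box in arbitrary units), both the periodic wrap and the minimum image convention can be expressed with a single nearest-integer operation:

program minimum_image
  implicit none
  real, dimension(3) :: ri, rj, rij
  real :: box

  box = 10.0                       ! cubic box side
  ri = (/ 0.5, 9.8, 4.0 /)         ! two atoms near opposite faces in y
  rj = (/ 0.3, 0.2, 4.0 /)

  rij = ri - rj                    ! raw separation vector
  rij = rij - box * anint ( rij / box )    ! minimum image separation
  print *, 'minimum image separation ', rij, '  distance ', sqrt ( sum ( rij**2 ) )

  ! Wrapping a position back into the central box uses the same idea:
  ri = ri - box * anint ( ri / box )       ! central box spans (-box/2, box/2)
  print *, 'wrapped position ', ri
end program minimum_image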


Figure 1.5 Periodic boundary conditions and the minimum image convention.

    1.2 Atoms, molecules, spins

Let us denote a state of the system by $\Gamma$. This might refer to systems with continuous degrees of freedom (atoms and molecules, plus some spin models) or to systems with discrete degrees of freedom (like the Ising and Potts models).

    1.2.1 Atomic and molecular coordinates

For a classical atomic system let $\Gamma$ represent a set of continuous variables, i.e. the complete set of coordinates and momenta:

$$ \mathbf{r} \equiv \{\mathbf{r}_i\} \equiv \mathbf{r}_1, \mathbf{r}_2, \ldots \mathbf{r}_N \qquad \mathbf{p} \equiv \{\mathbf{p}_i\} \equiv \mathbf{p}_1, \mathbf{p}_2, \ldots \mathbf{p}_N \qquad \text{Energy } E = K(\mathbf{p}) + V(\mathbf{r}) $$

Frequently we shall abbreviate $\{\mathbf{r}_i\}$ as $r$, and $\{\mathbf{p}_i\}$ as $p$, pretending that they are 1-dimensional quantities to simplify the notation. We shall revert to the full form where necessary, but bear in mind that $p^2$ or $\sum p^2$ stands for a contraction of all the $3N$ components, $\sum_i \mathbf{p}_i^2$. Likewise, $rp$ means $\sum_i \mathbf{r}_i\cdot\mathbf{p}_i$.

    Figure 1.6 A system of atoms.


    1.2.2 Interaction potentials

General intention: microscopic $\to$ macroscopic, i.e. to proceed from a knowledge of the interactions between atoms and molecules to a prediction of the structure and properties of liquids. We stick with the classical approximation, and we begin by dividing the hamiltonian $\mathcal{H}$ of a system of atoms into kinetic and potential contributions. The kinetic energy

$$ K(\mathbf{p}_1, \mathbf{p}_2, \ldots \mathbf{p}_N) = \sum_{i=1}^{N} \mathbf{p}_i^2 / 2m_i $$

is easily handled and gives ideal-gas contributions to macroscopic properties. The potential energy $V(\mathbf{r}_1, \mathbf{r}_2, \ldots \mathbf{r}_N)$ is not so easy to handle.

Traditionally $V$ is split into 1-body, 2-body, 3-body, ... terms:

$$ V(\mathbf{r}_1, \mathbf{r}_2, \ldots \mathbf{r}_N) = \sum_i v_1(\mathbf{r}_i) + \sum_i\sum_{j>i} v_2(\mathbf{r}_i, \mathbf{r}_j) + \sum_i\sum_{j>i}\sum_{k>j} v_3(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_k) + \ldots $$

Usually we drop the $v_1$ term, neglect $v_3$ and higher terms (which in reality probably contribute $\sim 10\%$ of the total energy in liquids) and concentrate on $v_2$. For brevity henceforth we just call it $v(r)$. There is no time here to say much about the way these potentials are determined experimentally, or modelled theoretically [see e.g. Maitland et al., 1981, Gray and Gubbins, 1984, Sprik, 1993].

We shall use, where appropriate, the simplest models that represent the essential physics: the hard-sphere, square-well, and Lennard-Jones potentials. The latter has the functional form

$$ v^{\rm LJ}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right] . \tag{1-1} $$

The parameters $\sigma$ (the diameter) and $\epsilon$ (the well depth) are used to define reduced variables for temperature $T^* = k_BT/\epsilon$, density $\rho^* = \rho\sigma^3 = N\sigma^3/V$, volume $V^* = V/\sigma^3$, pressure $P^* = P\sigma^3/\epsilon$, etc.
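As a small code illustration (a sketch in reduced units, $\sigma = \epsilon = 1$, written for these notes rather than taken from the course programs), the Lennard-Jones pair potential of eqn (1-1) can be evaluated as follows:

program lj_potential
  implicit none
  real :: r
  integer :: i

  do i = 1, 8
     r = 0.9 + 0.1 * real ( i )              ! separations from 1.0 to 1.7
     print *, 'r = ', r, '  v(r) = ', v_lj ( r )
  end do

contains

  function v_lj ( r ) result ( v )
    real, intent(in) :: r
    real :: v, sr6
    ! Lennard-Jones potential in reduced units (sigma = epsilon = 1), eqn (1-1)
    sr6 = 1.0 / r**6
    v = 4.0 * ( sr6**2 - sr6 )
  end function v_lj

end program lj_potential

The output shows $v(\sigma) = 0$ and the minimum value $-\epsilon$ near $r = 2^{1/6}\sigma$.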


Figure 1.7 Lennard-Jones potential with attractive and repulsive contributions. [Figure: $v(r)/\epsilon$ plotted against $r/\sigma$, showing the full Lennard-Jones curve together with the repulsive $r^{-12}$ and attractive $-r^{-6}$ contributions.]

Figure 1.8 Hard sphere and square well potentials. [Figure: $v(r)/\epsilon$ plotted against $r/\sigma$ for the hard-sphere and square-well potentials.]

For molecular systems, we simply build the molecules out of site-site potentials of this form. If electrostatic charges are present, we add the appropriate Coulomb potentials

$$ v^{\rm Coulomb}(r) = \frac{Q_1 Q_2}{4\pi\varepsilon_0 r} . \tag{1-2} $$

We may also use rigid-body potentials which depend on centre-of-mass positions and orientations, associating angular momenta with the latter. Occasionally we shall use generalized (i.e. non-Cartesian) coordinates, distinguishing them by the symbol $q$.

    1.2.3 Lattice spin systems

For a lattice system, the state is specified by a set of discrete or continuous spin values:

$$ \mathbf{s} \equiv \{s_i\} \equiv s_1, s_2, \ldots s_N \qquad s_i = \pm 1 \ \text{(or other values)} \qquad E = -J\sum_{\langle ij\rangle} s_i s_j \qquad Q_{NVT} = \sum_{\mathbf{s}} e^{-\beta E(\mathbf{s})} $$

$J$ is a coupling constant; $\sum_{\langle ij\rangle}$ means a sum over nearest-neighbour spin pairs $i,j$. The standard Ising model of theoretical physics falls into this category, and there is a menagerie of other models, with varying degrees of relevance to the real world.
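To make the nearest-neighbour sum concrete, here is a small sketch (illustrative code written for these notes) that evaluates $E = -J\sum_{\langle ij\rangle} s_i s_j$ for a two-dimensional square lattice of $\pm 1$ spins with periodic boundaries; each bond is counted once by summing only over the right and down neighbours of every site.

program ising_energy
  implicit none
  integer, parameter :: l = 8              ! lattice is l x l with periodic boundaries
  integer, dimension(l,l) :: s
  real :: j_coupling, e, x
  integer :: ix, iy, ixp, iyp

  j_coupling = 1.0
  call random_seed ()
  do iy = 1, l                             ! random +1/-1 starting configuration
     do ix = 1, l
        call random_number ( x )
        s(ix,iy) = merge ( 1, -1, x < 0.5 )
     end do
  end do

  e = 0.0
  do iy = 1, l
     do ix = 1, l
        ixp = mod ( ix, l ) + 1            ! right neighbour (periodic)
        iyp = mod ( iy, l ) + 1            ! down neighbour (periodic)
        e = e - j_coupling * s(ix,iy) * ( s(ixp,iy) + s(ix,iyp) )
     end do
  end do

  print *, 'energy per spin ', e / real ( l*l )
end program ising_energy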


    Figure 1.9 The Ising model.


Our interest in spin models will be peripheral: they are useful to get an idea of the total number of states accessible to a given system, to describe (and simulate) behaviour around phase transitions and, in a few cases, to actually predict the properties of magnetic systems or solids and liquids with orientational degrees of freedom.

    1.3 Statistical Mechanics

Enables us to calculate bulk properties from microscopic details, exactly or approximately.

Enables us to design correct simulation methods when exact calculations are impossible.

Enables us to analyze the results and compare with theory or experiment.

    1.3.1 Microcanonical ensemble

The microcanonical ensemble is supposed to correspond to an isolated system, with specified $NVE$.

$$ S = k_B\ln\Omega_{NVE} \qquad \Omega_{NVE} = \int \mathrm{d}\Gamma\, \delta\big(E(\Gamma) - E\big) $$

$$ \rho_{NVE}(\Gamma) = \Omega_{NVE}^{-1}\, \delta\big(E(\Gamma) - E\big) \qquad \langle A\rangle_{NVE} = \int \mathrm{d}\Gamma\, \rho_{NVE}(\Gamma)\, A(\Gamma) $$

Figure 1.10 Macroscopic system modelled by the microcanonical ensemble. [Figure: a system separated from its surroundings by an adiabatic rigid wall.]

    1.3.2 Canonical ensemble

The canonical ensemble corresponds to a system able to exchange energy with a thermal bath, representing the effects of the surroundings at specified $NVT$.

$$ F = E - TS = -k_BT\ln Q_{NVT} \qquad Q_{NVT} = \int \mathrm{d}\Gamma\, e^{-\beta E(\Gamma)} = \int \mathrm{d}E\, \Omega_{NVE}\, e^{-\beta E} $$

$$ \rho_{NVT}(\Gamma) = Q_{NVT}^{-1}\, e^{-\beta E(\Gamma)} \qquad \langle A\rangle_{NVT} = \int \mathrm{d}\Gamma\, \rho_{NVT}(\Gamma)\, A(\Gamma) $$

In a real system this would happen at the surface; in simulations we avoid surface effects by allowing this to occur homogeneously. The state of the surroundings defines the temperature $T$ of the ensemble.

Figure 1.11 Macroscopic system modelled by the canonical ensemble. [Figure: a system separated from surroundings at temperature $T$ by an isothermal rigid wall.]

    1.3.3 Isothermal-isobaric ensemble

The isothermal-isobaric ensemble corresponds to a system whose volume and energy can fluctuate, in exchange with its surroundings at specified $NPT$.

$$ G = F + PV = -k_BT\ln Q_{NPT} \qquad Q_{NPT} = \int_0^\infty \mathrm{d}V \int \mathrm{d}\Gamma\, e^{-\beta(E(\Gamma)+PV)} = \int_0^\infty \mathrm{d}V\, Q_{NVT}\, e^{-\beta PV} $$

$$ \rho_{NPT}(\Gamma; V) = Q_{NPT}^{-1}\, e^{-\beta(E(\Gamma)+PV)} \qquad \langle A\rangle_{NPT} = \int_0^\infty \mathrm{d}V \int \mathrm{d}\Gamma\, \rho_{NPT}(\Gamma; V)\, A(\Gamma) $$

In a real system, some kind of piston might act at the surface, but this is avoided in a simulation by scaling all positions homogeneously. The state of the surroundings defines $T$ and $P$.

Figure 1.12 Macroscopic system modelled by the isothermal-isobaric ensemble. [Figure: a system separated from surroundings at temperature $T$ and pressure $P$ by an isothermal, moveable wall.]

    1.3.4 Grand canonical ensemble

The grand canonical ensemble corresponds to a system whose number of particles and energy can fluctuate, in exchange with its surroundings at specified $\mu VT$.

$$ F - \mu N = -k_BT\ln Q_{\mu VT} \qquad Q_{\mu VT} = \sum_N \int \mathrm{d}\Gamma\, e^{-\beta(E(\Gamma)-\mu N)} = \sum_N Q_{NVT}\, e^{\beta\mu N} $$

$$ \rho_{\mu VT}(\Gamma; N) = Q_{\mu VT}^{-1}\, e^{-\beta(E(\Gamma)-\mu N)} \qquad \langle A\rangle_{\mu VT} = \sum_N \int \mathrm{d}\Gamma\, \rho_{\mu VT}(\Gamma; N)\, A(\Gamma) $$

In a real system, the particle exchanges would act at the surface; in a simulation we add and remove particles at randomly selected positions. The surroundings define $\mu$ and $T$.

Figure 1.13 Macroscopic system modelled by the grand canonical ensemble. [Figure: a system separated from surroundings at temperature $T$ and chemical potential $\mu$ by an isothermal, permeable wall.]

    1.4 Canonical Ensemble Manipulations

Here we look at some typical formulae involving ensemble averages and conversion between ensembles. A general point, which will come out of the discussion, is the equivalence of ensembles: this means that, for most averages, differences between ensembles disappear in the thermodynamic limit.

    1.4.1 Temperature derivatives

Differentiate with respect to $\beta \equiv 1/k_BT$:

$$ E = \left(\frac{\partial (\beta F)}{\partial \beta}\right) = -\left(\frac{\partial \ln Q_{NVT}}{\partial \beta}\right) = -Q_{NVT}^{-1}\,\frac{\partial}{\partial \beta}\int \mathrm{d}\Gamma\, e^{-\beta E(\Gamma)} = Q_{NVT}^{-1}\int \mathrm{d}\Gamma\, E(\Gamma)\, e^{-\beta E(\Gamma)} = \langle E\rangle \tag{1-3} $$

Differentiate again:

$$ k_BT^2 C_V = \langle E^2\rangle - \langle E\rangle^2 = \left\langle \delta E^2\right\rangle \tag{1-4} $$
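In a canonical-ensemble simulation, eqn (1-4) gives a direct route to the heat capacity: accumulate the mean and mean-square energy and divide the variance by $k_BT^2$. A minimal sketch (illustrative only; the stored energies here are just placeholder numbers rather than the output of a real run, and reduced units with $k_B = 1$ are assumed):

program heat_capacity
  implicit none
  integer, parameter :: nsamp = 50000
  real, dimension(nsamp) :: e
  real :: temperature, e_avg, e_var, cv, xi
  integer :: i

  temperature = 1.5                 ! reduced temperature, kB = 1
  call random_seed ()
  do i = 1, nsamp                   ! placeholder energies; a real run would store E here
     call random_number ( xi )
     e(i) = -300.0 + 10.0 * ( xi - 0.5 )
  end do

  e_avg = sum ( e ) / real ( nsamp )
  e_var = sum ( ( e - e_avg )**2 ) / real ( nsamp )      ! <E**2> - <E>**2
  cv = e_var / temperature**2                            ! eqn (1-4) with kB = 1

  print *, '<E> = ', e_avg, '  Cv estimate = ', cv
end program heat_capacity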


    1.4.2 Volume derivatives

Differentiate with respect to volume:

$$ P = -\left(\frac{\partial F}{\partial V}\right) = k_BT\left(\frac{\partial \ln Q_{NVT}}{\partial V}\right) = k_BT\, Q_{NVT}^{-1}\,\frac{\partial}{\partial V}\int \mathrm{d}\Gamma\, e^{-\beta E(\Gamma)} = -Q_{NVT}^{-1}\int \mathrm{d}\Gamma\, \left(\frac{\partial E}{\partial V}\right) e^{-\beta E(\Gamma)} = -\left\langle \frac{\partial E}{\partial V}\right\rangle $$

$$ \Rightarrow \qquad PV = Nk_BT - \frac{1}{3}\left\langle \sum_i\sum_{j>i} w_{ij}\right\rangle \tag{1-5} $$

Here we assumed pairwise additivity, $V = \sum_i\sum_{j>i} v_{ij}$, and defined $w(r) \equiv r\,\dfrac{\mathrm{d}v(r)}{\mathrm{d}r}$, so that

$$ \left\langle \frac{\partial V}{\partial V}\right\rangle = \frac{1}{3V}\left\langle \sum_i\sum_{j>i} w_{ij}\right\rangle \tag{1-6} $$

(in eqn (1-6) the $V$ being differentiated is the potential energy, while the $V$ in the denominator is the volume), which is easily shown using scaled coordinates $\mathbf{r}_i = L\mathbf{s}_i$, $V = L^3$.
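As a worked example (standard algebra, included here for convenience), the pair virial function for the Lennard-Jones potential of eqn (1-1) is

$$ w^{\rm LJ}(r) = r\,\frac{\mathrm{d}v^{\rm LJ}}{\mathrm{d}r} = -24\epsilon\left[2\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right], $$

so the pair contribution to the pressure in eqn (1-5) is positive (repulsive) at short range and negative (attractive) beyond the potential minimum at $r = 2^{1/6}\sigma$.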


    1.4.3 Number derivatives

Differentiate with respect to number (looking at the excess, non-ideal, part):

$$ \mu^{\rm ex} = \left(\frac{\partial F^{\rm ex}}{\partial N}\right) = -k_BT\left(\frac{\partial \ln Q^{\rm ex}_{NVT}}{\partial N}\right) \approx -k_BT\ln\left(\frac{Q^{\rm ex}_{N+1}}{Q^{\rm ex}_{N}}\right) $$

Separate the terms in the potential energy which involve the extra particle, $V_{N+1} = V_N + v_{N+1}$. This gives the Widom [1963] test-particle formula

$$ \mu^{\rm ex} = -k_BT\ln\int \mathrm{d}\mathbf{s}_{N+1}\left\langle e^{-\beta v_{N+1}}\right\rangle = -k_BT\ln\left\langle\!\left\langle e^{-\beta v_{N+1}}\right\rangle\!\right\rangle \tag{1-7} $$

Once more we use scaled coordinates $\mathbf{s}_{N+1}$ for the extra particle, and the double average is over the ensemble and the inserted-particle coordinates.
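A minimal sketch of how eqn (1-7) is used in practice (illustrative code written for these notes, not the course's own program): for a stored configuration of Lennard-Jones particles in reduced units, trial particles are inserted at random positions and $\exp(-\beta v_{N+1})$ is accumulated. The configuration below is just random, so the printed number is not physically meaningful, but the structure of the calculation is the same when the outer loop runs over genuinely sampled configurations.

program widom_insertion
  implicit none
  integer, parameter :: n = 64, ntrial = 10000
  real, parameter :: box = 6.0, temperature = 2.0     ! reduced units, kB = 1
  real, dimension(3,n) :: r
  real, dimension(3) :: rtest, rij
  real :: beta, vtest, sr6, wsum, mu_ex
  integer :: i, itrial

  beta = 1.0 / temperature
  call random_seed ()
  call random_number ( r )
  r = box * r                                ! placeholder configuration; a real run would
                                             ! repeat all of this for each sampled configuration
  wsum = 0.0
  do itrial = 1, ntrial
     call random_number ( rtest )
     rtest = box * rtest                     ! random trial insertion position
     vtest = 0.0
     do i = 1, n                             ! energy of test particle with all others
        rij = rtest - r(:,i)
        rij = rij - box * anint ( rij / box )        ! minimum image
        sr6 = 1.0 / sum ( rij**2 )**3
        vtest = vtest + 4.0 * ( sr6**2 - sr6 )       ! Lennard-Jones, eqn (1-1)
     end do
     wsum = wsum + exp ( -beta * vtest )     ! accumulate <exp(-beta*v(N+1))>
  end do

  mu_ex = -temperature * log ( wsum / real ( ntrial ) )   ! eqn (1-7)
  print *, 'excess chemical potential estimate ', mu_ex
end program widom_insertion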


Figure 1.14 Widom's test-particle method for calculating the chemical potential.

    1.4.4 Virial-like formulae

The general form, where $q$ may be any coordinate or momentum, is

$$ \left\langle A\,\frac{\partial\mathcal{H}}{\partial q}\right\rangle = k_BT\left\langle \frac{\partial A}{\partial q}\right\rangle \tag{1-8} $$

It is easily derived by partial integration. Examples:

$$ \left\langle \frac{p^2}{m}\right\rangle = k_BT \qquad \text{Equipartition of energy} \tag{1-9} $$

$$ \left\langle q\,\frac{\partial\mathcal{H}}{\partial q}\right\rangle = k_BT \qquad \text{Virial theorem} \tag{1-10} $$

This last equation can also be recast into the form given earlier for the pressure, eqn (1-5). Note: often we use $\mathcal{H}$ for the hamiltonian interchangeably with $E$ for the energy function.

Problem 1.1 In this question we explore a hypervirial theorem. For the classical canonical ensemble

$$ \rho_{NVT} \propto \exp(-\mathcal{H}/k_BT) $$

show that

$$ k_BT\,\langle \partial A/\partial q\rangle = \langle A\,\partial\mathcal{H}/\partial q\rangle $$

and that

$$ k_BT\,\langle \partial A/\partial p\rangle = \langle A\,\partial\mathcal{H}/\partial p\rangle . $$

You will have to do an integration by parts, and you will need to assume that some functions vanish at the limits $q, p \to \pm\infty$. What happens if we set $A = q$ or $A = p$ or $A = \partial\mathcal{H}/\partial q$? (You can take $\mathcal{H} = p^2/2m + V(q)$.) Answer provided.

    1.4.5 Energy distributions

Canonical partition function $Q(T) \equiv Q_{NVT}$, $F = -k_BT\ln Q$:

$$ Q(T) = \int \mathrm{d}\Gamma\, e^{-E(\Gamma)/k_BT} = \int \mathrm{d}E\, \Omega(E)\, e^{-E/k_BT} . $$

Microcanonical phase-space volume $\Omega(E) \equiv \Omega_{NVE}$, $S = k_B\ln\Omega$, with $\rho_{NVE}(\Gamma) \propto \delta\big(E(\Gamma)-E\big)$ and

$$ \Omega(E) = \int \mathrm{d}\Gamma\, \delta\big(E(\Gamma) - E\big) . $$

Canonical energy distribution function $P(E) \equiv P_{NVT}(E)$:

$$ P(E) = \big\langle \delta\big(E(\Gamma)-E\big)\big\rangle_{NVT} = \frac{\Omega(E)\, e^{-E/k_BT}}{\int \mathrm{d}E'\, \Omega(E')\, e^{-E'/k_BT}} = \frac{\Omega(E)\, e^{-E/k_BT}}{Q(T)} \tag{1-11} $$

    1.5 Ensemble conversion

These last formulae provide a way of seeing how the equivalence of ensembles arises. The idea is that the distribution of energies in the canonical ensemble is very sharp, and may be related to the delta-function distribution of the microcanonical ensemble.

Figure 1.15 Energy distributions. [Figure: $P(E)$ as the product of the rapidly rising $\Omega(E)$ and the decaying Boltzmann factor $\exp(-\beta E)$.]

    1.5.1 Relating thermodynamic functions

$\Omega$ represents the area of the constant-energy hypersurface defined by $E$. This rises extremely rapidly with $E$: for an ideal gas, for example, $\Omega(E) \propto E^{dN/2-1}$; but this is cut off by the $e^{-E/k_BT}$ factor.

$P(E)$ is extremely sharply peaked around the average value, so

$$ Q(T) \approx \Omega(E^*)\, e^{-E^*/k_BT} $$

where $E^*$ is defined to be where the integrand has its peak. Take logarithms:

$$ F = -k_BT\ln Q \approx E^* - TS . $$

    1.5.2 Expansions and thermodynamics

It is informative to expand about the peak value of $E$. Take $\Omega(E) \propto E^N$ (for illustration), and expand:

$$ \Omega(E+\delta E) \propto (E+\delta E)^N = E^N\left(1+\frac{\delta E}{E}\right)^N = E^N\left\{ 1 + N\left(\frac{\delta E}{E}\right) + \frac{1}{2}N(N-1)\left(\frac{\delta E}{E}\right)^2 + \ldots \right\} $$

Successive terms are not getting smaller! This is useless.

Instead, expand the logarithm $\ln\Omega = N\ln E$:

$$ \ln\Omega(E+\delta E) = N\ln(E+\delta E) = N\ln E + N\ln\left(1+\frac{\delta E}{E}\right) = N\ln E + N\left(\frac{\delta E}{E}\right) - \frac{N}{2}\left(\frac{\delta E}{E}\right)^2 + \ldots $$

Now the series converges rapidly. So it should, since

$$ S = k_B\ln\Omega \qquad \text{and} \qquad \frac{\partial S}{\partial E} = 1/T . $$

    1.5.3 Peak and width of the distribution

Abbreviate $\beta \equiv 1/k_BT$ and locate the peak in $P(E)$:

$$ P(E) \propto \Omega(E)\exp\{-\beta E\} = e^{S(E)/k_B}\, e^{-\beta E} . $$

Let the maximum be at $\hat{E}$, defined by

$$ \left.\frac{\partial\big(S(E)/k_B - \beta E\big)}{\partial E}\right|_{E=\hat{E}} = 0 \quad\Rightarrow\quad \left.\frac{\partial\big(S(E)/k_B\big)}{\partial E}\right|_{E=\hat{E}} = \beta , $$

thus establishing the link $\partial S/\partial E = 1/T$ mentioned above. The width will be determined by the double derivative

$$ \frac{\partial^2 (S/k_B)}{\partial E^2} = \frac{\partial\beta}{\partial E} = -\frac{\beta^2 k_B}{C_V} = -\frac{1}{k_BT^2C_V} . $$

This is negative, involves $C_V = (\partial E/\partial T)$, and is of $O(N^{-1})$.

    1.5.4 Ensemble conversion: corrections

Now we have

$$ P(E) \propto e^{S(E)/k_B}\, e^{-\beta E} \approx P(\hat{E})\exp\left\{ -\frac{\delta E^2}{2k_BT^2C_V}\right\} \tag{1-12} $$

where $\delta E = E - \hat{E}$. The partition function is

$$ Q(T) = e^{-\beta F} = \int \mathrm{d}E\, \Omega(E)\, e^{-\beta E} \approx \Omega(\hat{E})\, e^{-\beta\hat{E}} \int \mathrm{d}E \exp\left\{ -\frac{\delta E^2}{2k_BT^2C_V}\right\} . $$

The Gaussian integral gives $\sqrt{2\pi k_BT^2C_V}$, so (dropping the hats)

$$ Q(T) = \Omega(E)\, e^{-\beta E}\sqrt{2\pi k_BT^2C_V} \qquad\quad -\beta F = S/k_B - \beta E + \ln\sqrt{2\pi k_BT^2C_V} . \tag{1-13} $$

The corrections are small. The last term is only $O(\ln N)$, so it can be neglected in comparison with the others, which are $O(N)$.

Problem 1.2 Here is an exercise in the technique outlined above, essentially a simple form of the saddle-point method [see e.g. Mathews and Walker, 1973]. Suppose we wish to evaluate an integral of the form

$$ \int_0^\infty \mathrm{d}x\, x^n e^{-x} = \Gamma(n+1) $$

where $n$ is a large number. This is actually the definition of the factorial or gamma function, $\Gamma(n+1) = n!$. The integrand has a (sharp) peak; first show that the maximum value of the integrand is at $x_{\rm max} = n$. You might like to plot the function, for various values of $n$, using Maple.

Write the integrand as $\exp\{n\ln x - x\} \equiv \exp\{f(x)\}$ and do an expansion of $f(x)$ about $x_{\rm max}$ up to the term involving $\mathrm{d}^2f/\mathrm{d}x^2$. Evaluate the integral by extending the lower limit to $-\infty$ (this involves only a small error provided $n$ is not too low). Look closely at your answer: you have derived a famous result. Answer provided.

1.5.5 Converting averages and fluctuations

From the approach just outlined we can easily obtain

$$ \langle A\rangle_T = \langle A\rangle_E + \frac{1}{2}\left(\frac{\partial^2 A}{\partial E^2}\right)\left\langle \delta E^2\right\rangle_T = \langle A\rangle_E + \frac{1}{2}\left(\frac{\partial^2 A}{\partial E^2}\right)k_BT^2C_V \tag{1-14} $$

The subscript indicates the ensemble: constant-$E$ or constant-$T$. If $A$ is extensive, $O(N)$, the correction term is $O(1)$, i.e. small compared with $\langle A\rangle$. This quantifies the equivalence of ensembles.

An example of the corresponding formulae for fluctuations is

$$ \left\langle \delta A^2\right\rangle_T = \left\langle \delta A^2\right\rangle_E + \frac{k_BT^2}{C_V}\left(\frac{\partial A}{\partial T}\right)^2 . \tag{1-15} $$

If $A$ is extensive, $O(N)$, the correction term is also $O(N)$, but this is the same order of magnitude as $\langle \delta A^2\rangle$ itself. Quite generally, we see that fluctuations are larger in the canonical ensemble than in the microcanonical ensemble. A particular case of this formula is

$$ \left\langle \delta E^2\right\rangle_T = \frac{k_BT^2}{C_V}\left(\frac{\partial E}{\partial T}\right)_V^2 = k_BT^2C_V \tag{1-16} $$

since the energy fluctuations in the microcanonical ensemble are zero. This is the same result seen before in eqn (1-4).

    2 Time-dependent statistical mechanics

    2.1 Time-dependent classical statistical mechanics

The aim of this section is to introduce the Liouville equation, which dictates how the classical statistical mechanical distribution function $\rho(\Gamma; t)$ or $\rho(q,p;t)$ evolves in time, and also to introduce the classical Schrödinger and Heisenberg pictures of statistical mechanics. Before doing that, however, we consider briefly what is meant by an ensemble.

    2.1.1 What is an ensemble?

An ensemble may be regarded as a collection of $N$ points in the multi-dimensional phase space $\Gamma$.

Each point represents a complete $N$-particle system.

The density of points is proportional to the distribution function $\rho(\Gamma)$.

Imagine the density of points to be very high indeed.

The ensemble average of a quantity $A$ is obtained by summing the values for each representative point, and normalizing by the number of such points:

$$ \langle A\rangle = \frac{1}{N}\sum_{n=1}^{N} A_n = \int \mathrm{d}\Gamma\, \rho(\Gamma)\, A(\Gamma) $$

Note the meaning of $N$ here: it is not actually the total number of possible states, which (as we saw) might be infinite. It is supposed to be the number of representative states. Some state points might occur more than once, some not at all, in our selection, as determined by $\rho$.

As time evolves each state point moves independently according to a prescribed equation of motion. For an equilibrium ensemble, the overall density in any region of phase space does not change with time; in a given region, the same number of representative systems are flowing in as are flowing out. For this reason, the equilibrium ensemble average $\langle A\rangle$ is constant.

Typically, an initial nonequilibrium distribution will evolve with time into such an equilibrium distribution (unless external forces are applied to maintain a nonequilibrium state).

    2.1.2 Ergodicity

The equivalence of time- and ensemble-averages relies on the idea that a single representative system would, eventually, trace a path through the entire phase space, returning to its starting point, having spent longer in regions of high density than low, and thereby having sampled $\rho$.

One such trajectory is just as good as (in fact identical to) any other, in this limit. However, for any realistic system, the time for this circuit to be completed would be extremely long (the Poincaré recurrence time).

In any real case we will be interested in the much weaker question of whether or not a given system visits a representative selection of phase-space points in a reasonable time. If this is true, the two forms of averaging should give the same results.

    2.1.3 Classical mechanics

For more details on classical mechanics see, e.g., Goldstein [1980]. The general prescription for obtaining the equations of motion of a classical system is as follows. We start with generalized coordinates $q$ and time derivatives $\dot{q}$. Recall, in our condensed notation,

$$ q \equiv \{q_i\} \equiv \{q_1, q_2, \ldots\} \qquad p \equiv \{p_i\} \equiv \{p_1, p_2, \ldots\} $$

Write down the kinetic energy $K$ and potential energy $V$ (often $K(\dot q)$ and $V(q)$, but not always).

Write down the lagrangian $L = L(q, \dot q) = K - V$.

Define conjugate momenta $p = \partial L/\partial\dot q$. Write down the hamiltonian

$$ \mathcal{H} = \mathcal{H}(q,p) = p\dot q - L(q,\dot q) $$

making it a function of $q$ and $p$ by eliminating $\dot q$ in favour of $p$ on the right.

Hamilton's equations are

$$ \dot q = \left(\frac{\partial\mathcal{H}}{\partial p}\right) \qquad\text{and}\qquad \dot p = -\left(\frac{\partial\mathcal{H}}{\partial q}\right) \tag{2-1} $$

From Hamilton's equations we can obtain Lagrange's equation

$$ \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial\dot q}\right) = \left(\frac{\partial L}{\partial q}\right) . \tag{2-2} $$
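As a worked example of this prescription (standard material, added here for illustration), take a one-dimensional particle of mass $m$ in a harmonic well:

$$ K = \tfrac{1}{2}m\dot x^2, \quad V = \tfrac{1}{2}m\omega^2 x^2, \quad L = K - V, \quad p = \frac{\partial L}{\partial\dot x} = m\dot x, \quad \mathcal{H} = p\dot x - L = \frac{p^2}{2m} + \tfrac{1}{2}m\omega^2 x^2, $$

so that Hamilton's equations (2-1) give $\dot x = p/m$ and $\dot p = -m\omega^2 x$, i.e. $\ddot x = -\omega^2 x$, as expected.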


Problem 2.1 Prove from Hamilton's equations that

$$ \frac{\mathrm{d}A}{\mathrm{d}t} \equiv \dot{A} = \{A, \mathcal{H}\} $$

where $A \equiv A(q,p)$, and the classical Poisson bracket is defined

$$ \{A, B\} = \left(\frac{\partial A}{\partial q}\frac{\partial B}{\partial p} - \frac{\partial A}{\partial p}\frac{\partial B}{\partial q}\right) . $$

Hence show that $\dot{\mathcal{H}} = 0$, and that, in fact, the time derivative of any function of $\mathcal{H}$ is zero. Answer provided.

2.1.4 Ensembles and flow: the Liouville equation

An ensemble of $N$ systems is represented by a set of $N$ points in $(q,p)$-space (phase space). The number density of such points anywhere, anytime, is given by $N\rho(q,p;t)$. A small volume element $\delta q\,\delta p$ contains $\delta N = N\rho(q,p;t)\,\delta q\,\delta p$ points. Consider the flow of points into and out of such a region.

Figure 2.1 Flow in phase space. [Figure: a volume element $\delta q_1\,\delta p$ in phase space, with points flowing in and out across its faces.]

$$ \text{inflow at } q_1 = N\rho(q_1)\,\dot q_1(q_1)\,\delta q'\,\delta p $$

$$ \text{outflow at } q_1 + \delta q_1 = N\rho(q_1+\delta q_1)\,\dot q_1(q_1+\delta q_1)\,\delta q'\,\delta p $$

$$ \rho(q_1+\delta q_1)\,\dot q_1(q_1+\delta q_1)\,\delta q'\,\delta p \approx \rho(q_1)\,\dot q_1(q_1)\,\delta q'\,\delta p + \delta q_1\,\frac{\partial\big(\rho\,\dot q_1\big)}{\partial q_1}\,\delta q'\,\delta p $$

$$ \text{net inflow in the } q_1 \text{ direction} = -N\,\frac{\partial\big(\rho\,\dot q_1\big)}{\partial q_1}\,\delta q\,\delta p $$

Summing over all coordinates and momenta,

$$ \frac{\partial\,\delta N}{\partial t} = -N\left(\frac{\partial}{\partial q}\big(\rho\dot q\big) + \frac{\partial}{\partial p}\big(\rho\dot p\big)\right)\delta q\,\delta p $$

$$ \frac{\partial\rho}{\partial t} = -\left(\frac{\partial}{\partial q}\big(\rho\dot q\big) + \frac{\partial}{\partial p}\big(\rho\dot p\big)\right) = -\rho\left(\frac{\partial\dot q}{\partial q} + \frac{\partial\dot p}{\partial p}\right) - \left(\dot q\,\frac{\partial\rho}{\partial q} + \dot p\,\frac{\partial\rho}{\partial p}\right) $$

Here $\delta q'\,\delta p$ denotes the measure of the face of the volume element perpendicular to the $q_1$ direction.

This is occasionally called the non-Liouville equation. The first bracket vanishes because of Hamilton's equations. So

$$ \frac{\partial\rho}{\partial t} = -\left(\dot q\,\frac{\partial\rho}{\partial q} + \dot p\,\frac{\partial\rho}{\partial p}\right) . \tag{2-3} $$

This is the Liouville equation. It may also be written

$$ \frac{\mathrm{d}\rho}{\mathrm{d}t} \equiv \frac{\partial\rho}{\partial t} + \dot q\,\frac{\partial\rho}{\partial q} + \dot p\,\frac{\partial\rho}{\partial p} = 0 \tag{2-4} $$

$$ \frac{\partial\rho}{\partial t} = -\{\rho, \mathcal{H}\} = \{\mathcal{H}, \rho\} \tag{2-5} $$

There is an analogy with the flow of an incompressible fluid. Contrast this equation for $\rho$ with the time evolution equation for a dynamical variable $A$ seen in Problem 2.1:

$$ \dot{A} = \{A, \mathcal{H}\} \tag{2-6} $$

    2.2 Time-dependent quantum statistical mechanics

This is mainly standard time-dependent quantum mechanics. We develop the statistical mechanical part to emphasize the formal similarities with the classical case.

    2.2.1 Isolated quantum system

Start with a time-dependent wavefunction $\Psi(q;t)$ normalized such that

$$ \int \mathrm{d}q\, \Psi^*(q;t)\,\Psi(q;t) = 1 . $$

The Hamiltonian operator is $\mathcal{H}(q)$. The expectation value of any operator $A$ is

$$ A(t) = \int \mathrm{d}q\, \Psi^*(q;t)\, A(q)\, \Psi(q;t) . $$

We may expand $\Psi$ in a static orthonormal basis $\{\phi_n(q)\}$, defined so that

$$ \int \mathrm{d}q\, \phi_m^*(q)\,\phi_n(q) = \delta_{mn}, $$

using time-dependent expansion coefficients $c_n(t)$:

$$ \Psi(q;t) = \sum_n c_n(t)\,\phi_n(q) . $$

To guarantee normalization we must have $\sum_m |c_m|^2 = 1$. This gives a matrix representation of operators

$$ A_{mn} = \int \mathrm{d}q\, \phi_m^*(q)\, A(q)\, \phi_n(q) $$

and an expression for the expectation value

$$ A(t) = \sum_m\sum_n c_n(t)\, c_m^*(t)\, A_{mn} . $$

    2.2.2 Ensembles and the density matrix

For an ensemble of systems define

$$ \rho_{nm}(t) = \langle c_n(t)\, c_m^*(t)\rangle \tag{2-7} $$

where $\langle\ldots\rangle$ is an ensemble average. Now we can write

$$ \langle A\rangle = \sum_m\sum_n \rho_{nm} A_{mn} = \mathrm{Tr}(\rho A) = \mathrm{Tr}(A\rho) $$

where Tr is the usual matrix trace operation, $\mathrm{Tr}\,X = \sum_m X_{mm}$, and of course $\mathrm{Tr}\,\rho = 1$.

We can define the coordinate representation of $\rho$:

$$ \rho(q, q') = \sum_m\sum_n \phi_n(q)\, \rho_{nm}\, \phi_m^*(q') . \tag{2-8} $$

Problem 2.2 By writing this in terms of $\Psi(q;t)$, show that this coordinate representation is, in fact, independent of the original choice of basis set $\phi_n(q)$. Answer provided.

This allows us to write

$$ A(q, q') = \sum_m\sum_n \phi_m(q)\, A_{mn}\, \phi_n^*(q') $$

so

$$ \langle A\rangle = \int \mathrm{d}q \int \mathrm{d}q'\, A(q, q')\,\rho(q', q) = \mathrm{Tr}(\rho A) = \mathrm{Tr}(A\rho) . $$

    2.2.3 Quantum Liouville equation

From the Schrödinger equation $i\hbar\,\partial\Psi/\partial t = \mathcal{H}\Psi$ we obtain, in the chosen basis set,

$$ i\hbar\,\frac{\partial c_n}{\partial t} = \sum_l \mathcal{H}_{nl}\, c_l . $$

Hence (recalling that $\mathcal{H}$ is Hermitian, $\mathcal{H}_{ml} = \mathcal{H}_{lm}^*$)

$$ i\hbar\,\frac{\partial (c_n c_m^*)}{\partial t} = \sum_l \left(\mathcal{H}_{nl}\, c_l c_m^* - c_n c_l^*\,\mathcal{H}_{lm}\right) . $$

Now ensemble-average:

$$ i\hbar\,\frac{\partial \rho_{nm}}{\partial t} = \sum_l \left(\mathcal{H}_{nl}\,\rho_{lm} - \rho_{nl}\,\mathcal{H}_{lm}\right) . $$

In operator form this becomes

$$ \frac{\partial\rho}{\partial t} = \frac{1}{i\hbar}\big(\mathcal{H}\rho - \rho\mathcal{H}\big) = \frac{1}{i\hbar}\big[\mathcal{H}, \rho\big] $$

where $[A,B] = AB - BA$ is the commutator. Define the quantum Poisson bracket $\{A,B\} \equiv \frac{1}{i\hbar}[A,B]$ so as to write

$$ \frac{\partial\rho}{\partial t} = \{\mathcal{H}, \rho\} = -\{\rho, \mathcal{H}\} . \tag{2-9} $$

Contrast this with the Heisenberg equation of motion for an operator

$$ \dot{A} = \frac{1}{i\hbar}\big[A, \mathcal{H}\big] = \{A, \mathcal{H}\} . \tag{2-10} $$

    2.2.4 The Liouville operator

In both the quantum and classical cases it is useful to define a Liouville operator (or superoperator) $iLA \equiv \{A, \mathcal{H}\}$, so the formal solutions of the time evolution equations are

$$ \partial\rho/\partial t = -iL\rho \quad\Longrightarrow\quad \rho(t) = e^{-iLt}\rho(0) \equiv U^\dagger(t)\,\rho(0) \tag{2-11} $$

and

$$ \dot{A} = iLA \quad\Longrightarrow\quad A(t) = e^{iLt}A(0) \equiv U(t)\,A(0) \tag{2-12} $$

Problem 2.3 From the definition of the quantum Liouville operator, $iLA = (i\hbar)^{-1}[A, \mathcal{H}]$, show that we can write

$$ e^{iLt}A = e^{i\mathcal{H}t/\hbar}\, A\, e^{-i\mathcal{H}t/\hbar} . $$

(Start by time-differentiating this expression.) Answer provided.

    2.2.5 Some manipulations

Permutation under the trace: $\mathrm{Tr}(ABC) = \mathrm{Tr}(BCA) = \mathrm{Tr}(CAB)$. Most easily proved by adopting the matrix representation.

Hermitian nature of $L$: $\mathrm{Tr}\big(A^*\, LB\big) = \mathrm{Tr}\big(B\,(LA)^*\big)$. This follows from the definition of $L$, the fact that $\mathcal{H}$ is a Hermitian operator, and use of cyclic permutation.

Switching the time displacement operator:

$$ \mathrm{Tr}\big(A\, e^{iLt}B\big) = \mathrm{Tr}\big((e^{-iLt}A)\, B\big) . $$

Equivalence of Heisenberg and Schrödinger pictures:

$$ \langle A(t)\rangle = \mathrm{Tr}\big(\rho\, A(t)\big) = \mathrm{Tr}\big(\rho\, e^{iLt}A\big) = \mathrm{Tr}\big(\rho(t)\, A\big) = \mathrm{Tr}\big((e^{-iLt}\rho)\, A\big) $$

In the classical Schrödinger picture we mean

$$ \langle A(t)\rangle = \int \mathrm{d}q \int \mathrm{d}p\, \rho(q,p;t)\, A(q,p) \tag{2-13} $$

In other words:

sit at a point in phase space;

at time $t$ calculate $\rho(q,p;t)$;

average using this density.

This is analogous to the Eulerian formulation of fluid mechanics; mass points with different probability flow through the box $\mathrm{d}q\,\mathrm{d}p$.

Figure 2.2 The Schrödinger picture. [Figure: a fixed volume element $\mathrm{d}q\,\mathrm{d}p$ in the $(q,p)$ phase plane, through which the density $\rho$ flows.]

In the classical Heisenberg picture we mean

$$ \langle A(t)\rangle = \int \mathrm{d}q_0 \int \mathrm{d}p_0\, \rho(q_0, p_0; 0)\, A\big(q(t), p(t)\big) . \tag{2-14} $$

In other words:

follow the representative points as they move;

at $t = 0$ calculate $\rho(q,p;0)$;

average the properties at time $t$ using this initial density.

This is analogous to the Lagrangian formulation of fluid mechanics; the points we follow keep their initial probability but the phase-space position (and hence the properties we are averaging) change with time.

Figure 2.3 The Heisenberg picture. [Figure: a volume element $\mathrm{d}q\,\mathrm{d}p(0)$ that is followed as it moves and distorts into $\mathrm{d}q\,\mathrm{d}p(t)$ in the $(q,p)$ phase plane.]

Problem 2.4 We have seen the quantum ↔ classical correspondences

$$ \mathrm{Tr}\,(\ldots) \quad\leftrightarrow\quad \int\!\!\int \mathrm{d}q\,\mathrm{d}p\,(\ldots) \qquad\qquad (i\hbar)^{-1}[A,B] \quad\leftrightarrow\quad \{A,B\} $$

Show that $\mathrm{Tr}\big(A^*\,LB\big) = \mathrm{Tr}\big(B\,(LA)^*\big)$ in the classical case, just as we have seen in the quantum case. (Hint: you'll need to integrate by parts, and you can assume that functions vanish whenever $q\to\pm\infty$ or $p\to\pm\infty$.)

Use this result to show that $\langle A\dot{B}\rangle = -\langle\dot{A}B\rangle$, no matter whether we are in the quantum or classical case.

Also verify that $\langle A(t)\rangle = \mathrm{Tr}\big(\rho\,A(t)\big) = \mathrm{Tr}\big(\rho(t)\,A\big)$. Answer provided.

    2.3 Equilibrium ensembles

The Liouville equation applies to any ensemble, equilibrium or not. Here we discuss the special properties of equilibrium ensembles.

    2.3.1 The equilibrium condition

Equilibrium means that $\rho$ should be stationary, i.e. that

$$ \partial\rho/\partial t = 0 . \tag{2-15} $$

In other words, if we look at a phase-space volume element, the rate of incoming state points should equal the rate of outflow. This requires that $\rho$ be a function of the constants of the motion, and especially $\rho = \rho(\mathcal{H})$, since then $\{\rho, \mathcal{H}\} = 0$. Equilibrium also implies $\mathrm{d}\langle A\rangle/\mathrm{d}t = 0$ for any $A$.

Problem 2.5 Prove $\mathrm{d}\langle A\rangle/\mathrm{d}t = 0$ for any $A(q,p)$ from the equilibrium condition $\partial\rho/\partial t = 0$. Answer provided.

Problem 2.6 In this question we explore the virial theorem. The equilibrium condition is that $\mathrm{d}\langle A\rangle/\mathrm{d}t = 0$. Use this to show that, at equilibrium, in classical systems,

$$ \langle q\,\partial\mathcal{H}/\partial q\rangle = \langle p\,\partial\mathcal{H}/\partial p\rangle = k_BT $$

where the second equality follows from equipartition. Answer provided.

Problem 2.7 For the classical canonical ensemble, $\rho \propto \exp(-\beta\mathcal{H})$, prove that $\{\rho, A\} = \beta\rho\dot{A}$, where $\{\ldots,\ldots\}$ is the classical Poisson bracket. Answer provided.

    3 Molecular dynamics

In this section we concentrate on the methods actually used to solve Newton's or Hamilton's equations on the computer. This is intrinsically a simple task: many methods exist to perform step-by-step numerical integration of systems of coupled ordinary differential equations. Characteristics of these equations are:

they are stiff, i.e. there may be short and long timescales, and the algorithm must cope with both;

calculating the forces is expensive, and should be performed as infrequently as possible.

Also we must bear in mind that the advancement of the coordinates fulfils two functions:

accurate calculation of dynamical properties, especially over times shorter than the dynamical variable correlation times $\tau_a$;

accurately staying on the constant-energy hypersurface, for much longer times, over the entire length of the run.

Exact time reversibility is highly desirable (since the original equations are exactly reversible). To ensure rapid sampling of phase space, we make the timestep as large as possible consistent with these requirements. For these reasons, simulation algorithms have tended to be of low order (i.e. they do not involve storing high derivatives of positions, velocities etc.): this allows the time step to be increased as much as possible without jeopardizing energy conservation.

This is unlike methods used in computing the trajectories of astronomical bodies, where very accurate high-order methods are preferred. Notice that it is unrealistic to expect the numerical method to accurately follow the true trajectory for very long times (e.g. over the whole simulation run). The ergodic and mixing properties of classical trajectories, i.e. that nearby trajectories diverge from each other exponentially quickly, make this impossible to achieve.

All these observations tend to favour the Verlet algorithm in one form or another, and we look closely at this in the following sections.

    3.1 The Verlet algorithm

    Recall that we are solving

$$ \dot{\mathbf{r}} = \mathbf{v} \quad\text{and}\quad \dot{\mathbf{v}} = \mathbf{a} \quad\text{or}\quad \dot{\mathbf{p}} = \mathbf{f} \tag{3-1} $$

where $\mathbf{v} = \mathbf{p}/m$ is the velocity and $\mathbf{a} = \mathbf{f}/m$ the acceleration. Alternatively we can combine the equations to give

$$ \ddot{\mathbf{r}} = \mathbf{a} \tag{3-2} $$

These equations are more specific than the general system of second-order ordinary differential equations, in the sense that the accelerations $\mathbf{a}$ depend on $\mathbf{r}$ but not $\mathbf{v}$. This allows a simple approach originally employed by Verlet [1967, 1968] in his investigations of the properties of the Lennard-Jones fluid.

    3.1.1 The original form

It is easy to derive the Verlet algorithm from forward and backward Taylor expansions of the function $\mathbf{r}(t)$, together with Newton's equation $\ddot{\mathbf{r}} = \mathbf{a}$:

$$ \mathbf{r}(t+\delta t) = \mathbf{r}(t) + \delta t\,\mathbf{v}(t) + \tfrac{1}{2}\delta t^2\,\mathbf{a}(t) + \tfrac{1}{6}\delta t^3\,\mathbf{b}(t) + O(\delta t^4) $$
$$ \mathbf{r}(t-\delta t) = \mathbf{r}(t) - \delta t\,\mathbf{v}(t) + \tfrac{1}{2}\delta t^2\,\mathbf{a}(t) - \tfrac{1}{6}\delta t^3\,\mathbf{b}(t) + O(\delta t^4) $$

where $\mathbf{b} = \dot{\mathbf{a}}$. From these we get, by adding and subtracting,

$$ \mathbf{r}(t+\delta t) = 2\mathbf{r}(t) - \mathbf{r}(t-\delta t) + \delta t^2\,\mathbf{a}(t) + O(\delta t^4) \tag{3-3} $$

$$ \mathbf{v}(t) = \big[\mathbf{r}(t+\delta t) - \mathbf{r}(t-\delta t)\big]/2\delta t + O(\delta t^3) \tag{3-4} $$

It is possible to advance the positions using the first of these equations alone; the velocities are calculated from the second equation, but they are always one step behind: $\mathbf{v}(t)$ cannot be evaluated until $\mathbf{r}(t+\delta t)$ is known.

Here is some Fortran-like pseudo-code to illustrate the method; r stands for the current positions, r_old for the positions at the previous timestep, and r_new stores, temporarily, the new positions. The forces and potential energy are calculated from the current positions in a routine forces. The kinetic energy is calculated from the current velocities in a routine kinetic.

call forces ( r, f, poteng )              ! forces and potential energy at current positions
a = f/m
r_new = 2*r - r_old + (delta_t**2) * a    ! advance positions, eqn (3-3)
v = (r_new - r_old) / (2*delta_t)         ! velocities one step behind, eqn (3-4)
call kinetic ( v, kineng )
eng = kineng + poteng                     ! total energy, for monitoring conservation
r_old = r
r = r_new

    3.1.2 The leapfrog form

Identical trajectories are generated by the so-called leapfrog algorithm [see e.g. Hockney and Eastwood, 1988]. Here the velocities are stored at in-between times.

$$\mathbf{r}(t+\delta t) = \mathbf{r}(t) + \delta t\,\mathbf{v}(t+\tfrac{1}{2}\delta t) \tag{3-5}$$
$$\mathbf{v}(t+\tfrac{1}{2}\delta t) = \mathbf{v}(t-\tfrac{1}{2}\delta t) + \delta t\,\mathbf{a}(t) \tag{3-6}$$

Starting from $\mathbf{r}(t)$ and $\mathbf{v}(t-\tfrac{1}{2}\delta t)$, the current forces $\mathbf{f}(t)$ are calculated, whence $\mathbf{a}(t)$, and the second equation is implemented to leap the velocities over the positions; then the first equation is used to put the positions in front once more. The velocities appear in this algorithm, but they are not contemporaneous with the positions. If $\mathbf{v}(t)$ is wanted it can be obtained from

$$\mathbf{v}(t) = \tfrac{1}{2}\bigl[\mathbf{v}(t+\tfrac{1}{2}\delta t) + \mathbf{v}(t-\tfrac{1}{2}\delta t)\bigr] = \mathbf{v}(t-\tfrac{1}{2}\delta t) + \tfrac{1}{2}\delta t\,\mathbf{a}(t)\;.$$


In the following pseudo-code we use this to calculate the energy.

call forces ( r, f, poteng )
a = f/m
v = v + (delta_t/2) * a
call kinetic ( v, kineng )
eng = poteng + kineng
v = v + (delta_t/2) * a
r = r + delta_t * v


    3.1.3 The velocity form

Identical trajectories are also generated by the so-called velocity Verlet algorithm [proposed by Swope et al., 1982]. Here a comparison of forward and reverse Taylor expansions leads to

$$\mathbf{r}(t+\delta t) = \mathbf{r}(t) + \delta t\,\mathbf{v}(t) + \tfrac{1}{2}\delta t^2\,\mathbf{a}(t) \tag{3-7}$$
$$\mathbf{v}(t+\delta t) = \mathbf{v}(t) + \tfrac{1}{2}\delta t\,\bigl[\mathbf{a}(t) + \mathbf{a}(t+\delta t)\bigr] \tag{3-8}$$

The first equation is implemented to advance the positions, and the new forces $\mathbf{f}(t+\delta t)$ calculated, whence $\mathbf{a}(t+\delta t)$; these are then used in the second equation to update the velocities. This is probably the most convenient form of Verlet's method, since the velocities appear, and they are evaluated at the same time points as the positions. Moreover, as we shall see shortly, there is an interesting theoretical derivation of this version.


    Here is some pseudo-code to illustrate the method.

r = r + delta_t * v + ((delta_t**2)/2) * a
v = v + (delta_t/2) * a
call forces ( r, f, poteng )
a = f/m
v = v + (delta_t/2) * a
call kinetic ( v, kineng )

Problem 3.1 Prove that identical trajectories $\mathbf{r}(t)$ will be generated by the original, leapfrog, and velocity forms of the Verlet algorithm, given consistent initial conditions. Answer provided.


    3.1.4 Comments

    Important features of the Verlet algorithm are

It is exactly time reversible (as can be seen by substituting $\delta t \to -\delta t$ and rearranging). This is important, as the original equations of motion are also time reversible.

It is low order in time. This means that it behaves in a robust fashion as the timestep $\delta t$ is increased; in fact it can be shown that the root-mean-squared fluctuations in the total energy are proportional to $\delta t^2$.

    It is easy to program.


    Problem 3.2 Consider the simple 1D harmonic oscillator

potential energy $V = \tfrac{1}{2}m\omega^2 x^2$

kinetic energy $K = \tfrac{1}{2}mv^2$;

hence acceleration $a = -\omega^2 x$. Obtain an algebraic expression for the change in total energy over a timestep $\delta t$, in terms of $x(t)$ and $v(t)$, using the velocity Verlet method. What is the lowest-order term? Compare with the very simple Euler method, which is

$$x(t+\delta t) = x(t) + \delta t\,v(t)$$
$$v(t+\delta t) = v(t) + \delta t\,a(t)$$

Can you foresee a systematic problem with the Euler method? Answer provided.
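It may help to see the two methods side by side numerically. The following is a minimal Fortran sketch (not part of the original problem; all names are illustrative) that integrates the oscillator with $m = \omega = 1$ using both the Euler and velocity Verlet methods and prints the final total energy; the exact value is 0.5 for these initial conditions.

program sho_compare
  implicit none
  integer, parameter :: nstep = 1000
  real,    parameter :: dt = 0.05
  real    :: xe, ve, xv, vv, a
  integer :: step
  xe = 1.0; ve = 0.0                  ! Euler trajectory
  xv = 1.0; vv = 0.0                  ! velocity Verlet trajectory
  do step = 1, nstep
     a  = -xe                         ! Euler: advance x and v using old values only
     xe = xe + dt*ve
     ve = ve + dt*a
     a  = -xv                         ! velocity Verlet: half-kick, drift, half-kick
     xv = xv + dt*vv + 0.5*dt**2*a
     vv = vv + 0.5*dt*a
     a  = -xv
     vv = vv + 0.5*dt*a
  end do
  print *, 'Euler energy           ', 0.5*(xe**2 + ve**2)
  print *, 'velocity Verlet energy ', 0.5*(xv**2 + vv**2)
end program sho_compare

For this model the Euler energy grows by a factor $(1+\delta t^2)$ every step, a systematic drift, whereas the velocity Verlet energy merely oscillates about the exact value.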


Problem 3.3 A Maple exercise. Download the Maple worksheet for this problem from the course Web page. This worksheet solves, approximately, the dynamics of the simple harmonic oscillator, for given initial conditions, using the simple Euler method. It also shows the exact solution for comparison. Add a procedure to use the velocity Verlet algorithm, and do a similar comparison. Experiment with the time step; what do you see? Answer provided.


    3.2 Predictor-corrector methods

We mention predictor-corrector methods briefly, mainly for historical reasons: further details are available elsewhere [see e.g. Allen and Tildesley, 1987]. We store, at any time, $\mathbf{r}(t)$, $\mathbf{v}(t)$, $\mathbf{a}(t)$ and $\mathbf{b}(t) = \dot{\mathbf{a}}(t)$; more derivatives appear in higher-order methods.

    3.2.1 Predictor stage

    Taylor expansion is used to predict new positions:

$$\begin{aligned}
\mathbf{r}^p(t+\delta t) &= \mathbf{r}(t) + \delta t\,\mathbf{v}(t) + \tfrac{1}{2}\delta t^2\,\mathbf{a}(t) + \tfrac{1}{6}\delta t^3\,\mathbf{b}(t)\\
\mathbf{v}^p(t+\delta t) &= \mathbf{v}(t) + \delta t\,\mathbf{a}(t) + \tfrac{1}{2}\delta t^2\,\mathbf{b}(t)\\
\mathbf{a}^p(t+\delta t) &= \mathbf{a}(t) + \delta t\,\mathbf{b}(t)\\
\mathbf{b}^p(t+\delta t) &= \mathbf{b}(t)
\end{aligned} \tag{3-9}$$

    These need to be corrected.


    3.2.2 Corrector stage

The new (correct) forces $\mathbf{a}^c(t+\delta t)$ are calculated from the positions $\mathbf{r}^p(t+\delta t)$, using the force routine, and a correction factor $\Delta\mathbf{a} = \mathbf{a}^c(t+\delta t) - \mathbf{a}^p(t+\delta t)$ obtained. Then the corrected values are obtained as follows:

$$\begin{aligned}
\mathbf{r}^c(t+\delta t) &= \mathbf{r}^p(t+\delta t) + c_0\,\Delta\mathbf{a}\\
\mathbf{v}^c(t+\delta t) &= \mathbf{v}^p(t+\delta t) + c_1\,\Delta\mathbf{a}\\
\mathbf{a}^c(t+\delta t) &= \mathbf{a}^p(t+\delta t) + c_2\,\Delta\mathbf{a}\\
\mathbf{b}^c(t+\delta t) &= \mathbf{b}^p(t+\delta t) + c_3\,\Delta\mathbf{a}
\end{aligned} \tag{3-10}$$

where the coefficients are chosen to optimize stability and accuracy [as discussed by Gear, 1966, 1971]. For the usual equations of motion, $c_0 = \tfrac{1}{6}\,\delta t^2/2$, $c_1 = \tfrac{5}{6}\,\delta t/2$, $c_2 = 1$, $c_3 = \tfrac{1}{3}\cdot 3/\delta t$.
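A single timestep of this scheme can be written, in the same Fortran-like pseudo-code style as before, roughly as follows (a sketch only, not taken from the notes: the routine forces, the mass m, the timestep delta_t, the Gear coefficients c0..c3 and the arrays r, v, a, b, f are all assumed to exist already).

r = r + delta_t*v + (delta_t**2/2.0)*a + (delta_t**3/6.0)*b   ! predictor, eqn (3-9)
v = v + delta_t*a + (delta_t**2/2.0)*b
a = a + delta_t*b
! b is unchanged by the predictor
call forces ( r, f, poteng )        ! correct forces at the predicted positions
da = f/m - a                        ! correction factor, delta a
r = r + c0*da                       ! corrector, eqn (3-10)
v = v + c1*da
a = a + c2*da
b = b + c3*da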


    3.2.3 Comments

In principle the evaluation of forces and the correction step could be iterated, to further refine the new values, but this would be expensive.

The approach is very general, and has been incorporated into standard packages for solving ordinary differential equations.

The method is not time reversible.

Accuracy is high, especially for short timesteps.

Stability, as measured by energy conservation, degrades rapidly as the timestep increases.

    These last two points are almost synonymous with being a high-order method.


    3.3 Propagators and the Verlet algorithm

The velocity Verlet algorithm may be derived by considering a standard approximate decomposition of the Liouville operator which preserves reversibility and is symplectic (which implies that volume in phase space is conserved - a good thing).

    3.3.1 Trotter decomposition

Begin with the formal propagation of positions and momenta

$$\begin{pmatrix} q(t) \\ p(t) \end{pmatrix} = U(t)\begin{pmatrix} q(0) \\ p(0) \end{pmatrix} = e^{iLt}\begin{pmatrix} q(0) \\ p(0) \end{pmatrix} \tag{3-11}$$

where the Liouville operator $iL = \mathrm{d}/\mathrm{d}t$ is (recall eqn (2-12))

$$iL = \dot{q}\,\frac{\partial}{\partial q} + \dot{p}\,\frac{\partial}{\partial p} \tag{3-12}$$


$U$ is unitary, i.e. $U^{-1}(t) = U(-t)$, which means that it is reversible. It is an exact result for any operator that

$$e^{iLt} = \lim_{P\to\infty}\left(e^{iL\,\delta t}\right)^P \qquad\text{where } \delta t = t/P$$

In molecular dynamics we seek to use this equation, but with finite $P$, and using an approximation to $e^{iL\,\delta t}$ which becomes good in the limit of small timestep $\delta t$.

$$\begin{pmatrix} q(\delta t) \\ p(\delta t) \end{pmatrix} = e^{iL\,\delta t}\begin{pmatrix} q(0) \\ p(0) \end{pmatrix} = e^{\left(\dot{q}\frac{\partial}{\partial q} + \dot{p}\frac{\partial}{\partial p}\right)\delta t}\begin{pmatrix} q(0) \\ p(0) \end{pmatrix}. \tag{3-13}$$


The approximation will involve splitting $L$ into two parts

$$L = L_1 + L_2\;.$$

The Trotter formula

$$e^{(iL_1 + iL_2)t} = \lim_{P\to\infty}\left(e^{iL_1\,\delta t}\,e^{iL_2\,\delta t}\right)^P \tag{3-14}$$

suggests that we use the approximation

$$e^{(iL_1 + iL_2)\delta t} = e^{iL_1\,\delta t}\,e^{iL_2\,\delta t} + \mathcal{O}(\delta t^2)\;.$$

This is an approximation, not exact, because in general $L_1$ and $L_2$ do not commute. Unfortunately this is not quite what we want, because it is not reversible. In other words, propagation forward by $\delta t$ and then back again does not regenerate the original $q(0)$ and $p(0)$.


    Instead use

$$e^{(iL_1+iL_2)\delta t} = e^{iL_1\delta t/2}\,e^{iL_2\delta t}\,e^{iL_1\delta t/2} + \mathcal{O}(\delta t^3) \equiv \tilde{U}(\delta t)\;. \tag{3-15}$$

This is evidently a unitary (i.e. reversible) propagator because when we propagate forward and then back we get

$$\left(e^{-iL_1\delta t/2}\,e^{-iL_2\delta t}\,e^{-iL_1\delta t/2}\right)\left(e^{iL_1\delta t/2}\,e^{iL_2\delta t}\,e^{iL_1\delta t/2}\right)$$

and the terms cancel (starting from the middle and working out). The propagator $\tilde{U}$ has all the desired properties and approximates the true propagator $U$. Making a suitable choice of $L_1$ and $L_2$ we may break it down into an algorithm in which each molecular dynamics timestep consists of a simple succession of operations, each involving either $L_1$ or $L_2$.


    3.3.2 Velocity Verlet Propagator

    Tuckerman et al. [1992] consider

$$iL_1 = \dot{p}\,\frac{\partial}{\partial p} = f\,\frac{\partial}{\partial p} = ma\,\frac{\partial}{\partial p} \tag{3-16}$$

$$iL_2 = \dot{q}\,\frac{\partial}{\partial q} = (p/m)\,\frac{\partial}{\partial q} \tag{3-17}$$

To see the effect of the operators on $q$ and $p$, use an operator identity

$$e^{c\,\partial/\partial q}\,q = q + c$$

valid for any function of $q$, provided that $c$ does not depend on $q$, and similarly

$$e^{c\,\partial/\partial p}\,p = p + c\;.$$

These are just Taylor series, written in a concise way.


The first step, with $e^{iL_1\delta t/2} = e^{\frac{1}{2}\delta t\,\dot{p}\,\partial/\partial p}$, is

$$e^{\frac{1}{2}\delta t\,\dot{p}\,\partial/\partial p}\begin{pmatrix} q(0) \\ \dot{q}(0) \end{pmatrix} = \begin{pmatrix} q(0) \\ \dot{q}(0) + \tfrac{1}{2}\delta t\,a(0) \end{pmatrix} \equiv \begin{pmatrix} q(0) \\ \dot{q}(\tfrac{1}{2}\delta t) \end{pmatrix}. \tag{3-18}$$

How does this work? $L_1$ just differentiates with respect to $p$, so it has no effect on the coordinate $q(0)$. Its effect on $\dot{q}(0) = p(0)/m$ is straightforward, since this is a linear function of $p$; here the constant $c$ is $\tfrac{1}{2}\delta t\,\dot{p}$, so we just get $\dot{q}(0) + \tfrac{1}{2}\delta t\,a(0)$. We have simply given a name to this in the last step. So this corresponds to the computer step (see the pseudo-code given previously)

    v = v + (delta_t/2) * a


The second step, with $e^{iL_2\delta t} = e^{\delta t\,\dot{q}\,\partial/\partial q}$, is

$$e^{\delta t\,\dot{q}\,\partial/\partial q}\begin{pmatrix} q(0) \\ \dot{q}(\tfrac{1}{2}\delta t) \end{pmatrix} = \begin{pmatrix} q(0) + \delta t\,\dot{q}(\tfrac{1}{2}\delta t) \\ \dot{q}(\tfrac{1}{2}\delta t) \end{pmatrix} \equiv \begin{pmatrix} q(\delta t) \\ \dot{q}(\tfrac{1}{2}\delta t) \end{pmatrix}. \tag{3-19}$$

$L_2$ has no effect on the velocity $\dot{q}$. Its effect on $q(0)$ is straightforward: $c$ is $\delta t\,\dot{q}$, so we just get $q(0) + \delta t\,\dot{q}(\tfrac{1}{2}\delta t)$. Notice that the operator acts on the quantity provided to it, which is why $\dot{q}(\tfrac{1}{2}\delta t)$ appears in this result rather than, say, $\dot{q}(0)$. This is, in terms of $t=0$ values, $\dot{q}(\tfrac{1}{2}\delta t) = \dot{q}(0) + \tfrac{1}{2}\delta t\,a(0)$, so multiplying in the extra factor of $\delta t$ we see that this corresponds to the computer step (see the pseudo-code given previously)

    r = r + delta_t * v + ((delta_t**2)/2) * a


The third step, with $e^{iL_1\delta t/2} = e^{\frac{1}{2}\delta t\,\dot{p}\,\partial/\partial p}$, is

$$e^{\frac{1}{2}\delta t\,\dot{p}\,\partial/\partial p}\begin{pmatrix} q(\delta t) \\ \dot{q}(\tfrac{1}{2}\delta t) \end{pmatrix} = \begin{pmatrix} q(\delta t) \\ \dot{q}(\tfrac{1}{2}\delta t) + \tfrac{1}{2}\delta t\,a(\delta t) \end{pmatrix} \equiv \begin{pmatrix} q(\delta t) \\ \dot{q}(\delta t) \end{pmatrix} \tag{3-20}$$

This goes very much like step 1, with the constant $c$ being $\tfrac{1}{2}\delta t\,\dot{p}$ and $\dot{p}$ being the force corresponding to the current coordinates. So this corresponds to the final computer step (see the pseudo-code given previously)

    v = v + (delta_t/2) * a

and the combined effect of all three steps is the velocity Verlet algorithm

$$\tilde{U}(\delta t)\,\bigl(q(0),\,\dot{q}(0)\bigr) = \bigl(q(\delta t),\,\dot{q}(\delta t)\bigr)\;.$$


    3.4 The force loop

The other essential element of a molecular dynamics program is the calculation of forces. The first step is the correct differentiation of the potential energy function.

    3.4.1 The basic expressions

For simplicity we assume an atom-atom (or site-site) pairwise additive potential $V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N) = \sum_i\sum_{j>i} v(|\mathbf{r}_i - \mathbf{r}_j|)$ as discussed in section 1.2.2. Denoting the force on atom $i$ due to atom $j$ by $\mathbf{f}_{ij}$:

$$\mathbf{f}_{ij} = -\nabla_{\mathbf{r}_i} v(r_{ij})\;, \qquad \mathbf{f}_{ji} = -\nabla_{\mathbf{r}_j} v(r_{ij}) = -\mathbf{f}_{ij}\;.$$


Usually the easiest way to express this is to apply the chain rule

$$\nabla_{\mathbf{r}}\,v(r) = \left(\frac{\mathrm{d}v}{\mathrm{d}r}\right)\nabla_{\mathbf{r}}\,r = \left(\frac{\mathrm{d}v}{\mathrm{d}r}\right)\frac{\mathbf{r}}{r}$$

where we used the identity

$$\frac{\partial r}{\partial x} = \frac{\partial\sqrt{x^2+y^2+z^2}}{\partial x} = \frac{x}{r}$$

and similarly for $y$ and $z$. The bottom line is that we can express the force as

$$\mathbf{f}_{ij} = -\left(r_{ij}^{-1}\,\frac{\mathrm{d}v}{\mathrm{d}r_{ij}}\right)\mathbf{r}_{ij} = -\mathbf{f}_{ji} \tag{3-21}$$

Note the possibility of using Newton's Third Law: having calculated $\mathbf{f}_{ij}$ we do not need to calculate $\mathbf{f}_{ji}$ afresh.


    3.4.2 Code optimization

The calculation of forces is performed for every distinct pair of atoms, and is therefore the most time-consuming part of any molecular dynamics program. Accordingly, it is in the force loop that we must pay some attention to program efficiency.

Computer chips are still very inefficient at computing square roots and divisions, compared with additions, subtractions and multiplications. For the Lennard-Jones potential, note that we can completely avoid square roots:

$$v(r) = 4r^{-12} - 4r^{-6} \quad\Rightarrow\quad -r^{-1}\,\frac{\mathrm{d}v}{\mathrm{d}r} = 48r^{-14} - 24r^{-8}$$

which can be expressed in terms of $1/r^2$. Note how we have already slipped into reduced units, in which the Lennard-Jones $\epsilon$ and $\sigma$ are set to unity. If a square root or division (as here) is unavoidable, it is best to do it just once.


    This translates into Fortran-like code as

rijsq = rxij**2 + ryij**2 + rzij**2
r2ij  = 1.0/rijsq
r6ij  = r2ij*r2ij*r2ij
r12ij = r6ij*r6ij
vij   = r12ij - r6ij
fij   = (vij + r12ij)*r2ij
pot   = pot + vij
fxij  = fij*rxij
fyij  = fij*ryij
fzij  = fij*rzij


    Then the individual atomic forces are updated as follows.

fx(i) = fx(i) + fxij
fy(i) = fy(i) + fyij
fz(i) = fz(i) + fzij
fx(j) = fx(j) - fxij
fy(j) = fy(j) - fyij
fz(j) = fz(j) - fzij

Within the loop, we often leave numerical factors (4 and 24 above) to be multiplied into the total potential energy and the forces later. For more realistic and general potentials, some potential parameters inevitably appear inside the loop.
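To show where these fragments sit, here is a sketch of the complete double loop over distinct pairs (illustrative, not verbatim from the notes; the number of atoms n, the box length box, the squared cutoff rcutsq and the position and force arrays are assumed, and the minimum image correction anticipates the next subsection).

fx = 0.0
fy = 0.0
fz = 0.0
pot = 0.0
do i = 1, n-1
   do j = i+1, n
      rxij = rx(i) - rx(j)
      ryij = ry(i) - ry(j)
      rzij = rz(i) - rz(j)
      rxij = rxij - anint(rxij/box)*box      ! minimum image (see next subsection)
      ryij = ryij - anint(ryij/box)*box
      rzij = rzij - anint(rzij/box)*box
      rijsq = rxij**2 + ryij**2 + rzij**2
      if ( rijsq .lt. rcutsq ) then          ! spherical cutoff
         r2ij  = 1.0/rijsq
         r6ij  = r2ij*r2ij*r2ij
         r12ij = r6ij*r6ij
         vij   = r12ij - r6ij
         fij   = (vij + r12ij)*r2ij
         pot   = pot + vij
         fx(i) = fx(i) + fij*rxij            ! Newton's third law: update i and j together
         fy(i) = fy(i) + fij*ryij
         fz(i) = fz(i) + fij*rzij
         fx(j) = fx(j) - fij*rxij
         fy(j) = fy(j) - fij*ryij
         fz(j) = fz(j) - fij*rzij
      end if
   end do
end do
pot = 4.0*pot                                ! restore the numerical factors once, outside the loop
fx = 24.0*fx
fy = 24.0*fy
fz = 24.0*fz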


3.4.3 Minimum image and cutoff

Before the forces are calculated, the periodic boundary conditions must be taken into account. A simple way of applying the minimum image convention to the atom-atom vector is

rxij = rx(i) - rx(j)
ryij = ry(i) - ry(j)
rzij = rz(i) - rz(j)
rxij = rxij - anint(rxij/box)*box
ryij = ryij - anint(ryij/box)*box
rzij = rzij - anint(rzij/box)*box

The anint function in Fortran returns the nearest integer (positive or negative), so this operation returns coordinates all in the range $-\tfrac{1}{2}L \ldots \tfrac{1}{2}L$, where $L$ is the box length. This will work no matter how many box lengths apart the particle coordinates are.


Different compilers and computer architectures vary enormously in the efficiency with which they implement this function. On some machines it is better to use

if ( abs(rxij) .gt. box2 ) rxij = rxij - sign(box,rxij)

    or even

if ( rxij .gt.  box2 ) rxij = rxij - box
if ( rxij .lt. -box2 ) rxij = rxij + box

where box2 is $\tfrac{1}{2}L$. The sign function returns a number having the magnitude of its first argument and the sign of its second. Both of these operations will only work if the coordinates are not too far apart, as they only subtract one box length (or none).


Therefore, to be sure that they will work, particle coordinates must always be reset into the periodic box whenever they stray outside, i.e. a check must be made whenever the atoms are moved. This check has a similar form to the minimum image convention:

rx(i) = rx(i) - anint(rx(i)/box)*box
ry(i) = ry(i) - anint(ry(i)/box)*box
rz(i) = rz(i) - anint(rz(i)/box)*box


Problem 3.4 Download the Fortran code for this problem from the course home page. You will need: (a) a Fortran-77 program md.f for Lennard-Jones atoms; (b) an include file md.inc; (c) a starting configuration md.old.

Compile and run the program; it will ask you to type in a few values for the run (or you could give them in a small input file). Do a run of 20 blocks, each of 100 steps, with a time-step $\delta t = 0.005$ and a potential cutoff $r_c = 2.5$ (both in reduced units). The program should work, but it is not as efficient as it could be and the calculated pressure is wrong. Fix both these problems by changing the force subroutine. Avoid the taking of a square root, use Newton's third law to avoid considering $ij$ and $ji$ separately, avoid storing the forces in a two-dimensional array, and experiment with different forms of the minimum image correction. Then fix the virial expression for the pressure; you should get $\langle P\rangle \approx 5.5$ in reduced units. Answer provided.


    4 Monte Carlo

In this section we consider in some detail how ensemble averages might be calculated, why the most straightforward approach is impractical, and how we may design a practical Monte Carlo technique.

    4.1 Crude sampling

How might we evaluate a partition function (or ensemble average) using a computer? We consider this here, and then see why we need to modify this approach in practice.


    4.1.1 Sum over states: spin systems

Consider calculating by brute force the sum over states. The cheapest case is for a lattice spin system having nearest-neighbour interactions.

$$Q_{NVT} = \sum_\nu \exp\{-\beta E_\nu\}$$

where each state $\nu$ represents a set of $N$ spins $s_i = \pm 1$. Assume that one spin's interactions with its neighbours can be computed in a few floating-point operations, taking $\sim 10^{-6}\,\mathrm{s}$, so the energy $E_\nu$ of the whole system will take $\sim N\times 10^{-6}\,\mathrm{s}$. The above sum contains $2^N$ terms.


If $N = 1000$, we take about a millisecond to compute the state energy, but the total number of states is $2^N = 2^{1000} \approx 10^{300}$. The estimated time for the sum will be $\sim 10^{297}$ s. There is no hope whatever of summing over all these states. Frustratingly, almost all the terms will be vanishingly small, because $E_\nu$ will be extremely high, corresponding to very unlikely spin configurations in a Boltzmann-weighted average.
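For a very small system the brute-force sum is, however, perfectly feasible, and it makes the $2^N$ scaling concrete. The following Fortran sketch (not from the notes; the open chain with coupling $J = 1$ is an assumed illustrative model) enumerates all states by using the bits of an integer to represent the spins.

program brute_force_ising
  implicit none
  integer, parameter :: n = 10            ! 2**n = 1024 states; the cost doubles with each extra spin
  real,    parameter :: beta = 1.0
  integer :: state, i, s(n)
  real    :: energy, boltz, q, esum
  q    = 0.0
  esum = 0.0
  do state = 0, 2**n - 1
     do i = 1, n                           ! decode bit i-1 of "state" into a spin +/-1
        s(i) = 2*ibits(state, i-1, 1) - 1
     end do
     energy = 0.0
     do i = 1, n-1                         ! nearest-neighbour interactions, open ends
        energy = energy - s(i)*s(i+1)
     end do
     boltz = exp(-beta*energy)
     q     = q    + boltz                  ! accumulate the partition function ...
     esum  = esum + energy*boltz           ! ... and the Boltzmann-weighted energy
  end do
  print *, 'Q = ', q, '   <E> = ', esum/q
end program brute_force_ising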


    4.1.2 Life is too short

Really, a 1000-spin or 1000-atom system is quite small: $\sim 10$ molecular lengths across. Even a $10^6$-spin system is not enormous. Let's turn the problem around. If we want an answer in about two weeks (say $10^6$ seconds) then

$$2^N \times N \times 10^{-6} \approx 10^6 \quad\Rightarrow\quad N \approx 35\;.$$

We see that we may only tackle a rather small system in this brute-force way. The factor $N$ on the left of this equation does not affect the answer very much, so $N$ will increase only logarithmically as the available computer power goes up. If someone gives us a computer next year that is twice as powerful as our current model, we will be able to tackle $N = 36$ instead of $N = 35$: not very encouraging. Looked at the other way, our needs grow exponentially with $N$ if we approach the problem this way.


Problem 4.1 This is not to say that direct counting is useless; results for small systems can be informative. Download the Maple worksheet for this problem from the course home page. This worksheet sets up a direct calculation of the partition function and related quantities for a three-spin system. Experiment with this, increasing the number of spins and noting the form of the various functions. Answer provided.


    4.1.3 Sum over states: atomic systems

For systems with continuous degrees of freedom things are even worse. Suppose we coarse-grained the atomic coordinates and momenta to have just 10 values each (this is very crude indeed, and 1000 might be a more reasonable value). The number of states will be $10^{6N}$, which is huge for any reasonable value of $N$. The overwhelming proportion of these states will have very high values of $E$, corresponding to atoms that overlap each other.


    4.1.4 A cheap and nasty method

Return to the spin system mentioned above. We can only afford, say, $10^6$ calculations of the energy, i.e. we can only look at $10^6$ states of the system out of the total $\sim 10^{300}$. One approach is to do a random selection of states, assuming that they are a representative sample of the whole set. We could then proportionately scale up the results. This is rather like doing a Monte Carlo integration of a multi-dimensional integral instead of using the trapezoidal rule.

Now for the bad news. We would be outstandingly fortunate to pick a random state which did not have an unphysically high value of $E$. There is essentially no hope that our sample will be representative. This method will give the wrong answer (zero, most likely).

    We need a new method: importance sampling.


    4.2 Importance sampling

Now we consider a smarter approach: one based on selecting the states in a non-random way. We concentrate on the important states, the ones with high $\exp\{-\beta E_\nu\}$. The technique is called importance sampling. We note immediately that this will mean giving up on the calculation of $Q_{NVT}$ (for which the full sum is unavoidable); the method will just be good for averages.


    4.2.1 The general scheme

The most obvious way of choosing states is such that the probability of selecting a state $\nu$ is proportional to $\varrho_\nu \propto \exp\{-\beta E_\nu\}$. This is easier than it sounds, and we shall see how to do it shortly. Then, if we have conducted $N_t$ observations or steps in the process, the ensemble average becomes an average over steps

$$\langle A\rangle_{NVT} = \sum_\nu \varrho_\nu A_\nu \approx \frac{1}{N_t}\sum_{t=1}^{N_t} A_t\;.$$

The Boltzmann weight appears implicitly in the way the states are chosen. This is like a time average as calculated in molecular dynamics.


    4.2.2 The transition matrix

Our method will involve designing a stochastic algorithm for stepping from one state of the system to the next, generating a trajectory. This will take the form of a Markov chain, specified by transition probabilities which are independent of the prior history of the system.

Let $m$ and $n$ be short for states $\nu_m$ and $\nu_n$, and abbreviate $\varrho_m \equiv \varrho_{\nu_m}$.

This may then be treated as a component of a (very large) column vector $\boldsymbol{\varrho}$. We are hoping that we can devise a scheme that will produce, as an equilibrium, steady-state ensemble, the canonical distribution

$$\varrho_m \propto \exp\{-\beta E_m\}\;.$$


    Consider an ensemble of systems all evolving at once.

Specify a matrix $\boldsymbol{\Pi}$ whose elements $\Pi(n\leftarrow m)$ give the probability of going to state $n$ from state $m$, for every $m, n$.

$\boldsymbol{\Pi}$ must satisfy $\sum_n \Pi(n\leftarrow m) = 1$ for all $m$, since a state must go somewhere.

At each step, implement jumps with this transition matrix.

This generates a Markov chain of states.

Feller's theorem: subject to some reasonable conditions, there exists a limiting (equilibrium) distribution of states and the system will tend towards this limiting distribution.


The matrix element $\Pi(n\leftarrow m)$ is the conditional probability, given the current state $m$, of going to state $n$. (Note that sometimes, for example in Allen and Tildesley [1987], this matrix is defined with the order of indices interchanged.) This definition allows us to write the effect of the transitions on an initial distribution $\boldsymbol{\varrho}^{(0)}$ as a matrix equation

$$\varrho^{(1)}_n = \sum_m \Pi(n\leftarrow m)\,\varrho^{(0)}_m$$

or, more concisely, $\boldsymbol{\varrho}^{(1)} = \boldsymbol{\Pi}\,\boldsymbol{\varrho}^{(0)}$. Here $\boldsymbol{\varrho}^{(t)}$ represents the distribution after $t$ steps.


    4.2.3 The limiting distribution

Feller's theorem [Feller, 1957] states that, given $\boldsymbol{\Pi}$ and subject to some reasonable conditions, there exists a limiting (equilibrium) distribution of states and that the system will tend towards this limiting distribution. We want to design an algorithm, that is, specify a matrix $\boldsymbol{\Pi}$, for which this is the canonical distribution.

Turning now to the details, consider what happens to an initial, arbitrary, nonequilibrium ensemble $\boldsymbol{\varrho}^{(0)}$. Repeated application of the transition matrix will produce, after $t$ steps, a distribution

$$\boldsymbol{\varrho}^{(t)} = \boldsymbol{\Pi}\,\boldsymbol{\varrho}^{(t-1)} = \boldsymbol{\Pi}^t\,\boldsymbol{\varrho}^{(0)}\;.$$

Clearly, if we reach a limiting distribution $\boldsymbol{\varrho}^{(t\to\infty)} = \boldsymbol{\varrho}$ it will satisfy

$$\boldsymbol{\varrho} = \boldsymbol{\Pi}\,\boldsymbol{\varrho}\;,$$

an eigenvector equation, with eigenvalue 1, independent of $\boldsymbol{\varrho}^{(0)}$.


This equation has a simple physical interpretation. Writing the eigenvalue equation out in full,

$$\sum_n \Pi(m\leftarrow n)\,\varrho_n = \varrho_m = \sum_n \Pi(n\leftarrow m)\,\varrho_m$$

we see that the left side represents the rate of arrival of systems at state $m$ from everywhere, and the right side is the rate of departure from $m$ to everywhere.
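A small numerical illustration of this convergence (not from the notes; the $3\times 3$ matrix below is invented purely for the example) is to apply a column-stochastic matrix repeatedly to an arbitrary starting distribution and watch it settle onto the eigenvector with eigenvalue 1.

program markov_limit
  implicit none
  real    :: pi(3,3), rho(3)
  integer :: t
  pi(:,1) = (/ 0.5, 0.3, 0.2 /)     ! each column sums to 1:
  pi(:,2) = (/ 0.2, 0.6, 0.2 /)     ! element pi(n,m) is the probability of going m -> n
  pi(:,3) = (/ 0.3, 0.3, 0.4 /)
  rho = (/ 1.0, 0.0, 0.0 /)         ! arbitrary initial distribution
  do t = 1, 100
     rho = matmul(pi, rho)          ! one application of the transition matrix
  end do
  print *, 'limiting distribution ', rho
end program markov_limit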

Problem 4.2 Download the Maple worksheet for this problem from the course home page. This worksheet sets up a typical transition matrix for a three-spin system (eight states), and defines an initial distribution. Calculate the distribution after one step, then iterate for many steps. Compare the final distribution with the eigenvector calculated directly from the eigenvects command in Maple, corresponding to eigenvalue 1. Make a note of the other eigenvalues. Answer provided.


    4.2.4 Microscopic reversibility

How shall we choose the transition matrix? A useful (but not essential) restriction that we may impose on ourselves, so as to guarantee the truth of the last equation, is microscopic reversibility

$$\Pi(m\leftarrow n)\,\varrho_n = \Pi(n\leftarrow m)\,\varrho_m\;.$$

An immediate consequence of this is that the ratio of probabilities $\varrho_n/\varrho_m$ is equal to the ratio of transition matrix elements $\Pi(n\leftarrow m)/\Pi(m\leftarrow n)$. This relation should be familiar to those with a chemistry background: it expresses the equilibrium constant for a chemical reaction as the ratio of forward and backward rate constants. What we are doing here is choosing the rate constants (and we only need to fix the ratio) in order to guarantee a desired equilibrium constant.


    4.2.5 The Metropolis prescription

There are still many ways to choose $\boldsymbol{\Pi}$. If our choice of $\boldsymbol{\Pi}$ satisfies $\varrho_n/\varrho_m = \Pi(n\leftarrow m)/\Pi(m\leftarrow n)$ with, in our case, the Boltzmann distribution for $\boldsymbol{\varrho}$, we will have succeeded in devising a suitable Monte Carlo algorithm. One such prescription is due to Metropolis et al. [1953]. The elements of $\boldsymbol{\Pi}$ are written as follows:

$$\begin{aligned}
\Pi(n\leftarrow m) &= \alpha(n\leftarrow m) &&\text{if } \varrho_n \geq \varrho_m,\ n\neq m\\
\Pi(n\leftarrow m) &= \alpha(n\leftarrow m)\,(\varrho_n/\varrho_m) &&\text{if } \varrho_n < \varrho_m,\ n\neq m\\
\Pi(m\leftarrow m) &= 1 - \textstyle\sum_{n\neq m}\Pi(n\leftarrow m)\;.
\end{aligned}$$

Here, $\boldsymbol{\alpha}$ is an underlying matrix, essentially dictating the probability of attempting a move like $n\leftarrow m$, and the other factors give the probability of accepting such a move.


$\boldsymbol{\alpha}$ is symmetrical, $\alpha(n\leftarrow m) = \alpha(m\leftarrow n)$, i.e. the probability for attempting a move to state $n$ (given that you are currently in state $m$) should be the same as the probability for attempting a move to state $m$ (if you are in state $n$).

Problem 4.3 Show that the Metropolis prescription is microscopically reversible. Hint: suppose (without loss of generality) that $\varrho_n \geq \varrho_m$. Answer provided.

This scheme only requires a knowledge of the ratio $\varrho_n/\varrho_m$, which equals

$$\exp\{-\beta(E_n - E_m)\}$$

in our case. It does not require knowledge of the factor normalizing the $\varrho$'s, i.e. the partition function.
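In a program the Metropolis prescription boils down to a simple acceptance test on each attempted move. The following Fortran-like fragment is a sketch (names such as e_old, e_new, beta and zeta are assumed, not taken from the notes): the trial move is accepted with probability $\min\bigl(1,\exp\{-\beta(E_n - E_m)\}\bigr)$.

delta_e = e_new - e_old
if ( delta_e .le. 0.0 ) then
   accept = .true.                          ! downhill (or equal-energy) moves always accepted
else
   call random_number ( zeta )              ! zeta uniform on [0,1)
   accept = ( zeta .lt. exp(-beta*delta_e) )
end if
if ( accept ) then
   e_old = e_new                            ! keep the trial configuration
else
   continue                                 ! reject: retain the old configuration, counted again in averages
end if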


    4.2.6 The symmetrical scheme

Another scheme, much less frequently used, has a more symmetrical choice of $\boldsymbol{\Pi}$:

$$\Pi(n\leftarrow m) = \alpha(n\leftarrow m)\left(\frac{\varrho_n}{\varrho_n + \varrho_m}\right)$$

which again satisfies microscopic reversibility and requires only the ratio $\varrho_n/\varrho_m$.


Problem 4.4 One of the eigenvalues of $\boldsymbol{\Pi}$ is always unity; earlier on, for the three-spin example, you were asked to make a note of the other eigenvalues of the transition matrix. What relevance, if any, do you think these have to the choice between, say, the Metropolis prescription and the symmetrical scheme? Answer provided.


    4.3 Selecting moves

We have a great deal of freedom in the way in which we select moves. This also gives us the opportunity to get it wrong. Later we shall consider how to bias the selection process, and then correct for the bias in the way we accept the moves, or in the way we calculate ensemble averages. For now, the priority is to select moves in an unbiased way.

In the following, we may choose a spin, atom, or molecule randomly, meaning with equal probability from the complete set, using a random number generator. It has recently been shown that choosing them sequentially also generates the correct limiting distribution; although this has been assumed true for many years, it is not immediately obvious, because the process is then not a Markov process.


    4.3.1 Move selection for lattice spin system

    Typical move for constant-T Monte Carlo:

    Pick a spin at random.

Flip it, generating a new state $n$.

A large number of such states exists.

The value of $\alpha(n\leftarrow m)$ is correspondingly small. We don't need to know it. All we need is that it be equal to $\alpha(m\leftarrow n)$.

Consider the reverse move $m\leftarrow n$, in the context of all possible moves out of state $n$.

Think! Evidently $\alpha(n\leftarrow m) = \alpha(m\leftarrow n)$, as required.
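Combining this move selection with the Metropolis acceptance of the previous section, a single spin-flip attempt might look like the following sketch (illustrative only, not from the notes: a one-dimensional chain s(1:n) with periodic boundaries and coupling j_coupling is assumed).

call random_number ( zeta )
i  = int( zeta*n ) + 1                              ! pick a spin at random, 1..n
ip = i + 1
if ( ip .gt. n ) ip = 1                             ! periodic neighbours
im = i - 1
if ( im .lt. 1 ) im = n
delta_e = 2.0*j_coupling*s(i)*( s(ip) + s(im) )     ! energy change on flipping s(i)
call random_number ( zeta )
if ( zeta .lt. exp(-beta*delta_e) ) s(i) = -s(i)    ! Metropolis acceptance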


Figure 4.1 Forward and reverse moves for a spin system. [The figure shows the two spin configurations, state $m$ and state $n$, related by a single spin flip.]


    4.4 Move selection for atomic system

    Typical move for constant-NVT Monte Carlo:

    Pick an atom at random.

    Choose trial disp