
MÉTODOS COMPUTACIONALES EN FÍSICA DE LA MATERIA CONDENSADA

(COMPUTATIONAL METHODS IN CONDENSED-MATTER PHYSICS)

http://www.uam.es/enrique.velasco/master/cmcmp

Máster en Física de la Materia Condensada y Nanotecnología

PART I

E. Velasco

Departamento de Física Teórica de la Materia Condensada

Facultad de Ciencias

Universidad Autónoma de Madrid


CONTENTS

Bibliography
Tools used in this course
A brief introduction to computation in Condensed Matter Physics
The Monte Carlo method
The method of Molecular Dynamics
List of computer codes


BIBLIOGRAPHY

Most of the topics discussed in these notes are reviewed in more detail in one of the following textbooks:

• D. Frenkel and B. Smit, Understanding molecular simulation: from algorithms to applications (Academic Press Inc., San Diego, 1996).

• M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids (Oxford University Press, 1987).

The following research papers may also be useful:

• L. Verlet, Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules, Phys. Rev. 159, 98 (1967).

• F. H. Ree and W. G. Hoover, Fifth and Sixth Virial Coefficients for Hard Spheres and Hard Disks, J. Chem. Phys. 40, 939 (1964).

Copies of these papers can be downloaded from the course web page:

http://www.uam.es/enrique.velasco/master/cmcmp

A more complete bibliography can be found on this web page.

TOOLS USED IN THIS COURSE

These notes contain a number of computer codes written in the FORTRAN language, which can be downloaded from the course web page. Therefore, access to a FORTRAN compiler is required. Use of a graphics package, such as gnuplot, will also be necessary. The practical problems of the course, to be done in the PC lab, will require the use of these tools.


1. A BRIEF INTRODUCTION TO COMPUTATION IN CONDENSED MATTER PHYSICS

Computational methods are nowadays essential to explore the behaviour of condensed phases of matter. By 'computational methods' we do not mean methods that simply rely heavily on the use of the computer to solve a particular problem; for instance, a mesoscopic Hamiltonian of the Ginzburg–Landau type, formulated in terms of fields representing local coarse-grained order parameters, could be used to obtain the structure of a complex fluid confined by complicated boundaries. This problem would certainly demand heavy computational resources, since the functional representing the Hamiltonian would have to be minimised with respect to a huge number of variables.

Rather, the methods we set out to describe (albeit in a sketchy way) are formulated in terms of microscopic variables defining the state of assemblies of particles, either quantum or classical, and of the interactions between these particles. Particle variables appear explicitly in the problem, which is then microscopic in nature, and we would like to follow the evolution of these particles in time, and/or simply obtain average properties of the system in order to make contact with its macroscopic properties. In essence, we are looking at statistical mechanics from a purely microscopic point of view, using the basic methods of classical and quantum mechanics, and we discuss methods to solve the microscopic equations governing the microscopic variables. These methods can properly be called 'computer simulation'.

In the last 60 years computers have brought about a complete upheaval of the field. Today it is possible to follow on a computer the dynamical evolution of systems of millions of particles interacting in more or less complicated ways. This is still far from the thermodynamic limit, and may even be far from typical numbers in surface science, but finite-sized systems of this kind already contain many of the essential properties of real systems. The aims and methods of statistical mechanics can now be explicitly put to work on a computer. The two basic methods of computational statistical mechanics, the Monte Carlo (MC) method and the Molecular Dynamics (MD) method, will be introduced in the first part of this course. The two basic approaches of statistical mechanics, based on time averages and on ensemble averages, are realised by the MD and MC methods, respectively.

The role played by computer simulation is two-fold. Given a physical system, one devises a model to describe the essential features of the system. This model can then be analysed using theoretical approaches or computer simulation. Since the former necessarily entail approximations, whereas the latter is essentially exact, a comparison of the two allows one to draw conclusions as to the accuracy of the approximations involved in the theoretical approach. Experiments, on the other hand, are made directly on the real system, and a comparison with simulation results provides information on the adequacy of the model used to describe the real system. This two-fold role of computer simulation has helped improve our understanding of matter very significantly. Computer simulation does not lack limitations and problems, and has become a specialised field which requires good training, in much the same way as experimenters and theoreticians are experts in their own fields. The division between 'simulator' and theoretician or experimenter, however, is not clear-cut.

Limitations of computer simulations

There are spatial limitations in computer simulation. Systems of interest are usually very large (macroscopic) in size, and properties may sometimes be sensitive to this. It is then necessary to apply finite-size scaling theory to extract the thermodynamic limit. Also, close to second-order phase transitions or at fluid interfaces, large fluctuations must be treated with care and may require larger system sizes than the available computational resources allow.

There are also time limitations. For example, in aggregation dynamics, in phenomena such as critical slowing down near second-order phase transitions, or in the dynamics of inhomogeneous systems, the accessible computational times may sometimes be too short. When dealing with time scales close to typical hydrodynamic time scales (seconds), i.e. when there is macroscopic flow, long simulations are required. Relaxation times associated with covalent bonds, molecular conformations or translations, by contrast, are usually dealt with by present computers without too much effort.


2. THE MONTE CARLO METHOD

In physics and mathematics, any method that makes use of random numbers is called a Monte Carlo method. We will start with a short introduction to different methods to generate random deviates, or random numbers, in computational physics, and will then illustrate the importance-sampling technique in thermal physics using the Ising model.

2.1. How to generate uniformly distributed random numbers

The general problem that we set ourselves to solve is how to generate random deviates x, distributed according to some particular probability function f(x) in the interval [a, b]. Remember that

f(x)dx = probability that x lies within the interval [x, x + dx]

All FORTRAN compilers come with intrinsic functions that generate uniformly distributed random numbers, but very often we lack information as to which methodology is used, or what the performance of these generators is. Sometimes these routines are not sufficiently accurate for computations in statistical physics.

Virtually all methods that are available to solve this problem rely on methods to generate uniformly distributed random numbers, i.e. numbers distributed according to a constant probability function in the interval [0, 1]. This is the problem on which we now focus our attention.

Figure 1: Left: uniform distribution function. Right: histogram obtained from a finite sequence of uniformly distributed random numbers (i.e. numbers distributed according to the function on the left).

First we illustrate the method of congruences, using integer arithmetic. We first note that the digits in the product of the two integer numbers

12345 × 65539 = 809078955

look random. If we truncate each result to its last five digits and multiply again by 65539, we get

08090|78955 → 78955 × 65539 = 5174631745

51746|31745 → 31745 × 65539 = 2080535555

20805|35555 → 35555 × 65539 = ...


The number 12345 is called the seed, while the number 65539, which is used as a constant multiplicative factor in all steps, is called the base. If we divide the results of these successive truncation operations by 10⁵, we obtain the following sequence of numbers

0.78955, 0.31745, 0.35555, ...

which are apparently random; actually, since we have obtained the numbers through a well-defined sequence of steps (an algorithm), they are called pseudo-random numbers. A histogram of a finite sequence of these numbers would give a plot similar to that shown in the right panel of Fig. 1. On taking N → ∞, the histogram would tend to flatten out to unity in the whole interval.

Figure 2: Range of integer numbers that can be stored in a register of a 32-bit computer: from −2147483648 to 2147483647.

This procedure may pose a problem when implemented on some computers. For example, a 32-bit computer (4 bytes; each byte contains 8 bits) can only store in its internal CPU registers signed integers between −2³¹ and 2³¹ − 1 (see Fig. 2). Any number larger than 2³¹ − 1 is truncated, and can even be turned negative! For example, the product

78955× 65539 = 5174631745

is represented, in binary format, as

5174631745₁₀ = 100110100011011101001110101000001₂

which is a 33-bit number. On a 32-bit computer the 32nd bit (counting from the right) indicates the sign (a '1' would mean a negative number, a '0' a positive one), while the 33rd bit (the first from the left), which is a '1', is lost forever. The number would be stored in the register as

00110100011011101001110101000001₂

which, in decimal base, is 879664449 (far from the correct result, 5174631745). An even more dramatic example is

51326× 65539 = −931112582

In order to avoid this problem, if the number is negative we add the period, 2147483648 = 2147483647 + 1 (it is necessary to use the second form, 2147483647 + 1, since again 2³¹ = 2147483648 would not be accepted by the computer).
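A minimal sketch of this integer recipe follows, in the spirit of the classic 65539-based generators. The routine name irand and the scale factor are our own choices, and the code assumes 32-bit two's-complement integers whose overflow wraps around silently, as on the machines described above (modern compilers may have to be told to allow this):

      subroutine irand(iseed,r)
      integer*4 iseed
      real*4 r
!     multiply the current seed by the base; on a 32-bit machine the
!     product is silently truncated to 32 bits and may come out negative
      iseed = iseed*65539
!     if negative, add the period 2**31, written as 2147483647+1
      if (iseed .lt. 0) iseed = iseed + 2147483647 + 1
!     scale by 1/2**31 to obtain a number in [0,1)
      r = real(iseed)*4.656613e-10
      end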

Things are a little easier in real arithmetic. A possible FORTRAN code based on the above method (but with a base of 16807 and using real numbers) is provided below. These lines of code generate a uniformly distributed random number: note that the single-precision variable r is a random number in [0, 1]. The variable dseed is a double-precision number (e.g. dseed=173211d0) provided by the user.

001. !
002. ! P1: Random number generator
003. !
004. subroutine ggub(dseed,r)
005. real*8 z,d2p31m,d2pn31,dseed
006. data d2p31m/2147483647./,d2pn31/2147483648./
007. z = dseed
008. z = dmod(16807.*z,d2p31m)
009. r = z / d2pn31
010. dseed = z
011. end

The crucial lines are 008 and 009. In line 008, the function dmod multiplies by the base, divides by the number stored in d2p31m (which plays the role of the 10⁵ of the previous example) and takes the remainder. Then, in line 009, division by d2pn31 puts the number in the interval (0, 1). The values of the parameters are optimised to give a stable and reliable random number generator.

Once one knows how to generate uniformly distributed numbers, i.e. numbers distributed according to the probability function f(x) = 1 in the interval [0, 1], it becomes an easy task to generate numbers distributed uniformly according to the constant (normalised) probability function f(x) = 1/(b − a) in [a, b]: we just have to generate y ∈ [0, 1] and make x = a + (b − a)y.
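As a minimal usage sketch of ggub and of this mapping (the seed and the interval limits a and b are arbitrary choices):

      program uniformab
      real*8 dseed
      real*4 r,a,b,x
      integer i
      dseed=173211d0              ! user-provided seed
      a=-3.0                      ! interval limits [a,b]
      b=5.0
      do i=1,5
        call ggub(dseed,r)        ! r uniform in [0,1]
        x=a+(b-a)*r               ! x uniform in [a,b]
        write(*,*) x
      end do
      end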

2.2. How to generate random numbers distributed according to a general f(x)

There are different methods to obtain random numbers distributed according to a general probability function f(x). We will review three of them.

• Acceptance–rejection method. This is not a very efficient method, since it involves generating two random numbers to obtain a single number distributed according to f(x). We limit ourselves to stating the procedure, without discussing the basis of the algorithm. The method relies on the fact that, if we generate pairs of uniform random numbers x and y, the subset of values

{x : y ≤ f(x)}

is distributed according to f(x). The steps are:

1. we generate a uniform number x in [a, b] and a uniform number y in [0, c], where c is any upper bound of f(x)

2. if y ≤ f(x), we accept x; otherwise x is rejected

3. we go to step 1
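A minimal sketch of these steps, assuming (for illustration only) the normalised distribution f(x) = 2x on [0, 1], whose maximum provides the bound c = 2:

      program arej
      real*8 dseed
      real*4 x,y
      integer i,nacc
      dseed=98765d0
      nacc=0
      do i=1,100
        call ggub(dseed,x)        ! step 1: x uniform in [a,b] = [0,1]
        call ggub(dseed,y)
        y=2.0*y                   !         y uniform in [0,c], c = max f = 2
        if (y.le.2.0*x) then      ! step 2: accept x if y <= f(x) = 2x
          write(*,*) x            ! accepted x is distributed according to f
          nacc=nacc+1
        end if
      end do                      ! step 3: go back to step 1
      write(*,*) 'accepted:',nacc
      end

Note that, on average, only half of the attempts are accepted for this particular f(x), which illustrates the inefficiency mentioned above.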


• Inversion method. This method uses the cumulative function

P(x) = ∫_a^x dx′ f(x′),   P(a) = 0,  P(b) = 1

Since P(x) is a monotonically increasing function, it can be inverted to give x = P⁻¹(y). Then, if y is uniformly distributed in the interval [0, 1], x is distributed according to the probability function f(x) in the interval [a, b]. The feasibility of the method relies on the possibility of explicitly obtaining the inverse function of P(x) (we know it can be done, but it may be necessary to do it numerically, which would slow down the process considerably and be a disadvantage from a computational point of view).
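A minimal sketch for a case where the inversion is explicit: the exponential distribution f(x) = λe^(−λx) on [0, ∞), for which P(x) = 1 − e^(−λx) and hence x = −log(1 − y)/λ (the value of λ is an arbitrary choice):

      program inv
      real*8 dseed
      real*4 r,x,alam
      integer i
      dseed=24601d0
      alam=1.5                    ! rate parameter lambda
      do i=1,10
        call ggub(dseed,r)        ! r plays the role of y, uniform in [0,1]
        x=-log(1.0-r)/alam        ! x = P**(-1)(y) is exponentially distributed
        write(*,*) x
      end do
      end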

• Method of Metropolis et al. This is a very general and elegant method, although it also has some disadvantages. The method is of such importance in statistical and condensed-matter physics that it deserves to be given more time than the others. Again, the theoretical basis of the method, the theory of Markov chains, will not be discussed at all; we will focus on the algorithm and on a few tricks that have to be followed in order to implement it efficiently.

The method involves generating a sequence of numbers,

x[0], x[1], x[2], ...

that, asymptotically, are distributed according to the probability function f(x). To achieve this, we begin with a seed, x[0], and obtain from it a test number, which is accepted or not as a new member of the sequence with some probability. The process is repeated as many times as necessary. In each step n, the value x[n+1] is obtained from x[n] using the following algorithm:

– we choose a test number as

x[t] = x[n] + ζ∆x

where ζ ∈ [−1, 1] is a uniform random number, and ∆x a constant number.

– This value is accepted as a new member of the sequence of random numbers with probability

r = f(x[t]) / f(x[n])

which means: we draw a random number ξ ∈ [0, 1], and

if r > ξ, the test value is accepted, i.e. x[n+1] = x[t]

if r < ξ, the test value is not accepted, i.e. x[n+1] = x[n]

A few important points to bear in mind about the algorithm:

– The method generates numbers distributed according to f(x), but only asymptotically. Therefore, we need a previous warm-up period, long enough to approach the asymptotic regime, and it is only after this initial period that the random numbers can be taken as acceptable.


– The value of ∆x is usually adjusted so that approximately 50% of the tested numbers are accepted. This ensures a quick convergence to the asymptotic regime (note that rejections are as necessary as acceptances!).

– The workings of the method rely only on the ratio r = f(x[t])/f(x[n]). In particular, r does not depend on the normalisation of the distribution function (which is obviously constant). It is this feature that makes the Metropolis et al. algorithm so useful in applications to statistical and condensed-matter physics.

– The Metropolis et al. algorithm may be easily extended to more than one random variable, x = (x1, x2, ...). In this case we obtain a sequence of vectors x[0], x[1], ..., and generate test vectors as x[t] = x[n] + ζ∆x, following exactly the same steps as in the case of a single variable.

As an example, we apply the method to obtain a sequence of random numbers with distribution function f(x) ∝ exp(−x⁴) in the infinite interval (−∞, ∞). This is clearly a difficult or utterly impossible task for the methods mentioned above: the inversion method does not apply, since the cumulative function cannot be explicitly calculated, let alone inverted, and the acceptance–rejection method is inefficient.

Consider the function

f(x) = c e^(−x⁴),

where c is a constant such that

∫_{−∞}^{∞} dx f(x) = 1.

The value of c does not need to be known to implement the Metropolis et al. algorithm; however, we will calculate c in order to compare the results with the normalised distribution function. The normalisation integral can be expressed in terms of the Γ function by making the substitution x⁴ = t:

∫_{−∞}^{∞} dx e^(−x⁴) = 2 ∫_0^∞ dx e^(−x⁴) = (1/2) ∫_0^∞ dt t^(−3/4) e^(−t) = (1/2) Γ(1/4) = 1.8128...

A FORTRAN code that implements the algorithm for this problem follows.

001. !
002. ! P2: Metropolis et al. algorithm
003. ! for f(x)=c exp(-x**4)
004. !
005. parameter (n=1000)
006. real*8 dseed,his(-n:n),h,x,rr,xt,amp,darg,anorma
007. real*4 r
008. integer i,m,j,k,n
009. h=0.1d0
010. dseed=53211d0
011. amp=0.2d0
012. m=10**6
013. nacc=0
014. do i=-n, n
015. his(i)=0
016. end do
017. x=0.2
018. do j=1, m
019. call ggub(dseed,r)
020. xt=x+(2*r-1)*amp
021. darg=xt**4-x**4
022. rr=dexp(-darg)
023. call ggub(dseed,r)
024. if(dble(r).lt.rr) then
025. x=xt
026. nacc=nacc+1
027. end if
028. k=dnint(x/h)+1
029. if(k.gt.n.or.k.lt.-n) stop 'k out of range'
030. his(k)=his(k)+1
031. end do
032. anorma=0d0
033. do k=-n, n
034. anorma=anorma+his(k)
035. end do
036. do k=-n, n
037. write(*,*) (k-1)*h,his(k)/(anorma*h)
038. end do
039. end

The histogram is stored in the vector his. The step interval is h, set to 0.1 in line 009, and the amplitude ∆x is set to 0.2 in line 011 (in the variable amp). The number of Monte Carlo steps is set to 10⁶ in line 012, and in lines 014-016 the histogram is reset (since it is an accumulator, it is convenient to set it to zero at the beginning; note that the bins run over negative indices as well). Monte Carlo steps are done in lines 018-031. First, a test value xt (line 020) is chosen, and the probability ratio r (here rr) is calculated in line 022. The acceptance criterion is applied in line 024, and lines 025-026 deal with the case where the move is accepted, updating the value of the random variable and adding unity to the number of accepted moves (nacc). Note that, after line 027, x contains the new value, if it was accepted, or the old one otherwise, and that in either case the value of x is accumulated in the histogram (line 030). After all m Monte Carlo steps have been done, lines 032-035 calculate the norm of the histogram (in order to normalise it), and in lines 036-038 the results are dumped to the screen.

Fig. 3 shows the resulting histogram. The value of ∆x (amp in the code) can be adjusted so that the acceptance percentage oscillates about 50%. This can be done in the warm-up period, by computing the acceptance rate along the simulation run and changing ∆x according to how this rate varies: if it is less than 50%, ∆x is decreased, thus increasing the acceptance rate, and the other way round. This adjustment is not implemented in the present code.
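A minimal sketch of such an adjustment, reusing the Metropolis loop of code P2 (the 45-55% band, the block length nchk and the 5% rescaling factors are arbitrary illustrative choices, not taken from the course codes):

      program warmup
      real*8 dseed,x,xt,amp,darg,rr,rate
      real*4 r
      integer j,nchk,nacc,nwarm
      dseed=53211d0
      x=0.2d0
      amp=0.2d0                     ! initial trial amplitude
      nchk=1000                     ! check the acceptance rate every nchk steps
      nwarm=10**5                   ! length of the warm-up period
      nacc=0
      do j=1,nwarm
        call ggub(dseed,r)
        xt=x+(2*r-1)*amp            ! trial move, as in code P2
        darg=xt**4-x**4
        rr=dexp(-darg)
        call ggub(dseed,r)
        if(dble(r).lt.rr) then
          x=xt
          nacc=nacc+1
        end if
        if(mod(j,nchk).eq.0) then
          rate=dble(nacc)/dble(nchk)
          if(rate.lt.0.45d0) amp=amp*0.95d0  ! too few acceptances: smaller steps
          if(rate.gt.0.55d0) amp=amp*1.05d0  ! too many acceptances: larger steps
          nacc=0                    ! reset the counter for the next block
        end if
      end do
      write(*,*) 'adjusted amplitude:',amp
      end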


Figure 3: Histogram for the distribution function f(x) = c e^(−x⁴), x ∈ (−∞, ∞), as obtained from the Metropolis et al. algorithm, for (a) N = 10³, (b) N = 10⁴ and (c) N = 10⁷ random numbers.

In the figure, calculations are presented for three values of the number of steps, N = 10³, 10⁴ and 10⁷. In the first case the distribution is still far from the real distribution; we would still be in the warm-up period. At least an order of magnitude more steps are necessary to enter the asymptotic regime. We can therefore conclude that at least 10⁴ steps are needed for the warm-up period; these initial steps are not to be included in the production period, in which we can use the sequence of random numbers to do whatever we intend to do with them (construct a histogram, as in the present case, evaluate an integral, or something else). Of course, if we do not know the real distribution function (which is normally the case), other criteria must be used to ascertain whether the asymptotic regime has been reached or not.

2.3. Applications in statistical physics

Problems in statistical and condensed-matter physics are formulated in terms of models that generally involve a huge number of independent variables. There are two basic classes of problems, thermal and non-thermal, which normally require different treatments. Non-thermal problems are generally easier to deal with, since the Boltzmann distribution associated with a temperature is not involved. We will begin with a few examples of non-thermal problems, such as the random walk model in a number of versions and the percolation model, with a view to showing some typical techniques involving the generation of uniform random numbers.

2.3.a. Random walk (RW)

This model is used to describe a number of problems in statistical physics; for example, it can be used to characterise Brownian motion, or the global properties of a polymer macromolecule in solution in the limit of high flexibility. Although the model can be formulated in many different ways, we will use a discrete two-dimensional square lattice.

Figure 4: Central point (origin), marked with '0', from which a random walk is generated as a sequence of N steps. In the first step a random direction (either 1, 2, 3 or 4) is chosen; the new point is used to generate another move in the same way. After N steps we get a chain of N segments: a random walk of N steps.

Consider the lattice represented in Fig. 4. We start from an origin, set at some particular node of the lattice and labelled '0' in the figure. We generate a random walk (RW) of N steps by first defining the four unit vectors

v1 = (+1, 0), v2 = (0,+1), v3 = (−1, 0), v4 = (0,−1)

and then using the following algorithm:

1. set r_0 = (0, 0) and n = 0

2. choose an integer random number m_n from the set {1, 2, 3, 4}

3. make r_{n+1} = r_n + v_{m_n} and increase n by one

4. if n = N, set R = r_n; otherwise go back to step 2

The vector R connects the origin with the last point generated in the random walk. Clearly, if we generate two different random walks, the vector R will be different. Fig. 5 shows three different realisations of a random walk with N = 500 steps.


For a lattice with coordination number (number of neighbours) z, the number of different N-step random walks is

Z_N^RW = z^N

If the random walk were a representation of a polymer chain, Z_N^RW would be the partition function of the polymer (assuming the configurations of the polymer had zero energy).

Figure 5: Three realisations of a random walk with N = 500 steps.

Let us now look at the statistical behaviour of the distance between the origin and the last point generated, i.e. the end-to-end distance |R| of the random walk. We would like to calculate the average ⟨|R|²⟩ over a large enough number of RW chains. We have:

⟨R²⟩ = ⟨|R|²⟩ = ⟨|Σ_{n=1}^N v_{m_n}|²⟩ = ⟨Σ_{n=1}^N |v_{m_n}|²⟩ + ⟨Σ_{n=1}^N Σ_{p=1, p≠n}^N v_{m_n} · v_{m_p}⟩

By definition |v_{m_n}|² = 1 and, since moves are independent and therefore uncorrelated,

⟨Σ_{n=1}^N Σ_{p=1, p≠n}^N v_{m_n} · v_{m_p}⟩ = 0

We then have

⟨R²⟩_RW = N,   r_N ≡ √⟨R²⟩_RW = √N

where r_N is a way to define an end-to-end distance. This result holds in the limit N → ∞.

The following code generates a number of random walks of different lengths.

001. !
002. ! P3: Random walk
003. !
004. implicit real*8(a-h,o-z)
005. real*8 dseed
006. real*4 r
007. dimension i(4),j(4)
008.
009. data i/1,0,-1,0/
010. data j/0,1,0,-1/
011.
012. dseed=51323d0
013. Ntot=20
014. M=10**6
015. ar2=0
016.
017. do l=1,M
018. x=0
019. y=0
020. do n=1,Ntot
021. call ran (dseed,r)   ! uniform random number in [0,1]
022. k=4*r+1              ! random direction from {1,2,3,4}
023. x=x+i(k)
024. y=y+j(k)
025. end do
026. r2=x**2+y**2
027. ar2=ar2+r2
028. end do
029. write(*,'(''N, M, <r2>='',2i8,f12.5)') Ntot,M,ar2/M
030.
031. end

The vectors i(4), j(4) give the xy coordinates of the four possible directions the RW can take from a given lattice site. The length of the RW chain is stored in the variable Ntot. The variable M is the number of chain configurations generated. ar2 is an accumulator for the squared end-to-end distance of the RW, and is set to zero on line 015. The main loop, over different realisations of the RW, is started on line 017. In lines 020-025 a particular chain is generated. The square of the end-to-end distance of that chain is then computed on line 026, and accumulated on the following line. Results are then presented on the screen.

With this code, or with slight modifications thereof, the following results were obtained. Fig. 6(a) represents the value of ⟨R²⟩/N with respect to the number of chains generated, for N = 20. It can be seen that the results tend to unity as M, the number of chains, increases. In Fig. 6(b) the value of ⟨R²⟩ is plotted with respect to N, the length of the RW. We can also see how the results confirm the theoretical prediction that ⟨R²⟩ increases linearly with N (the continuous line has slope one).


Figure 6: (a) Ratio of ⟨R²⟩ to the RW length N with respect to the number of chains generated. The ratio tends to unity as the latter number increases. (b) ⟨R²⟩ as a function of chain length; the number of chains generated was 10⁶ for each value of N. The continuous line is a straight line with slope equal to one.

2.3.b. Modified random walks

The random walk is a convenient model when we would like to describe step conduction in solids, diffusion processes in lattices, etc. But in applications to polymer physics the model presents some shortcomings, namely:

• a polymer chain cannot turn back on itself

• a polymer chain cannot intersect itself

Both of these problems stem from the fact that the random walk model neglects the excluded volume between the units that make up the polymeric chain. The non-reversal random walk (NRRW) model corrects for the first problem, by first defining

v_{n±4} = v_n

and then modifying step 2 of the RW algorithm for n ≥ 1:

1. make r_0 = (0, 0) and n = 0

2. choose an integer random number m_n from the set {m_{n−1} − 1, m_{n−1}, m_{n−1} + 1}

3. make r_{n+1} = r_n + v_{m_n} and increase n by one

4. if n = N, set R = r_n; otherwise go to step 2

The partition function is now

Z_N^NRRW = (z − 1)^N

To avoid the second problem, we can introduce the so-called self-avoiding random walk (SARW) model, which does not generate configurations where the chain intersects itself. The following additional condition is added in step 3 of the NRRW algorithm:

3′. if the node r_{n+1} has already been visited, the process stops and we start over again.

The partition function of the SARW model is much more complicated; actually, an analytic expression is known only in the limit N → ∞:

Z_N^SARW → N^(γ−1) z_eff^N   (N → ∞),   z_eff ≤ z − 1

where γ is a critical exponent and z_eff is the effective coordination number, neither of which can be calculated exactly except in some particular cases; for instance, in one dimension z_eff = z − 1, with z = 2. Once again, this is a demonstration of the importance of numerical methods in problems of condensed-matter physics.

As an illustration of the difference between the three models, RW, NRRW and SARW, Fig. 7 gives some examples of possible chains for each model.

Figure 7: Three possible chains for the RW, NRRW and SARW models. All cases were generated using a chain length N = 20. The labels indicate the sequence of nodes visited. In the SARW case the same sequence of random numbers as in the NRRW case was used, but the process stopped at the 12th step because the path would have crossed itself.

When N is large, the SARW algorithm becomes very inefficient, since most of the attempts to make the chain grow are rejected. In this limit, the ratio of accepted to attempted chains can be calculated as

P_N = Z_N^SARW / Z_N^NRRW → N^(γ−1) [z_eff/(z − 1)]^N = e^{−N log[(z−1)/z_eff] + (γ−1) log N}   (N → ∞)

Thus, the probability of successfully growing a chain decreases exponentially. This exponential inefficiency makes the algorithm impractical for N ≃ 100 or larger (see Fig. 8(a) for actual results from Monte Carlo simulation).

There are no analytical expressions for ⟨R²⟩ in the NRRW and SARW models, due to the fact that moves are highly correlated and the mathematics of the problem becomes intractable. However, it is very easy to perform computer simulations. For instance, from Monte Carlo simulations it is known that

⟨R²⟩_SARW = N^(2ν),   ν ≃ 0.59   (1)

where ν is another critical exponent. Fig. 8(b) shows results from Monte Carlo simulations using the code below; a power law with the correct exponent is obtained.

We now present a possible FORTRAN code for the SARW algorithm.

001. !
002. ! P4: Algorithm for the SARW model
003. !
004. implicit real*8(a-h,o-z)
005. real*4 r
006. dimension i(4),j(4)
007. dimension ic(-200:200,-200:200)
008. dimension xx(0:200),yy(0:200)
009. data i/1,0,-1,0/
010. data j/0,1,0,-1/
011.
012. dseed=263d0
013. Nchains=2000
014. do Ntot=5,60,5
015. mm=0
016. ar2=0
017. 3 n=0
018. x=0
019. y=0
020. ix=0
021. iy=0
022. do i1=-Ntot,Ntot
023. do j1=-Ntot,Ntot
024. ic(i1,j1)=0
025. end do
026. end do
027. xx(n)=0
028. yy(n)=0
029. 1 call ggub(dseed,r)
030. if(n.gt.0) then
031. kk=int(3*r)-1        ! kk uniform in {-1,0,1}
032. k1=k+kk
033. if(k1.gt.4) k1=k1-4
034. if(k1.lt.1) k1=k1+4
035. k=k1
036. else
037. k=4*r+1
038. end if
039. ix1=ix+i(k)
040. iy1=iy+j(k)
041. if(ic(ix1,iy1).eq.1) then
042. go to 3
043. end if
044. ic(ix1,iy1)=1
045. ix=ix1
046. iy=iy1
047. x=x+i(k)
048. y=y+j(k)
049. n=n+1
050. xx(n)=x
051. yy(n)=y
052. if(n.gt.Ntot) then
053. ar2=ar2+x**2+y**2
054. mm=mm+1
055. if(mm.gt.Nchains) then
056. ar2=ar2/Nchains
057. write(*,*) dlog(dble(Ntot)),dlog(ar2)
058. go to 5
059. end if
060. go to 3
061. end if
062. go to 1
063. 5 end do
064.
065. end

The strategy is to set up a square lattice centred at the origin, and to introduce an occupation matrix called ic, defined on the lattice nodes, which equals one if a given node has been visited and zero otherwise. In this particular code the number of chains generated is Nchains=2000, and the chain lengths Ntot vary from 5 to 60 (line 014 opens a do loop over chain lengths). mm is a variable that contains the number of chains generated for a particular length Ntot, while ar2 accumulates the squared end-to-end distance of each successfully generated chain (it is set to zero on line 016). The vectors xx, yy contain the lattice coordinates of all the points that belong to the chain (they are actually not used in the present code, but are stored for other purposes in expanded versions of the code). On lines 022-026 all the occupation variables are set to zero. The strategy is to start from the origin (lines 020-021, where ix, iy are the position of the current node, and lines 027-028), move so as to avoid visiting the previous point (this is done using the algorithm on lines 030-038), and then check whether the new node has already been visited (lines 041-043). If not, the occupation of the node is set to unity, and the coordinates of the node are stored. n contains the number of nodes in the chain being generated (line 049). Then it is checked whether the chain has already grown to the required length (line 052), and if so its end-to-end distance is accumulated. Line 055 checks whether the required number of chains of length Ntot has been reached. If this is the case, the average is computed and shown on the screen (lines 056-057).

Figure 8: Monte Carlo generation of SARW chains. (a) Percentage of rejected attempts to build a chain as a function of chain length. The continuous line is a fit to a Gaussian function. (b) Average squared end-to-end distance as a function of chain length; each point is an average over 2 × 10³ chains. The theoretical power law N^(2ν), with 2ν = 1.18, is represented as a continuous line.

Results are shown in Figs. 8(a)-(b). The percentage of rejected attempts to build a chain as a function of chain length N is shown in Fig. 8(a). We can see that, as the chains get longer, the efficiency of the algorithm decreases quite dramatically (the data points in the figure have been fitted to a Gaussian function, although for large N the function decreases exponentially). In Fig. 8(b) the value of ⟨R²⟩ is represented with respect to the number of steps N on a logarithmic scale, to better check that the power law, with the correct exponent 2ν ≃ 1.18, is obtained (we have to bear in mind, however, that only 2000 chains were generated for each value of N, so that the averages over chains are noisy, while the power law in Eqn. (1) is only correct in the limit N → ∞).

2.3.c. The percolation problem

Percolation is a simple geometrical problem, but its solution is far from trivial. The simplest form of percolation is the following: consider a square lattice with N sites, each of which is filled with probability p and left empty with probability 1 − p. We define a cluster of size l as a group of l occupied sites connected through nearest-neighbour bonds. Fig. 9 shows an example. Let n_l(p) be the number of clusters of size l when the occupancy probability of each site is p, and P(p) the fraction of occupied sites that belong to the largest cluster (i.e. the number of sites in the largest cluster with respect to the total number of occupied sites).

Figure 9: Example of a possible configuration in a percolation problem. The circles represent occupied sites, and the bonds between nearest neighbours have been indicated. In this case there is a percolating or spanning cluster that extends throughout the lattice, along with clusters of different sizes.

The single most important problem associated with percolation is the existence of a percolation transition with respect to p, which occurs when there appears a cluster that percolates, i.e. spans the whole system. The transition takes place at p = p_c, where p_c is a critical probability. A spanning or percolating cluster is a cluster with a size similar to that of the system, so that one can go from one side of the system to the other, in at least one direction, by just jumping from one neighbour to the next without leaving the cluster. In Fig. 10 the fraction of occupied sites that belong to the largest cluster, P(p), is shown as a function of p, as obtained from a simulation on a (two-dimensional) square lattice. If the system is finite (N < ∞), P(p) is a monotonically increasing function of p, but it exhibits an abrupt change about p_c, which can be associated with the percolation transition of the infinite system, N → ∞. In the infinite system, by contrast, P(p) increases abruptly from zero with infinite derivative at p = p_c, and then slowly increases up to unity (in which limit the percolating cluster spans the whole system).

There is a simple relation between P(p) and n_l(p):

Σ_{l (finite clusters)} l n_l(p) / N + p P(p) = p


The percolation problem can be analysed easily by means of the Monte Carlo method: we fill each site of the lattice with probability p. To do this, we visit the sites sequentially (one after the other) and, for each site, we generate a uniform random number ζ in [0, 1]. If ζ < p, we occupy the site with an atom; otherwise we leave it empty. After visiting the N sites we will have generated one configuration. By repeating the process we can generate as many configurations as we wish, say M configurations, all of them corresponding to the same value of p. The complicated problem is how to identify the clusters of each configuration; there are a number of different algorithms to perform this operation. Assuming we have been able to do this, the final step is to average over the M configurations, which gives P(p). It is known from Monte Carlo simulation that, for the (two-dimensional) square lattice, p_c ≃ 0.59.
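A minimal sketch of the configuration-generation step just described (the lattice size and seed are arbitrary choices; the cluster identification itself is left to code P5):

      program percol
      parameter (n=50)            ! lattice of n x n sites
      integer is(n,n)
      real*8 dseed
      real*4 r,p
      dseed=13579d0
      p=0.59                      ! occupancy probability
      do i=1,n
        do j=1,n
          call ggub(dseed,r)      ! uniform random number in [0,1]
          is(i,j)=0               ! empty site
          if(r.lt.p) is(i,j)=1    ! occupied site
        end do
      end do
!     here the clusters of this configuration would be identified
!     (e.g. with code P5) and the largest-cluster fraction accumulated
      end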

Figure 10: Qualitative behaviour of P(p) for an infinite system (continuous line) and a finite system (dotted line).

The percolation problem is rather similar to a second-order phase transition. The following identifications are necessary:

P(p) → order parameter
G(p) = Σ_l n_l(p) → free energy

Then the problem has critical and scaling properties. For instance, singularities appear at p = p_c, which take the form of power laws:

P(p) ∼ (p − p_c)^β,   G(p) ∼ (p − p_c)^(2−α),

where α and β are critical exponents. It is clear that any quantitative analysis of the percolation problem rests on the use of the computer. The problem is interesting not just from a theoretical point of view, but also in practical applications; for example, in diluted magnetic alloys, in conduction problems in disordered and inhomogeneous media and, in general, in all non-thermal problems that depend on the geometry of random clusters (forest fires, petroleum extraction, etc.).

A crucial step in the percolation problem is the analysis of clusters. There are many algorithms to identify clusters on a lattice, some more efficient than others. Code P5 on the course web page is an example.

2.3.d. Numerical integration with many variables

Random-number generation can easily be used to estimate the value of integrals over multidimensional volumes. It is precisely this problem, that of averaging in multidimensional volumes, that statistical mechanics is primarily concerned with. Traditional numerical integration methods (based on the exact integration of families of orthogonal or non-orthogonal polynomials) cannot be used in multidimensional spaces. In order to understand the reason, let us define a square lattice in D dimensions. The number of sides of this hypercube is 2D. Let n be the number of nodes along any of the D axes; then we will have 2D n^(D−1) nodes on the surface of the hypercube, but n^D nodes in the bulk (the remaining space of the hypercube, i.e. not counting the surface). The ratio of the two is

2D n^(D−1) / n^D = 2D/n → ∞   (D → ∞)

Therefore the method becomes highly inefficient since, as D increases, nodes on the surface of the hypercube are sampled more and more often relative to nodes in the bulk.

The Monte Carlo method does not present this difficulty. Suppose we would like to numerically estimate the value of the D-dimensional integral

∫_{V^D} g(x) d^D x

where x ≡ (x_1, x_2, ..., x_D) is a vector in D dimensions, d^D x the corresponding volume element, and V^D is some particular hypervolume in D dimensions. Then the Monte Carlo method writes

∫_{V^D} g(x) d^D x ≃ (W^D / M) Σ_{i=1}^M g(x_i) ζ_i

where W^D ⊇ V^D is a volume that contains the volume V^D (in particular they may be the same volume), M is the number of uniform random points x_i that we generate in W^D, and ζ_i is a number such that

ζ_i = 1 if x_i ∈ V^D, and ζ_i = 0 otherwise

If W^D = V^D (i.e. we are able to sample in V^D directly) then ζ_i = 1 always. This method may be regarded as a kind of rectangle-based method, where the rectangles have random sizes.
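A minimal sketch under stated assumptions: it takes g(x) = |x|², V^D the unit ball in D = 4 dimensions, and W^D = [−1, 1]⁴ as the enclosing hypercube; all of these are illustrative choices (the exact value of this integral is π²/3 ≈ 3.290).

      program mcint
      integer nd,m,i,k
      real*8 dseed,xk,r2,sum,w
      real*4 r
      dseed=24680d0
      nd=4                        ! dimension D
      m=10**6                     ! number of sampling points
      w=2d0**nd                   ! volume of the hypercube W = [-1,1]**nd
      sum=0d0
      do i=1,m
        r2=0d0
        do k=1,nd
          call ggub(dseed,r)
          xk=2d0*dble(r)-1d0      ! one coordinate, uniform in [-1,1]
          r2=r2+xk**2
        end do
        if(r2.le.1d0) sum=sum+r2  ! zeta=1 inside the unit ball; g(x)=|x|**2
      end do
      write(*,*) 'MC estimate:',w*sum/dble(m)
      end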


This method enjoys great advantages when the integrand includes a probability distribution function, i.e. a positive-definite, normalised function f(x). To see this easily, we consider a one-dimensional problem. Consider the integral

I = ∫_a^b dx f(x) g(x)   (2)

where f(x) is a probability distribution function. We could evaluate the integral by simply writing

I = ∫_a^b dx f(x) g(x) ≃ [(b − a)/M] Σ_{i=1}^M f(x_i) g(x_i)

where {x_i} are uniformly distributed numbers in the interval [a, b]. But let us make use of the cumulative function P(x):

y = P(x) = ∫_a^x dx′ f(x′)  →  dy = f(x) dx

If we make the change of variables x → y, we have:

I = ∫_a^b dx f(x) g(x) = ∫_0^1 dy g(P⁻¹(y))

But now, if we remind ourselves that, according to the inversion method, P⁻¹(y) is a random variable distributed according to f(x) provided y is uniformly distributed in [0, 1], we can write

I = ∫_a^b dx f(x) g(x) ≃ (1/M) Σ_{j=1}^M g(x_j)   (3)

where {x_j} are now a set of numbers distributed according to the function f(x) in the interval [a, b]. We will make use of this result shortly. This latter result may be directly extended to multidimensional integrals with an integrand weighted by a distribution function.
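A minimal sketch of Eq. (3), reusing the Metropolis chain of code P2 to sample f(x) = c exp(−x⁴) and taking g(x) = x²; the choice of g, the warm-up length and the amplitude are illustrative assumptions (the exact value of this particular integral is Γ(3/4)/Γ(1/4) ≈ 0.338):

      program impsam
      real*8 dseed,x,xt,amp,darg,rr,sum
      real*4 r
      integer j,m,nwarm
      dseed=53211d0
      x=0.2d0
      amp=0.2d0                   ! trial amplitude, as in code P2
      nwarm=10**4                 ! warm-up steps, discarded (cf. Fig. 3)
      m=10**6                     ! steps used in the average
      sum=0d0
      do j=1,nwarm+m
        call ggub(dseed,r)
        xt=x+(2*r-1)*amp          ! trial move
        darg=xt**4-x**4
        rr=dexp(-darg)            ! ratio f(xt)/f(x); the constant c cancels
        call ggub(dseed,r)
        if(dble(r).lt.rr) x=xt    ! Metropolis acceptance
        if(j.gt.nwarm) sum=sum+x**2   ! accumulate g(x)=x**2, Eq. (3)
      end do
      write(*,*) 'I =',sum/dble(m)
      end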

2.3.e. Thermal problem: Ising model in 2D

Thermal problems are usually more complicated technically. The reason is related to the central problem of statistical physics. In a system with many degrees of freedom (for N particles, say 3N degrees of freedom, save constraints) in contact with a thermal bath at temperature T, the probability of each state is given by the normalised Boltzmann factor

P ∝ e^(−βH(q,p))

where

q ≡ {q_1, q_2, ..., q_{3N}},   p ≡ {p_1, p_2, ..., p_{3N}}

and β = 1/kT. We know from statistical physics that this distribution is highly peaked about a few typical configurations; in fact, one way to obtain the Boltzmann factor theoretically is to assume that only one configuration contributes, namely the configuration that maximises the degeneracy.

Figure 11: A possible configuration of a 2D–Ising model of 1/2 spins on a square lattice.

In a finite system the distribution is wider, but even then its width is narrow compared to the mean. Uniform sampling will thus produce configurations that, in most cases, have a negligible statistical weight (i.e. a small Boltzmann factor), which gives rise to very inefficient (or even hopeless!) simulations.

To illustrate this important point, we will use an Ising model in 2D. Remember that the Ising model is formulated on a lattice, in our case chosen as a square lattice, on every node of which we place a 1/2-spin, pointing either up or down (Fig. 11). The variable associated with the i-th node is s_i = ±1, with +1 meaning spin up and −1 spin down. The Hamiltonian (energy) of the system is

H({s_i}) = −J Σ_{nn} s_i s_j − B Σ_i s_i

where J (here assumed positive) is a (ferromagnetic) coupling constant, and B an external magnetic field. 'nn' stands for nearest neighbours, i.e. each spin interacts with its four neighbours (up, down, right, left). {s_i}, hereafter labelled simply s, represents a configuration of the N spins, i.e. s = {s_1, s_2, ..., s_N}. At zero magnetic field, B = 0, this model presents a phase transition from an ordered phase, with average magnetisation per spin m ≠ 0 for T < T_c, to a disordered phase where m = 0 above T_c. It is a second-order phase transition, with a critical point at (B_c, kT_c/J) = (0, 2.269...).

Our aim is the computation of thermal averages of quantities that can be defined for each configuration s of the N spins; two obvious examples are the energy and the magnetisation. Let A(s) be one such quantity. Then the statistical-mechanical expression for the thermal average is

⟨A⟩ = Σ_s A(s) P(s) = Σ_s A(s) e^(−βH(s)) / Σ_s e^(−βH(s))   (4)

In principle, the sums Σ_s extend over all possible configurations of the N spins, which are 2^N in number. Even for very small N, the number of configurations is astronomically large. For example, for N = 10⁴ (a rather modest system on macroscopic grounds),

2^N = 2^(10⁴) ≃ 10^3010.3

It is therefore out of the question to sum over all configurations, and we must choose a reduced set. The obvious solution is to select at random the configurations to be included in the sum but, in view of the presence of the weighting factor P(s), an importance-sampling scheme is in order. Using the same steps that led from (2) to (3), the average (4) would be

⟨A⟩ ≃ (1/M) Σ_s A(s),   (5)

where M is the number of configurations generated. If, instead, we were to choose a set of uniformly random configurations, we would find that most of them have negligible probability P(s) and contribute negligibly to the sum (4). Since this is an interesting and illustrative point, let us digress a little in this direction.

We will perform the above average, Eqn. (4), by two methods:

• Uniform sampling. Here we obtain configurations by assigning spins to the nodes of the lattice using the following algorithm:

1. We choose a uniform random number ζ ∈ [0, 1]

2. If ζ < 0.5, we set s_i = +1; otherwise, we set s_i = −1

3. We repeat for all spins i = 1, 2, ..., N

In this way we generate completely random configurations.

• Importance sampling. The algorithm to be used here will be the Metropolis et al. version, already introduced as a method to generate random deviates distributed according to a particular probability distribution. The particular implementation for the problem at hand will be explained in detail later. For the moment we limit ourselves to presenting the results in comparison with those from the previous uniform-sampling scheme.

Fig. 12 shows two energy histograms, f(e), with e = E/NJ the reduced energy per spin, obtained with the two sampling methodologies above. The histograms are constructed by computing the energy of each sampled configuration and assigning it to a given subinterval along the energy axis (the distributions are normalised to unity).

The reduced temperatures were set to kT/J = 2.1 (upper panel) and 2.5 (lower panel), which happen to be respectively below and above the critical temperature T_c of the phase transition (kT_c/J ≃ 2.269). In the first case the equilibrium phase is therefore an ordered phase with non-zero magnetisation, whereas the second corresponds to a phase with zero magnetisation.

Figure 12: Energy distribution of a two-dimensional Ising model at two different reduced temperatures (indicated as labels), below (upper panel) and above (lower panel) the critical temperature (kT_c/J ≃ 2.269). The energy e = E/NJ is expressed per spin and in units of the coupling constant J. The shaded histogram was obtained by the uniform sampling method; the other corresponds to the true thermal distribution as obtained from Monte Carlo simulation using importance sampling.

The histogram based on the uniform sampling method has zero mean and a small variance (as compared to the mean), whereas that based on importance sampling has a mean correctly located at a negative value (we have to bear in mind that, in the perfectly ordered state, at T = 0, the reduced energy per spin would be E/NJ = −2). The variances (i.e. widths) of the two distributions are a little different, although both should be ∼ 1/√N (here N = 20 × 20, which is a small number). What is remarkable is the negligible overlap between the two distributions, the thermal and the uniform one. Uniform sampling is not generating a single significant configuration at all!

In the case where the temperature is above the phase-transition temperature (T > T_c), the same situation holds, although this time the thermal distribution extends over a much wider range of magnetisations, the mean being zero, as is the case with the uniform distribution. However, the latter still misses a lot of configurations with important statistical weights.

As far as the distribution of magnetisation is concerned, Fig. 13 shows the distributions as obtained from the two sampling techniques. Note that the temperature kT/J = 2.1 is below the critical temperature, and therefore two (ideally symmetric) peaks appear, located at values of magnetisation differing in sign (due to the short length of the Monte Carlo simulation with importance sampling, the peaks are not completely symmetric). Again, the histogram obtained from the uniform-sampling method does not overlap with the real, thermal distribution.

Figure 13: Magnetisation distribution of a two-dimensional Ising model at two different reduced temperatures (indicated as labels), below (upper panel) and above (lower panel) the critical temperature (kT_c/J ≃ 2.269). The magnetisation m = M/N is given per spin. The shaded histogram was obtained by the uniform sampling method; the other corresponds to the true thermal distribution as obtained from Monte Carlo simulation using importance sampling.

The average magnetisation obtained from uniform sampling would be zero, whereas we know that, since we are below the critical temperature, the equilibrium magnetisation is non-zero. Note that, strictly speaking, the importance-sampling distribution also has zero mean, because it is bimodal and symmetric, but the correct average magnetisation is the mean of one of the peaks¹! This example demonstrates that it is critical to use importance sampling in statistical simulations of condensed-matter systems.

¹For temperatures T well below the critical temperature this effect does not appear, since any reasonably long simulation, even one long enough to correctly explore configurations with, say, positive magnetisation (this will depend on the initial configuration), will not be so long as to explore states with the opposite magnetisation, and correct means will result.

2.4. Monte Carlo code for the 2D Ising model

Now that we have introduced the Ising model in 2D and discussed the significance of using an importance-sampling technique to generate configurations for performing the averages, we will discuss how to implement the Metropolis et al. algorithm for the Ising model. We repeat that this method is the only feasible one in statistical mechanics, since it does not require knowledge of the partition function (the norm of the Boltzmann weight), which cannot be calculated.

Before discussing the code, we give a few general comments on the basics of the method, which can also be extended to other applications.

• Initialisation. Ideally one should start from an initial configuration which is representative of the thermodynamic conditions.

Figure 14: MC time evolution of the magnetisation per spin, M/N, at two different temperatures: kT/J = 1.8 (upper panel) and kT/J = 2.5 (lower panel). In both cases the starting configurations were chosen as all-up, random and all-down.

This is not always possible. In the Ising model, we may start from all-up, all-down, or random configurations. Clearly, if the temperature is below the critical temperature, the first two configurations are to be preferred. Fig. 14 (upper panel) shows how the magnetisation per spin evolves in 'MC time' or 'MC steps' (see later) for kT/J = 1.8: the initial random configuration, with very small magnetisation per spin, evolves towards configurations with high magnetisation, which are more representative of the equilibrium state, but it takes some time for the Monte Carlo algorithm to reach this situation (typically we have to wait for a relaxation time τ, which becomes longer as the temperature is decreased). All-up or all-down configurations, on the other hand, are more representative of the equilibrium state, since they have a higher Boltzmann factor, and they converge rapidly towards equilibrium. For kT/J = 2.5 (Fig. 14, lower panel), which corresponds to the disordered phase, it is now the all-up and all-down configurations that are not representative, so that they show a slow relaxation towards equilibrium (the initial random configuration also shows slow relaxation; this is due to the close proximity of the phase transition).

• Warming up. As we know, the Metropolis et al. algorithm generates correctly distributed configurations, but only asymptotically, so a warm-up period is needed. Depending on the 'quality' of the initial configuration, this period may be short or long (for example, we may look at how the magnetisation or the energy are distributed, and wait until Gaussian distributions are obtained).


• Trial moves. In the usual Monte Carlo method for the Ising model, spins are moved one at a time. Normally one visits the spins sequentially and flips the visited spin (from up to down or the other way round). Say we are on the i-th spin, with variable s_i. Then the change of energy involved in flipping the spin is (in units of J, at zero field)

∆E = 2 s_i Σ_{n.n.} s_j

where n.n. denotes the four neighbouring spins (right, left, top and bottom). The probability with which this single-spin move is accepted is taken as min[1, exp(−β∆E)]. After we have attempted to flip the N spins, we have completed one MC step, so that M steps involve M attempts per spin. Trial moves may involve more than one spin (for instance, blocks of spins). However, this methodology may not work, as the energy changes involved are proportional to the number of spins in the move, and the corresponding acceptance probability may be too small.

• Acceptance criterion. In the Ising model there is only one possible way to 'move' a spin. In other models (e.g. models with continuous variables) we have a range of possible moves to generate a 'test' value for the random variables. In this latter case we have to adjust the possible moves so as to have an acceptance ratio of approximately 50%, since this ratio optimises the rate at which equilibrium (i.e. the asymptotic regime) is reached.

001. !
002. ! P6: 2D-Ising model Monte Carlo with
003. ! importance sampling
004. !
005. parameter (n=20)
006. integer*1 is(n,n)
007. real*8 dseed
008. dimension histm(101),histe(101)
009.
010. data dseed /186761d0/
011. nsteps = 100000000
012. nblock=10
013.
014. n2 = n*n
015. T=2.1
016.
017. ama = 0
018. amaa = 0
019. ene = 0
020. enea = 0
021.
022. do i=1,101
023. histm(i)=0
024. histe(i)=0
025. end do
026.
027. do i = 1, n
028. do j = 1, n
029. is(i,j)=1
030. call ran (dseed,random)
031. if(random.lt.0.5) is(i,j)=-1 ! comment to set ordered state
032. ama = ama + is(i,j)
033. end do
034. end do
035.
036. do i = 1, n
037. ip = i + 1
038. if (ip .gt. n) ip = 1
039. do j = 1, n
040. jp = j + 1
041. if (jp .gt. n) jp = 1
042. ene = ene - is(i,j)*(is(ip,j)+is(i,jp))
043. end do
044. end do
045.
046. do k = 1, nsteps/nblock
047. do l = 1, nblock
048.
049. do i = 1, n
050. do j = 1, n
051. sij = is(i,j)
052. ip = i + 1
053. if (ip.gt.n) ip=1
054. im = i - 1
055. if (im.lt.1) im=n
056. jp = j + 1
057. if (jp.gt.n) jp=1
058. jm = j - 1
059. if (jm.lt.1) jm=n
060. de = 2*sij*(is(ip,j)+is(im,j)+is(i,jp)+is(i,jm))
061. call ran (dseed,random)
062. if (exp(-de/T) .gt. random) then
063. is(i,j) = -sij
064. ama = ama - 2*sij
065. ene = ene + de
066. end if
067. end do
068. end do
069.
070. end do
071.
072. enea = enea + ene
073. amaa = amaa + ama
074. im=(ama/n2+1)*0.5e2+1
075. histm(im)=histm(im)+1
076. ie=(ene/n2+2)*50.0+1
077. histe(ie)=histe(ie)+1
078. end do
079. write(*,'(''T= '',f5.3,'' M/N= '',f5.3,'' E/N= '',f6.3)')
080. >T, abs(amaa/(nsteps/nblock*n2)),enea/(nsteps/nblock*n2)
081.
082. open(3,file='histe.dat')
083. sum=0
084. do i=1,101
085. sum=sum+histe(i)
086. end do
087. do i=1,101
088. emm=2*(i-1)/100.0-2
089. write(3,'(2f9.3)') emm,histe(i)/sum*50.0
090. end do
091. close(3)
092.
093. open(3,file='histm.dat')
094. sum=0
095. do i=1,101
096. sum=sum+histm(i)
097. end do
098. do i=1,101
099. amm=2*(i-1)/100.0-1
100. write(3,'(2f9.3)') amm,histm(i)/sum*50.0
101. end do
102. close(3)
103.
104. end

Values of the spins are stored in the two-index matrix is(n,n), where n is the number of rows and columns of the lattice (so that n2, defined on line 014, is the number of spins). The temperature T is set on line 015. Accumulators for magnetisation and energy, amaa and enea, are set to zero, as well as the variables for the magnetisation and energy histograms (lines 022–025). On lines 027–034 the spins are assigned, either in the all-up configuration or randomly (this is changed simply by commenting or uncommenting line 031), and the magnetisation of this initial configuration is computed and stored in the variable ama. Lines 036–044 compute the energy of this configuration. On lines 046 and 047 the loop over MC steps is initiated. The loop is divided into blocks of nblock steps, so that there are nsteps/nblock blocks (nsteps is the total number of MC steps in the simulation). This is done so that quantities can be accumulated not every single MC step but every nblock steps, to avoid correlation effects between configurations. On lines 049–068 spins


are chosen sequentially, one at a time, and flipped from up to down or the other way round. The change of energy involved is computed in de, on line 060, and on line 062 the move is either accepted or rejected according to the Metropolis et al. criterion. If accepted, the magnetisation and energy of the new configuration are updated on lines 064 and 065, respectively. Lines 072–073 accumulate these quantities every block, and the histograms are built on lines 074–077. The end do instruction on line 078 closes the main loop over MC blocks. After that, results are written; whereas the temperature, magnetisation per spin and energy per spin are dumped on the screen (lines 080 and 081), the histograms are written to the files histe.dat and histm.dat for energy and magnetisation, respectively. Also note that the 'warm-up' stage is not included in this version of the code.

The results obtained from running this code are shown in Fig. 15. In Fig. 15(a) the evolution of the magnetisation per spin and the energy for a particular experiment (with an all–up initial configuration) is displayed. The temperatures are kT/J = 1.80 and 2.19. The second one is particularly interesting, as it is close to the critical temperature (but still below Tc), and it shows that the system visits configurations with positive and negative values of M/N. The system has been "caught in the act": in a matter of 30 MC steps all the spins are flipped, so that M/N changes sign. This behaviour is typical of states close to critical points.

Figure 15: (a) MC time evolution of magnetisation per spin, M/N, and energy per spin, E/N, at two different temperatures. (b) Phase diagram (M/N vs. T) and dependence of the energy per spin, E/N, on T.

Fig. 15(b) is the phase diagram (M/N vs. T). The number of MC steps used to obtain the magnetisation for each value of the temperature was 10⁷. We obtain a smooth function of the temperature: due to the finite size of our system (N = 20 × 20 spins) the discontinuity in the derivative of M/N at the critical temperature is smoothed out. The critical temperature can be obtained using some practical criterion (e.g. looking for the temperature at which M/N = 0.5), but there are rigorous ways to extract information from the MC data (for example, the method based on cumulants) and obtain more meaningful information, such as the value of Tc. The energy per spin, E/N, is also plotted in the figure. Note that it is a smooth function of temperature, and always negative, even in the


disordered phase; this means that there is some degree of local ordering even in this phase.

2.5. Model with continuous variables: hard spheres

Now we would like to illustrate the Monte Carlo technique in the context of a thermal problem with continuous degrees of freedom. We have chosen the hard–sphere model, since it is both simple and sufficiently important. The model has played a crucial role in the development of the statistical mechanics of fluids and solids and, as a consequence, it is a central model in condensed–matter physics.

The model approximates the interaction potential between two spherical molecules, φ(r), by

$$\phi(r)=\begin{cases}\infty, & r<\sigma\\ 0, & r>\sigma\end{cases}$$

σ is the diameter of the hard spheres; these are termed hard because they cannot penetrate each other. r is the distance between the centres of mass of the two spheres. The model is an attempt to approximate the repulsive part of the interaction potential in simple systems (spherically symmetric molecules, either in reality or effectively), for instance noble gases, some metals, CH4, NH3, ... It is an athermal model (i.e. thermodynamic properties depend trivially on temperature) since the corresponding Boltzmann factor does not depend on temperature T:

$$e^{-\beta\phi(r)}=\begin{cases}0, & r<\sigma \;\text{(spheres overlap)}\\ 1, & r>\sigma \;\text{(spheres do not overlap)}\end{cases}$$

Therefore, the only allowed configurations of N spheres are those where there is no overlap between the spheres; all these configurations have the same statistical weight, and possess zero energy.

Despite its 'trivial' appearance, the hard–sphere model contains highly non–trivial physics. The only relevant parameters are the density ρ = N/V and the pressure p. The former is more conveniently represented by the so–called packing fraction, the ratio of the volume occupied by the N spheres to the total volume V. If v = (4π/3) × (σ/2)³ = πσ³/6 is the molecular volume, then the packing fraction, η, is

$$\eta=\frac{Nv}{V}=\rho v=\frac{\pi}{6}\rho\sigma^3$$

In terms of η, the model has (Fig. 16):

• A fluid phase in the interval η ∈ [0, 0.494], with long–range positional disorder (there is, however, short–range order; this will be discussed later on).

• A crystalline phase, in the interval η ∈ [0.545, 0.740], with long–range positional order. The crystal is an fcc (face-centred cubic) lattice. Note that the maximum possible value of the packing fraction is 0.740.


Figure 16: Phase diagram of hard spheres in terms of the packing fraction.

• An amorphous (metastable) phase, in the interval η ∈ [0.59, 0.64], with long–range positional disorder but an extremely large viscosity coefficient and virtually zero diffusion.

There is a strongly first–order phase transition between the fluid and the crystal, at a reduced pressure p* ≡ pσ³/kT ≃ 8.9.

Before presenting the Monte Carlo algorithm as implemented for hard spheres, we introduce a method to calculate the pressure, along with two simple theories, applicable to the fluid and the crystal respectively, that describe in simple (albeit approximate) terms the equation of state of the system, i.e. the function p(ρ). In the theory for the crystal phase, calculations are most easily done using the (unweighted, i.e. uniform–sampling) Monte Carlo integration technique; in the theory for the fluid it may also be necessary in case refinements are required.

• Pressure and radial distribution function. The radial distribution function g(r) is a function of the distance between a pair of particles. It is a measure of the local order of the fluid. In practice it is computed as the mean number of particles around a given one at distance r, divided by the expected number should interactions be turned off (the latter is just given by the bulk density ρ). For a dense fluid, the function shows a strong first peak at distances ∼ σ, and a damped oscillatory structure tending to one as r increases. As the density is increased, this structure becomes sharper and sharper. Knowledge of g(r) for a fluid allows different thermodynamic properties to be computed. For example, the reduced pressure is given by

$$\frac{p}{\rho kT}=1-\frac{2\pi\rho}{3kT}\int_0^\infty dr\, r^3\phi'(r)\,g(r) \qquad (6)$$

where φ′(r) is the derivative of the potential with respect to r. This is valid for any spherically symmetric interaction potential. In particular, for the hard–sphere potential (where φ′(r) exhibits delta–type behaviour at r = σ), the above expression can be written in terms of the value of g(r) at r = σ (the so–called contact value):

$$\frac{p}{\rho kT}=1+\frac{2\pi\rho}{3}\,\sigma^3 g(\sigma^+)$$


In the simulation, g(r) is computed as a histogram; we will explain this later.
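As an illustration, here is a minimal sketch (not one of the course codes) of how the contact value can be extracted in practice. It assumes a histogram file with two columns, r and g(r), as produced later in this section by the hard-sphere code (file hs.gr, with σ = 1), and linearly extrapolates the first two bins above r = 1 down to contact:

      implicit real*8(a-h,o-z)
      dimension rv(2),gv(2)
      pi=4d0*datan(1d0)
      rho=0.8d0                      ! density used in the simulation
      m=0
      open(1,file='hs.gr')
    1 read(1,*,end=2) r,g
      if (r.gt.1d0 .and. m.lt.2) then
         m=m+1
         rv(m)=r
         gv(m)=g
      end if
      go to 1
    2 close(1)
c     linear extrapolation of g(r) to the contact value g(1+)
      gc=gv(1)+(gv(2)-gv(1))*(1d0-rv(1))/(rv(2)-rv(1))
      z=1d0+2d0*pi*rho*gc/3d0
      write(*,*) 'g(sigma+)=',gc,'   p/(rho kT)=',z
      end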

• Clausius equation of state for the fluid. We start from the ideal–gas equation of state, which we know is exact in the low–density limit ρ = N/V → 0:

pV = NkT

Here p is the pressure, V the volume, N the number of particles, k Boltzmann's constant, and T the temperature. At higher densities the volume occupied by the spheres has to be taken into account, and the equation must be modified as

$$p(V-Nb)=NkT \;\rightarrow\; p=\frac{NkT}{V-Nb}$$

where b is the excluded volume per particle. This volume arises from the fact that each sphere has an associated volume within which the centres of mass of other spheres are not allowed. Every pair of spheres contributes to this volume with

$$b=\frac{1}{2}\times\left(\frac{4\pi}{3}\sigma^3\right)=\frac{2\pi}{3}\sigma^3=4v$$

Then:

$$\frac{pV}{NkT}=\frac{1}{1-Nb/V}=\frac{1}{1-4\eta}$$

which is Clausius' equation. This equation predicts the following virial expansion:

$$\frac{pV}{NkT}=\frac{1}{1-4\eta}=1+4\eta+16\eta^2+\dots$$

The coefficient of the term η² is incorrect; its correct value is actually 10. The predicted higher–order coefficients are also in error. In fact, the virial coefficients, B*n = Bn/v^(n−1), defined by the virial expansion

$$\frac{pV}{NkT}=1+\sum_{n=1}^{\infty}B_{n+1}\rho^n=1+\sum_{n=1}^{\infty}B^*_{n+1}\eta^n$$

can be obtained from Clausius' equation as B*n = 4^(n−1); scaling with the second virial coefficient, we get

$$\frac{B_n}{B_2^{\,n-1}}=1,\qquad \forall n>1$$

which is a very gross approximation. In particular, Clausius' equation predicts a divergence of the pressure at packing fraction η = 1/4 = 0.250, which is clearly wrong, since we know that the fluid is stable at least up to η = 0.494. Clausius' equation is approximate, and can be used only at low densities.

The free energy can be obtained from the pressure by integration; the final result is

$$\frac{F}{NkT}=\log\left(\frac{\rho\Lambda^3}{1-4\eta}\right)-1 \qquad (7)$$

Clausius' equation can be improved in various ways. Much effort has been devoted over the last decades to this goal. A typical approach incorporates knowledge of


higher-order virial coefficients to build up a more accurate equation of state. One of these approaches leads to the so–called Carnahan–Starling equation of state which, when properly integrated in density, gives the following free energy per particle:

$$\frac{F}{NkT}=\log\left(\rho\Lambda^3\right)-1+\frac{(4-3\eta)\eta}{(1-\eta)^2} \qquad (8)$$

A more general approach involves constructing Padé approximants. For example, if the first two virial coefficients, B2 and B3, were known, the coefficients a1 and a2 in the following Padé approximant,

$$\frac{pV}{NkT}=\frac{1+a_1\rho+a_2\rho^2}{1-\rho v}=1+B_2\rho+B_3\rho^2+\dots$$

can be easily obtained. The approximant is only adjusted at low densities, but its functional form is expected to be valid over a much larger density range; a small sketch is given below.
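As a small worked example (a sketch added here, using the exact hard–sphere values B2 = 4v and B3 = 10v²), matching the low–density expansion of the approximant gives a1 = B2 − v and a2 = B3 − a1v − v²; in terms of η = ρv everything becomes dimensionless:

      implicit real*8(a-h,o-z)
c     virial coefficients of hard spheres, in units of v and v^2
      b2=4d0
      b3=10d0
c     Pade coefficients from matching the low-density expansion
      a1=b2-1d0                     ! a1/v
      a2=b3-a1-1d0                  ! a2/v^2
      do i=1,9
         eta=0.05d0*i
         zpade=(1d0+a1*eta+a2*eta**2)/(1d0-eta)
         zclaus=1d0/(1d0-4d0*eta)   ! meaningless past eta=1/4
         write(*,'(3f10.4)') eta,zpade,zclaus
      end do
      end

By construction the approximant reproduces B2 and B3; all higher reduced coefficients come out as B*n = 1 + 3 + 6 = 10, to be compared with, e.g., the numerically known B*4 ≈ 18.4 (still far better than the value 4³ = 64 predicted by Clausius' equation).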

Expressions for the virial coefficients, based on statistical mechanics, can be written down. They involve the so–called Mayer function, f(r) = exp[−βφ(r)] − 1. The first two coefficients read:

$$B_2=-\frac{1}{2}\int d{\bf r}\,f(r),\qquad B_3=-\frac{1}{3}\int d{\bf r}\int d{\bf r}'\,f(r)\,f(r')\,f(|{\bf r}-{\bf r}'|) \qquad (9)$$

Expressions for the higher-order coefficients are considerably more complicated. For hard spheres, the coefficients B2, B3 and B4 are known exactly:

$$B_2=\frac{1}{2}\int d{\bf r}\,\Theta(\sigma-r)=2\pi\int_0^\sigma dr\,r^2=\frac{2\pi}{3}\sigma^3=4v$$

$$B_3=\frac{5\pi^2}{18}\sigma^6\simeq 2.74156\,\sigma^6$$

$$B_4=\left[-\frac{89}{280}+\frac{219\sqrt{2}}{2240\pi}+\frac{4131}{2240\pi}\arccos\left(\frac{1}{\sqrt{3}}\right)\right]B_2^3 \qquad (10)$$

The higher-order coefficients, Bn with n > 4, must be calculated numerically. Note, in passing, that Clausius' equation predicts B3 = 16v² = 16 × (πσ³/6)² = 4.3865 σ⁶, to be compared with the exact value 2.7416 σ⁶; not a fair comparison really!

As an example of how the higher-order virial coefficients can be evaluated numerically, let us evaluate the B3 coefficient (which, as mentioned, is known exactly) using a technique based on multidimensional Monte Carlo integration. We use Eqn. (9), which extends over all space, but the integrand contributes only when the three spheres overlap at the same time. In spherical coordinates:

$$B_3=-\frac{1}{3}\int_0^\sigma dr\,r^2\int_{-1}^{1}d(\cos\theta)\int_0^{2\pi}d\phi\int_0^\sigma dr'\,r'^2\int_{-1}^{1}d(\cos\theta')\int_0^{2\pi}d\phi'\;f(r)\,f(r')\,f(|{\bf r}-{\bf r}'|)$$


  M        B3(MC)     B3(MC)/B3(exact)
  10^2     2.51933    0.91894
  10^3     2.65935    0.97002
  10^4     2.77402    1.01184
  10^5     2.74633    1.00174
  10^6     2.74411    1.00093
  10^7     2.74242    1.00032
  10^8     2.74170    1.00005

Table 1: Third virial coefficient (in units of σ⁶) as obtained from the Monte Carlo integration method for different numbers of configurations generated, M, and its ratio to the exact value, given by Eqn. (10).

Note that the Jacobians in the θ coordinates have been absorbed into the differentials, so that it is cos θ that is to be sampled uniformly. Let us place a sphere at the origin, and choose a location for a second sphere randomly so that it overlaps with the first. This is easily done if we randomly and uniformly generate numbers (r, cos θ, φ) in the intervals [0, σ] × [−1, 1] × [0, 2π]. Then one of the Mayer functions, say f(r), is always equal to minus one. If we do the same for a third sphere, i.e. randomly generate a position so that it always overlaps with the first, we have f(r′) = −1. There remains to check whether the second and third spheres overlap, i.e. whether the value of the Mayer function f(|r − r′|) is 0 or −1. The approximation based on Monte Carlo integration is, in this case,

$$B_3=\frac{1}{3}\times\frac{1}{M}\left[(\sigma)\times(2)\times(2\pi)\right]^2\sum_{i=1}^{M}r_i^2\,r_i'^2\,\xi_i=\frac{(4\pi\sigma)^2}{3M}\sum_{i=1}^{M}r_i^2\,r_i'^2\,\xi_i$$

where i = 1, ..., M labels a Monte Carlo step (which includes sampling two spheres within the excluded volume of the central one) and ξi = 1 or 0 depending on whether the two spheres overlap. Table 1 contains results obtained with different numbers of MC steps. As can be seen, the results for B3 converge nicely to the exact value. This technique is obviously useful beyond the fourth virial coefficient, for which there are no exact results.
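A further observation (standard Monte Carlo integration theory, consistent with the entries of Table 1): the statistical error of the estimate decreases as ∼ M^(−1/2), so that every factor of 100 in M buys roughly one more significant digit.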

• Free–volume equation for the crystalline phase. For the other stable phase, the crystal, a simple approximation, called the free–volume approximation, can be made. Let us consider a crystalline lattice, with the spheres vibrating about their equilibrium positions, which coincide with the sites of the lattice. The free–volume approximation assumes the neighbours of a given sphere to be fixed at their equilibrium positions, so that our central sphere moves in the 'cage' left by the neighbours. The problem is then that of an ideal particle in an external potential; the latter is infinite whenever the particle overlaps with the neighbours, and zero otherwise. This gives rise to a free volume, vf, and the partition function of the particle will be z = vf. The partition function for the N spheres (which we assume to be distinguishable,


since a particular sphere has to be chosen to calculate vf) will be

$$Z=\frac{1}{h^{3N}}\prod_{i=1}^{N}\int_{v_f}d{\bf r}_i\int d{\bf p}_i=\frac{v_f^N}{\Lambda^{3N}}$$

where Λ is the thermal wavelength (this comes from the integration over momenta). The free energy will be:

$$F=-kT\log Z=-NkT\log\left(\frac{v_f}{\Lambda^3}\right),\qquad \frac{F}{NkT}=\log\left(\frac{\Lambda^3}{v_f}\right)$$

and the pressure:

$$p=-\left(\frac{\partial F}{\partial V}\right)$$

The dependence of F on V (or on the density ρ) comes through vf, which depends on how close or how far the neighbours of the central sphere are, which in turn depends on the density.

To calculate vf we can make approximations, or we can use the Monte Carlo integration method, in the same manner as we used it to compute the third virial coefficient B3. In Fig. 17 the situation is illustrated on a triangular lattice. Six neighbours of a central sphere (the latter is not shown) are depicted in light grey. The region where the centre of mass of the central particle can be located without ever overlapping with its neighbours is also depicted (the region is bounded by circular arcs, each centred on one of the six neighbours). On the two-dimensional triangular lattice the free volume vf can be obtained analytically. The three-dimensional case requires considerably more effort, so that Monte Carlo integration is well suited to the problem: we generate positions for the central particle within a volume W that contains the free volume vf we want to calculate; any point generated such that there is overlap with any of the neighbouring spheres is rejected (or given zero weight). The free volume is then estimated as:

$$v_f\simeq\frac{W}{M}\sum_{m=1}^{M}\xi_m,\qquad \xi_m=\begin{cases}1, & \text{no overlap}\\ 0, & \text{overlap}\end{cases}$$

We have to make sure that the volume W is sufficiently large as to contain the free volume. The quantity ξm is easily calculated by simply generating the positions of the n nearest neighbours of the lattice in question. For example, for the fcc lattice n = 12, and their positions are given by:

$$\left(\pm\frac{a}{2},\pm\frac{a}{2},0\right),\qquad\left(\pm\frac{a}{2},0,\pm\frac{a}{2}\right),\qquad\left(0,\pm\frac{a}{2},\pm\frac{a}{2}\right)$$

where a is the lattice constant (related to the density by ρ = 4/a³). This model, among other things, explains why the stable crystalline phase is the fcc lattice and not, for example, the bcc (body-centred cubic) lattice.


Figure 17: The six neighbours of a central sphere (the latter is not shown), depicted in light grey. The region where the centre of mass of the central particle can be located without ever overlapping with its neighbours is also depicted.

• Phase transition between fluid and crystal in hard spheres. Fig. 18 shows the free energies of the fluid and of two crystal structures for the hard-sphere system. The first is obtained from the Carnahan–Starling approximation, whereas the latter two were obtained using the free-volume approach. Note that the fcc phase is always more stable than the bcc phase, which is metastable (actually computer simulations have proved it to be unstable against shear modes, not considered in the simple free–volume approach). Also note that the fluid branch has been computed even for high densities, for which the fluid no longer exists (it could be considered a 'highly metastable' fluid). The fluid and fcc curves intersect at some density; this demonstrates that the fluid–to–solid transition is first order. In order to obtain the densities (or packing fractions) of the coexisting fluid and crystal phases, we have to apply a common–tangent construction, i.e. obtain ρf and ρc for fluid and crystal by solving the equations pf = pc and µf = µc simultaneously (µ is the chemical potential); these conditions are spelled out below. Results from Monte Carlo estimations of free energies give the following values for the coexistence densities: ρfσ³ = 0.943 and ρcσ³ = 1.041, which correspond to packing fractions ηf = 0.494 and ηc = 0.545.

Figure 18: Free energy density per unit thermal energy, F/V kT, for the fluid phase (using the Carnahan–Starling approximation) and for two crystalline phases (fcc and bcc). For the latter two the free–volume approximation was used.
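For reference, a minimal statement (spelled out here for convenience) of the coexistence conditions in terms of the free energy per particle f(ρ) = F/N (Fig. 18 plots F/V = ρf):

$$p=\rho^2\frac{\partial f}{\partial\rho},\qquad \mu=f+\frac{p}{\rho},\qquad p_f(\rho_f)=p_c(\rho_c),\quad \mu_f(\rho_f)=\mu_c(\rho_c)$$

Solving the last two equations for the pair (ρf, ρc), with f given by Eqn. (8) for the fluid and by the free–volume expression for the crystal, is a simple two–dimensional root–finding problem.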

FORTRAN codes and results

In this section we present complete FORTRAN codes implementing various calculations presented above, and discuss some of the results.

• Calculation of B3 coefficient

The code used is the following:

001. !
002. ! P7: Evaluation of B3 by Monte Carlo integration
003. ! hard-sphere diameter sigma=1
004. !


005. implicit real*8(a-h,o-z)
006. real*4 r,t,f
007. integer*8 M,i
008.
009. dseed=173211d0
010. pi=4.0d0*datan(1.0d0)
011. pi2=2*pi
012.
013. do mpot=2,8
014. M=10**mpot
015. sum=0
016. do i=1,M
017. call ggub (dseed, r)
018. call ggub (dseed, t)
019. call ggub (dseed, f)
020. cp=dcos(pi2*f)
021. sp=dsin(pi2*f)
022. ct=2*t-1
023. st=dsqrt(1-ct**2)
024. x=r*st*cp
025. y=r*st*sp
026. z=r*ct
027. r1a=x**2+y**2+z**2
028. call ggub (dseed, r)
029. call ggub (dseed, t)
030. call ggub (dseed, f)
031. cp=dcos(pi2*f)
032. sp=dsin(pi2*f)
033. ct=2*t-1
034. st=dsqrt(1-ct**2)
035. x1=r*st*cp
036. y1=r*st*sp
037. z1=r*ct
038. r1b=x1**2+y1**2+z1**2
039. r2=(x-x1)**2+(y-y1)**2+(z-z1)**2
040. if(r2.lt.1.0d0) sum=sum+r1a*r1b
041. end do
042. vol=4*pi
043. B3=sum*vol**2/(3*M)
044. B3ex=5*pi**2/18
045. write(*,'(''M='',i9,'' B3='',f8.5,'' B3/B3ex='',f8.5)')
046. > M,B3,B3/B3ex
047. end do
048. end

The integration is done with different numbers of Monte Carlo steps, in order to visualise the convergence; hence the loop over the variable mpot, which gives the number of steps in powers of 10. x, y, z are the random coordinates of the second sphere, which is chosen within the excluded volume of the first (placed at the origin). x1, y1, z1 are the random coordinates of the third sphere, also chosen so as to always overlap with the first. In lines 039 and 040 the distance between the second and third spheres is checked and, if less than one (σ is set to unity), the configuration is accepted. vol in line 042 is the volume sampled per sphere (so in total a volume vol**2 is sampled). In line 043 the third virial coefficient B3 is obtained, and it is then compared with the exact result B3ex.

• Calculation of free volume vf

This code is prepared for the fcc lattice.

001. !
002. ! P8: Free volume for the fcc lattice
003. !
004. implicit real*8(a-h,o-z)
005. dimension xx(12),yy(12),zz(12)
006. data xx/+0.0d0,+0.0d0,+0.0d0,+0.0d0,+0.5d0,-0.5d0,+0.5d0,
007. >-0.5d0,+0.5d0,-0.5d0,+0.5d0,-0.5d0/
008. data yy/+0.5d0,-0.5d0,+0.5d0,-0.5d0,+0.0d0,+0.0d0,+0.0d0,
009. >+0.0d0,+0.5d0,+0.5d0,-0.5d0,-0.5d0/
010. data zz/+0.5d0,+0.5d0,-0.5d0,-0.5d0,+0.5d0,+0.5d0,-0.5d0,
011. >-0.5d0,+0.0d0,+0.0d0,+0.0d0,+0.0d0/
012. real*4 r
013. pi=4d0*datan(1d0)
014. sq2=dsqrt(2d0)
015.
016. rho=1.3d0
017. a=(dsqrt(2d0)/rho)**(1d0/3d0)
018. eta=rho*pi/6
019. dseed=173211d0
020. M=2e7
021. sum=0
022.
023. do i=1,M
024. call ggub (dseed, r)
025. x=a/2*(2*r-1)
026. call ggub (dseed, r)
027. y=a/2*(2*r-1)
028. call ggub (dseed, r)
029. z=a/2*(2*r-1)
030.
031. do k=1,12
032. xk=a*xx(k)*sq2
033. yk=a*yy(k)*sq2
034. zk=a*zz(k)*sq2
035. r2=(xk-x)**2+(yk-y)**2+(zk-z)**2
036. if(r2.lt.1d0) go to 1
037. end do
038. sum=sum+1
039. 1 end do
040.
041. vf=sum/M*a**3
042. write(*,'(''eta='',f10.5,'' vf='',f10.5,'' ff='',f12.7)')
043. >eta,vf,-rho*dlog(vf)
044.
045. end

The coordinates of the twelve neighbours of the central sphere in the fcc lattice are defined in the vectors xx, yy and zz (lines 005–011), in units of the cubic lattice parameter. The density rho is set in line 016, and the nearest–neighbour distance a is obtained in line 017. eta is the packing fraction, and M (line 020) is the number of MC steps (i.e. the number of random points generated). A loop is opened in line 023, and coordinates for the random point x, y and z are chosen within a cube of side a centred on the central site. In lines 031–037 it is checked whether the generated position of the central sphere produces an overlap. If this is the case, the point is rejected (line 036). If not, an accumulator is advanced by one (line 038); this is the variable sum, which after the M steps will contain the number of points lying within the free volume. The free volume is calculated in line 041, and in lines 042–043 the packing fraction, the free volume and the associated free energy per unit volume and unit thermal energy are written on the screen (the thermal wavelength Λ is set to one for convenience).

• Monte Carlo code for thermal simulation of hard spheres

This FORTRAN code implements the MC simulation of thermal averages for the hard-sphere system using the importance–sampling technique. The code performs the MC dynamics and calculates the radial distribution function from which, in a (separate) final calculation, the pressure is evaluated. The results of the simulations obtained with this code can be compared with the approximate equations of state for the fluid and crystal phases obtained in the previous sections. Before presenting the code we will digress a little on the peculiarities of the Monte Carlo algorithm for a system of hard spheres.

In Section 2 we presented the Metropolis et al. algorithm, and extended it to more than one random variable. In the case of N hard spheres contained in a rectangular box, a move consists of randomly displacing a sphere and checking whether this displacement gives an overlap or not (Fig. 19). If there is no overlap, the move is accepted; otherwise it is rejected. This is so because the acceptance probability in this case is simply

$$e^{-\beta\Delta E}=\begin{cases}0, & \text{there is at least one overlap}\\ 1, & \text{there are no overlaps}\end{cases}$$

Figure 19: The chosen sphere (in grey) is given a random displacement to a test position (in grey line). The neighbouring spheres are represented as circles in black line. In this case the displacement results in no overlap, and the move will be accepted.

The algorithm is then as follows:

1. We prepare an initial (non–overlapping) configuration for the N spheres within the rectangular box, at the density we wish to investigate (a high–density disordered configuration is difficult to obtain; that is why we will normally start from a perfectly ordered configuration, even if the density is too low for the crystal phase to be stable).

2. We choose a sphere, at random or sequentially, and displace it by a random amount (within some distance ζ of the original position).

3. We check for overlaps with neighbouring spheres. If there is at least one overlap, the move is rejected; if there are no overlaps, the move is accepted.

4. We repeat the process until the N spheres have been visited (or N moves have been performed). Then we will have completed one MC step.

5. We iterate the procedure as many times as needed.

As in any implementation of the Metropolis et al. algorithm, an initial warm–up period is required before taking averages. Also, the ratio of accepted to total moves should be kept at about 0.5; this is achieved by adjusting the value of the maximum displacement ζ (clearly, if ζ is too large, moves will mostly result in overlaps and will be rejected, while if ζ is very small, moves will almost always be accepted).


Figure 20: Central simulation box surrounded by replicas (in the case of two–dimensional systems and square boxes there are 8 replicas; for three–dimensional systems there are 26 replicas). Particles in the replicas have the same positions and velocities as in the central box; when exiting one box they supersede their clone which, in turn, is exiting its own box (periodic boundary conditions). The distance between the i-th and j-th particles is calculated as the minimum distance between the i-th particle and all the clones of the j-th particle, named j′ in the figure (minimum image convention). In the example, the minimum distance corresponds to one of the clones of j (the one right above the central box), not to j itself.

There are some technicalities of the method that we discuss in the following.

a. Periodic boundary conditions. When simulating extended (bulk) systems it is convenient to use periodic boundary conditions (Fig. 20). These boundary conditions minimise the effects of surfaces on the results, and consist of placing replicas or images of the system next to it along all directions. Let Lx, Ly, Lz be the sides of the (rectangular) simulation box. In practice, the method redefines the coordinates of any particle that gets outside the box; for example, if at some instant of time the x coordinate of a particle is x > Lx, then we make x − Lx → x. If x < 0, we redefine x + Lx → x. This is done on every coordinate of every particle along the simulation.

b. Minimum image convention. For pair–wise interaction potentials, we have to calculate the distance between the two particles of a given pair. For rectangular simulation boxes the simulation box has 26 replicas. This means that, when calculating the distance between the i-th and j-th particles, one has to decide which of the 27 possible distances between the i-th particle and the j-th particle, including itself and its 26 replicas, has to be taken into account, since all or some of them can be within the interaction range. The minimum image convention chooses the one which gives the least distance |rj − ri| (Fig. 20). See the code for a possible algorithm that implements this.
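A minimal statement of the trick used in subroutine mic of the code below: each coordinate difference is folded as

$$x_{ij}\;\rightarrow\;x_{ij}-L_x\,{\rm nint}(x_{ij}/L_x),$$

which maps it into [−Lx/2, Lx/2] and therefore selects the nearest image, provided the interaction range does not exceed half the box length.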

c. Calculation of the radial distribution function. We focus on a particular sphere. Then g(r)d³r is the number of particles to be found in a differential volume element d³r at a distance r from that sphere, relative to the number that would obtain should interactions between particles be turned off (or, equivalently, relative to the corresponding number very far from the sphere, where interactions –and for that matter correlations– are zero or negligible). In practice, given the expected spherical symmetry of the problem, we consider a spherical differential volume element of width ∆r around a particular particle i (see Fig. 21). Put as a mathematical expression:

$$g(r)=\frac{1}{N}\left\langle\sum_{i=1}^{N}\frac{n_i(r,\Delta r)}{4\pi r^2\rho\,\Delta r}\right\rangle$$

The average ⟨...⟩ is over different configurations of the spheres. Note that an average over particles is also explicit in the expression (the sum over particles, i = 1, 2, ..., N, and the division by N); this can (and should) be done, as all particles are equivalent, and it helps improve the accuracy. ni(r, ∆r) is the number of particles within a spherical shell of width ∆r, centred on the i-th particle, at a distance r from this particle. This is easily computed for any given configuration of spheres. Finally, 4πr²∆r is the volume of such a spherical shell, which can also be approximated by

$$\frac{4\pi}{3}\left[\left(r+\frac{\Delta r}{2}\right)^3-\left(r-\frac{\Delta r}{2}\right)^3\right]$$

When multiplied by ρ, this gives the number of particles within the shell at very large distances (rigorously speaking, we have to multiply by (N − 1)/V, not by ρ = N/V, since the central particle cannot be counted). Then, for large r, the ratio should tend to unity. In the simulation a number of discrete distances ri is sampled, and g(r) is constructed as a normalised histogram.

Figure 21: The radial distribution function at distance r is calculated by counting the number of particles in a spherical shell of radius r (here represented by a ring) and thickness ∆r centred on the i-th particle, giving ni(r, ∆r). This is done for all N particles and then averaged (dividing by N). The resulting quantity is normalised to the number of particles expected in the shell if there were no interactions, i.e. 4πr²∆rρ.

We now present the Monte Carlo code for hard spheres. The code has a main body and a few subroutines. Subroutine ggub was explained previously, so it is not included here.

001. !
002. ! P9: MC simulation of hard spheres
003. !
004. implicit real*8(a-h,o-z)
005. parameter (num=4000, ngrx=1000)
006.
007. real*4 rr
008. dimension r1x(num), r1y(num), r1z(num)
009.
010. common gr(ngrx)
011.
012. data dseed /1467383.d0/
013. data hr /0.05d0/
014. data pi /3.141592654d0/
015.
016. open (1, file = 'hs.par')
017. read (1,*) ini, lmn
018. read (1,*) nstepeq, nstep0, nblock0
019. read (1,*) dens, amp
020. close (1)
021.
022. if (lmn .gt. num) stop 'lmn .gt. num'
023.
024. open (2, file = 'hs.inp')
025. open (3, file = 'hs.out')
026. open (4, file = 'hs.gr')
027.
028. if (ini .eq. 0) then
029. call fcc (lmn, dens, r1x, r1y, r1z, xlx, yly, zlz)
030. else
031. do i = 1, lmn
032. read(2,*) r1x(i), r1y(i), r1z(i)
033. end do
034. read(2,*) xlx,yly,zlz
035. end if
036.
037. xll = xlx/2
038. yll = yly/2
039. zll = zlz/2
040.
041. volume = xlx*yly*zlz
042. ro = lmn/volume
043.
044. nox = xll/hr
045. if (nox .gt. ngrx) stop 'nox .gt. ngrx'
046.
047. do istage = 1, 2
048.
049. if (istage .eq. 1) then
050. nblock = nstepeq
051. nstep = nblock
052. else
053. nblock = nblock0
054. nstep = nstep0
055. end if
056.
057. do j = 1, lmn
058. call pbc (r1x(j),r1y(j),r1z(j),xlx,yly,zlz)
059. end do
060.
061. do j = 1, ngrx
062. gr(j) = 0
063. end do
064.
065. naccept = 0
066. numgr = 0
067.
068. do ii = 1, nstep/nblock
069.
070. numgr = numgr + 1
071. call gdr (lmn,r1x,r1y,r1z,xlx,yly,zlz)
072.
073. nacceptb=0
074.
075. do jj=1,nblock
076.
077. do j = 1, lmn
078. call pbc
079. > (r1x(j),r1y(j),r1z(j),xlx,yly,zlz)
080. end do
081.
082. do k = 1, lmn
083. call ggub(dseed, rr)
084. desx = (rr-0.5)*amp
085. call ggub(dseed, rr)
086. desy = (rr-0.5)*amp
087. call ggub(dseed, rr)
088. desz = (rr-0.5)*amp
089. r1xk = r1x(k)
090. r1yk = r1y(k)
091. r1zk = r1z(k)
092. r3x = r1xk + desx
093. r3y = r1yk + desy
094. r3z = r1zk + desz
095.
096. do j = 1, lmn
097. if (j .ne. k) then
098. xi = r3x
099. yi = r3y
100. zi = r3z
101. xx = xi - r1x(j)
102. yy = yi - r1y(j)
103. zz = zi - r1z(j)
104. call mic (xx,yy,zz,xlx,yly,zlz)
105. r = xx*xx + yy*yy + zz*zz
106. if (r .lt. 1d0) go to 1
107. end if
108. end do
109.
110. naccept = naccept + 1
111. nacceptb = nacceptb + 1
112. r1x(k) = r3x
113. r1y(k) = r3y
114. r1z(k) = r3z
115.
116. 1 end do
117. end do
118.
119. acceptratio = real(nacceptb)/(nblock*lmn)*100
120. if (acceptratio .gt. 50.0) amp = amp * 1.05
121. if (acceptratio .lt. 50.0) amp = amp * 0.95
122. end do
123.
124. do i=1,lmn
125. write(3,*) r1x(i),r1y(i),r1z(i)
126. end do
127. write(3,*) xlx,yly,zlz
128.
129. write(8,'(''% Mov acept ='',f10.6)') naccept*100./(nstep*lmn)
130.
131. if (istage .eq. 2) then
132.
133. do i = 1, ngrx
134. r = (i-1+0.5d0) * hr
135. dvol = 4*pi/3d0 * ((r+hr/2)**3-(r-hr/2)**3)
136. grr = gr(i)/(dvol*lmn*numgr*(lmn-1)/volume)
137. write(4,*) r,grr
138. end do
139.
140. end if
141.
142. end do
143.
144. end
145. !
146. ! Periodic boundary conditions
147. !
148. subroutine pbc (r1x,r1y,r1z,xlx,yly,zlz)
149.
150. implicit real*8 (a-h,o-z)
151.
152. r1x=r1x-xlx*dnint(r1x/xlx)
153. r1y=r1y-yly*dnint(r1y/yly)
154. r1z=r1z-zlz*dnint(r1z/zlz)
155.
156. end
157. !
158. ! Minimum image convention
159. !
160. subroutine mic (xx,yy,zz,xlx,yly,zlz)
161.
162. implicit real*8(a-h,o-z)
163.
164. xx=xx-xlx*dnint(xx/xlx)
165. yy=yy-yly*dnint(yy/yly)
166. zz=zz-zlz*dnint(zz/zlz)
167. end
168. !
169. ! Generation of fcc lattice
170. !
171. subroutine fcc (lmn, dens, r1x, r1y, r1z, xlx, yly, zlz)
172.
173. implicit real*8 (a-h, o-z)
174. parameter (num = 4000)
175.
176. dimension r1x(num), r1y(num), r1z(num)
177. dimension sx(4), sy(4), sz(4)
178.
179. data sx /0d0, 0.5d0, 0.5d0, 0d0/
180. data sy /0d0, 0.5d0, 0d0, 0.5d0/
181. data sz /0d0, 0d0, 0.5d0, 0.5d0/
182.
183. data sh /0.01d0/
184.
185. a = (4d0/dens)**(1d0/3d0)
186. n = (lmn/4)**(1d0/3d0) + 0.001
187.
188. xlx = n*a
189. yly = n*a
190. zlz = n*a
191.
192. m = 1
193. do i = 1, n
194. do j = 1, n
195. do k = 1, n
196. do l = 1, 4
197. r1x(m) = (i - 1 + sx(l) + sh) * a - xlx/2
198. r1y(m) = (j - 1 + sy(l) + sh) * a - yly/2
199. r1z(m) = (k - 1 + sz(l) + sh) * a - zlz/2
200. m=m+1
201. end do
202. end do
203. end do
204. end do
205.
206. end
207. !
208. ! Histogram for g(r)
209. !
210. subroutine gdr(lmn,r1x,r1y,r1z,xlx,yly,zlz)
211.
212. implicit real*8(A-H,O-Z)
213. parameter (num=4000, ngrx=1000)
214.
215. dimension r1x(num), r1y(num), r1z(num)
216. common gr(ngrx)
217. data hr /0.05d0/
218.
219. do i = 1, lmn-1
220. do j = i + 1,lmn
221. xx = r1x(i) - r1x(j)
222. yy = r1y(i) - r1y(j)
223. zz = r1z(i) - r1z(j)
224. call mic (xx,yy,zz,xlx,yly,zlz)
225. r = xx*xx + yy*yy + zz*zz
226. rr = dsqrt(r)
227. if (rr .lt. xlx/2) then
228. k = rr/hr + 1
229. gr(k) = gr(k) + 2
230. end if
231. end do
232. end do
233.
234. end

The necessary input parameters are read from file hs.par. Here ini is a variable such that an fcc lattice is built when ini=0; otherwise the initial configuration is read from file hs.inp. The number of particles is lmn (it must be of the form 4n³, with n an integer, to conform to fcc symmetry). The number of steps in the equilibration stage is nstepeq, and the number of averaging steps is nstep0. These steps are divided into blocks of size nblock0 (averages are accumulated every nblock0 steps). In lines 028–035 the initial configuration is set up. The two stages, equilibration and averaging, are controlled by the do–loop in line 047. Periodic boundary conditions are applied at the beginning (lines 057–059), and accumulators are set to zero (lines 061–066). Blocks are opened in line 068, and sub–blocks in line 075. Trial moves are applied sequentially (line 082), by randomly choosing a displacement vector of amplitude amp along all three directions, in lines 083–094 (this variable is set in file hs.par). Lines 096–108 perform the overlap test. If positive, a new particle move is tested (lines 106 and 116); otherwise the coordinates of the tested particle are updated (lines 112–114). In lines 119–121 the amplitude amp of the maximum displacement is adjusted according to whether the acceptance ratio in the last block of steps was above or below 50%. In line 122 the loop over steps is closed, and the particle positions are dumped to file hs.out. In the averaging stage, the histogram for the radial distribution function is normalised and written to file (lines 133–138). After this the necessary subroutines are defined: pbc (periodic boundary conditions), mic (minimum image convention), fcc (generation of the fcc lattice), and gdr (calculation of the histogram for g(r)).

The main result from this code is the calculation of the radial distribution function. Fig. 22 collects six radial distribution functions obtained with this code for six different densities. As the density is increased the oscillatory structure of the function becomes more apparent, and between ρσ³ = 0.9 and 1.0 the function develops additional peaks located below r = 2σ, which is a clear signature of angular order and hence of crystalline order.


Figure 22: Radial distribution functions g(r) of the hard–sphere system for six different reduced densities ρσ³ (indicated in each panel).


3. THE MOLECULAR DYNAMICS METHOD

3.a Introduction

In this method the equations of motion of a system of N interacting particles are solved numerically. Depending on the imposed constraints, 3N or fewer equations will have to be solved, one for each degree of freedom. Suppose there are 3N such degrees of freedom, taken here to be the positions ri, i = 1, ..., N. The central quantity is the potential energy,

$$U=U(\{{\bf r}_i\})=U({\bf r}_1,{\bf r}_2,...,{\bf r}_N).$$

The equations of motion are:

$$\frac{d{\bf r}_i}{dt}={\bf v}_i,\qquad m\frac{d{\bf v}_i}{dt}=-\nabla_i U(\{{\bf r}_j\})$$

where vi is the velocity of the i-th particle and m is the mass of the particles. Since the dynamics is energy–conserving, the evolution of the system proceeds as if the state point moved on the hypersurface defined by H({ri, vi}) = const. in the 6N–dimensional phase space.

3.b Methods of integration

The numerical solution of the equations of motion is performed with an algorithm based on finite differences. Time is discretised using a time interval h. Knowledge of the positions and velocities of all particles at time t allows calculation of the forces Fi at time t on all particles, and the problem is how to obtain the positions and velocities at the later time t + h. The simplest algorithm is the Verlet algorithm, which we now derive.

• Verlet algorithm. We write the following Taylor expansions:

$${\bf r}_i(t+h)={\bf r}_i(t)+h{\bf v}_i(t)+\frac{h^2}{2m}{\bf F}_i(t)+\frac{h^3}{6}\dddot{\bf r}_i(t)+\dots$$

$${\bf r}_i(t-h)={\bf r}_i(t)-h{\bf v}_i(t)+\frac{h^2}{2m}{\bf F}_i(t)-\frac{h^3}{6}\dddot{\bf r}_i(t)+\dots$$

Adding, neglecting terms of order O(h⁴), and reshuffling,

$${\bf r}_i(t+h)=2{\bf r}_i(t)-{\bf r}_i(t-h)+\frac{h^2}{m}{\bf F}_i(t)$$

This recurrence formula allows one to calculate the new position if the positions ri(t) and ri(t − h) are known. It is an O(h³) algorithm [i.e. the new positions contain errors of O(h⁴)].

This version of the algorithm is the Newtonian version. The velocities are not required to calculate the new positions, but of course they are needed if we want to calculate the kinetic energy, related to an important quantity such as the temperature. Velocities can be approximated using the expansion

$${\bf r}_i(t+h)={\bf r}_i(t-h)+2h{\bf v}_i(t)+O(h^3)$$

and from here

$${\bf v}_i(t)=\frac{{\bf r}_i(t+h)-{\bf r}_i(t-h)}{2h}$$

which contains errors of O(h²). The kinetic energy at time t can be estimated from

$$E_c(t)=\sum_{i=1}^{N}\frac{1}{2}m|{\bf v}_i(t)|^2$$

and the temperature, using the equipartition theorem, is

$$\left\langle\frac{1}{2}mv_\alpha^2\right\rangle=\frac{kT}{2}\;\rightarrow\;T=\frac{1}{kN_f}\left\langle\sum_{i=1}^{N}m|{\bf v}_i(t)|^2\right\rangle$$

where α denotes an arbitrary degree of freedom, and Nf is the number of degrees of freedom of the system (Nf = 3N if no constraints are imposed; in simulations it is quite common to fix the position of the centre of mass, in which case Nf = 3N − 3).

• Leap-frog version. This version is numerically more stable than Verlet's. One first defines:

$${\bf v}_i\left(t-\frac{h}{2}\right)=\frac{{\bf r}_i(t)-{\bf r}_i(t-h)}{h},\qquad {\bf v}_i\left(t+\frac{h}{2}\right)=\frac{{\bf r}_i(t+h)-{\bf r}_i(t)}{h}.$$

Positions are then updated with

$${\bf r}_i(t+h)={\bf r}_i(t)+h{\bf v}_i\left(t+\frac{h}{2}\right)$$

Using the Verlet algorithm,

$${\bf r}_i(t+h)-{\bf r}_i(t)={\bf r}_i(t)-{\bf r}_i(t-h)+\frac{h^2}{m}{\bf F}_i(t)$$

so that

$${\bf v}_i\left(t+\frac{h}{2}\right)={\bf v}_i\left(t-\frac{h}{2}\right)+\frac{h}{m}{\bf F}_i(t).$$

From here the Hamiltonian version of the Verlet algorithm follows:

$${\bf r}_i(t+h)={\bf r}_i(t)+h{\bf v}_i\left(t+\frac{h}{2}\right),\qquad {\bf v}_i\left(t+\frac{h}{2}\right)={\bf v}_i\left(t-\frac{h}{2}\right)+\frac{h}{m}{\bf F}_i(t).$$


Note that the positions and velocities are out of phase by a time interval h/2. Velocities at time t can be calculated from the expression

$${\bf v}_i(t)=\frac{{\bf v}_i\left(t+\frac{h}{2}\right)+{\bf v}_i\left(t-\frac{h}{2}\right)}{2}. \qquad (11)$$

• Another version. The problem with the latter version is that ri(t) and vi(t) are not obtained at the same order in h at the same time t. A possible modification that avoids this is

$${\bf r}_i(t+h)={\bf r}_i(t)+h{\bf v}_i\left(t+\frac{h}{2}\right),\qquad {\bf v}_i(t+h)={\bf v}_i(t)+\frac{h}{2m}\left[{\bf F}_i(t+h)+{\bf F}_i(t)\right]. \qquad (12)$$

This numerical scheme is completely equivalent to Verlet's. To check this, note that

$${\bf r}_i(t+2h)={\bf r}_i(t+h)+h{\bf v}_i(t+h)+\frac{h^2}{2m}{\bf F}_i(t+h),$$

$${\bf r}_i(t)={\bf r}_i(t+h)-h{\bf v}_i(t)-\frac{h^2}{2m}{\bf F}_i(t).$$

Adding,

$${\bf r}_i(t+2h)+{\bf r}_i(t)=2{\bf r}_i(t+h)+h\left[{\bf v}_i(t+h)-{\bf v}_i(t)\right]+\frac{h^2}{2m}\left[{\bf F}_i(t+h)-{\bf F}_i(t)\right].$$

Using the second of Eqns. (12), we obtain

$${\bf r}_i(t+h)=2{\bf r}_i(t)-{\bf r}_i(t-h)+\frac{h^2}{m}{\bf F}_i(t),$$

which is precisely Verlet’s algorithm.

3.c Stability of trajectories

Systems with many degrees of freedom are prone to being unstable in the following sense. Take two possible trajectories in phase space that start from very close initial conditions, and let us calculate the distance δ(t) between these two trajectories as if phase space were Euclidean. Expressing the distance in terms of an exponential function (Fig. 23), δ(t) ∼ e^(λt), we say that the system is

• STABLE if λ < 0, or if δ grows more slowly than exponentially

• UNSTABLE if λ > 0; the system is then said to be chaotic

The coefficient λ is known as the Lyapunov exponent. The criterion for the separation between trajectories based on an exponential comes naturally from a linear analysis of the effect a perturbation of amplitude δ would have on a given trajectory; the linear equation


Figure 23: Exponential increase of the separation between two trajectories that started very close in phase space.

would be δ̇ ∼ δ, which has an exponential as its solution.

The Lyapunov instability occurs when λ > 0; there are two reasons why it is important:

• it gives a limiting upper time beyond which an accurate and trustworthy numerical solution cannot be found

• in order to reach high precision when computing a trajectory after a time t, we would need to know the initial condition with unreasonable accuracy, since the number of required digits grows linearly with time: if ε is the number of digits,

$$e^{-\lambda t}\sim 10^{-\varepsilon}\;\rightarrow\;\varepsilon\sim\frac{\lambda t}{\log 10}$$

It turns out that systems with many degrees of freedom, such as those encountered in condensed–matter physics, are intrinsically unstable, and hence intrinsically chaotic. What are the practical consequences of all this? Obviously the use of high–precision numerical algorithms to integrate the equations of motion with high accuracy is not only costly from the computational point of view but also useless in practical terms; a toy illustration is given below.
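As a toy illustration (a one–dimensional chaotic map, not a many–body system; the map and the initial shift are arbitrary choices), the exponential separation and the extraction of λ can be seen in a few lines:

      implicit real*8(a-h,o-z)
c     two nearby trajectories of the chaotic logistic map x -> 4x(1-x);
c     the slope of log(delta) vs n gives the Lyapunov exponent,
c     which is log 2 for this map
      x=0.3d0
      y=0.3d0+1d-12              ! second trajectory, shifted initially
      do n=1,40
        x=4d0*x*(1d0-x)
        y=4d0*y*(1d0-y)
        write(*,*) n,dabs(y-x)   ! grows roughly as exp(n log 2)
      end do
      end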

The basic properties that a numerical integration scheme must meet in order to be useful in condensed–matter physics are:

1. It should be time–reversible

Reversibility is a basic property of the microscopic equations of motion:

$$\frac{d{\bf r}_i}{dt}={\bf v}_i,\qquad m\frac{d{\bf v}_i}{dt}=-\nabla_i U$$


If we make the transformation

$$t\rightarrow -t,\qquad {\bf v}_i\rightarrow -{\bf v}_i$$

(i.e. the arrow of time is reversed and, at the same time, the sign of the velocities is reversed), the equations remain unchanged. In Newtonian language,

$$m\frac{d^2{\bf r}_i}{dt^2}=-\nabla_i U,$$

and reversibility with respect to the transformation t → −t is also obvious. It is easy to see that the Verlet algorithm respects invariance with respect to the operation h → −h:

$${\bf r}_i(t+h)=2{\bf r}_i(t)-{\bf r}_i(t-h)+\frac{h^2}{m}{\bf F}_i(t)\;\longrightarrow\;{\bf r}_i(t-h)=2{\bf r}_i(t)-{\bf r}_i(t+h)+\frac{h^2}{m}{\bf F}_i(t)$$

If terms are rearranged in the second equation we obtain the first. The built-in irreversibility of some integration algorithms induces an 'intrinsic energy dissipation' and a non–energy–conserving dynamics. This problem may give rise to unwanted and sometimes even spurious results in the simulations. There remains the irreversibility introduced by the round–off errors in the computer due to the use of floating–point arithmetic, which necessarily involves a finite number of digits; these effects are usually negligible. A small demonstration of the reversibility of the Verlet recurrence is sketched below.
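A minimal demonstration of this reversibility (the same toy oscillator as before; an illustrative sketch, not a course code): integrate forward, exchange the two stored positions (which is equivalent to h → −h) and integrate back; the initial position is recovered up to round–off:

      implicit real*8(a-h,o-z)
      h=0.01d0
      xold=1d0
      x=dcos(h)
      do i=1,1000
        xnew=2d0*x-xold-h*h*x    ! Verlet with F = -x
        xold=x
        x=xnew
      end do
c     reverse the arrow of time by swapping the two stored positions
      t=xold
      xold=x
      x=t
      do i=1,1000
        xnew=2d0*x-xold-h*h*x
        xold=x
        x=xnew
      end do
      write(*,*) 'x after forward+backward:',x,' (started from 1)'
      end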

2. It should be symplectic

This means the following. Let f(q, p, t) be the probability distribution of the system, so that

$$f(q,p,t)\,dq\,dp$$

is the number of points (configurations) in phase space contained in a differential volume dq dp centred at (q, p) at time t. Each point propagates in time according to the dynamical equations (e.g. Hamilton's equations). Liouville's theorem says that the probability distribution function f(q, p, t) evolves in time like an incompressible fluid, which in mathematical terms means that ḟ = 0, where the dot implies a total time derivative. Fig. 24 represents this flow pictorially. This property can be shown to be contained in the dynamical equations, and of course it has to be respected by the numerical integration scheme used.

A symplectic numerical scheme respects Liouville's theorem, i.e. it conserves phase-space volume. If the numerical algorithm is written in matrix form, this means that the Jacobian of the transformation that takes the system from time t to time t + h has to be equal to unity:

$$\begin{pmatrix}{\bf r}(t+h)\\ {\bf v}(t+h)\end{pmatrix}=M\begin{pmatrix}{\bf r}(t)\\ {\bf v}(t)\end{pmatrix},\qquad {\rm Jac}(M)=1$$


Figure 24: Liouville's theorem: the volume of a region of phase space is constant in time, so the flow behaves as an incompressible fluid.

where M is the dynamical matrix associated with the particular numerical algorithm. For instance, the Verlet algorithm, written in Hamiltonian form,

$${\bf r}_i(t+h)={\bf r}_i(t)+h{\bf v}_i\left(t+\frac{h}{2}\right),\qquad {\bf v}_i\left(t+\frac{h}{2}\right)={\bf v}_i\left(t-\frac{h}{2}\right)+\frac{h}{m}{\bf F}_i(t)$$

can be written, with the help of the vector

$$(x_1,y_1,z_1,x_2,y_2,z_2,\dots,x_N,y_N,z_N,v_1^x,v_1^y,v_1^z,v_2^x,v_2^y,v_2^z,\dots,v_N^x,v_N^y,v_N^z),$$

in terms of the matrix

$$M=\begin{pmatrix}1 & h\,\mathbb{1}\\ 0 & 1\end{pmatrix},$$

the determinant of which is obviously equal to unity.

3.d. The Lennard–Jones potential: practical implementation

The Lennard–Jones potential, φLJ(r), is a pair potential that depends on the distance r between two particles. Its functional form is

$$\phi_{\rm LJ}(r)=4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]$$

and it contains two parameters: ε, an energy parameter equal to the depth of the potential well (see Fig. 25), and a length parameter, σ, which is the root of the potential and can be taken roughly as the diameter of the particles. The potential has a repulsive part at short distances, which accounts for the repulsion felt by two spherically-symmetric atoms at close distance due to the overlap of electronic clouds and to the Pauli exclusion principle. At long distances the potential is attractive, due to the van der Waals interaction. At intermediate distances there is a potential well.

Figure 25: The Lennard–Jones pair potential.

We will illustrate the Molecular Dynamics technique using this model potential. The interest in the Lennard–Jones potential is due to the fact that a system of particles interacting via this pair potential presents a 'realistic' phase diagram, containing liquid–vapour and fluid–solid transitions. The first is due to the presence of attractions in the potential. Note that the hard–sphere model does not contain this feature.

The central problem of Molecular Dynamics is the calculation of the forces. In our case the forces can be calculated analytically by differentiation of the potential. Let N be the number of particles in the system. The potential energy of the system is:

$$U({\bf r}_1,{\bf r}_2,...,{\bf r}_N)=\frac{1}{2}\sum_{i=1}^{N}\sum_{\substack{j=1\\ j\neq i}}^{N}\phi_{\rm LJ}(|{\bf r}_j-{\bf r}_i|)=\sum_{i=1}^{N}\sum_{j<i}\phi_{\rm LJ}(|{\bf r}_j-{\bf r}_i|)$$

The first version includes a 1/2 prefactor in order not to count the same pair of particles twice. The force on particle i is:

$${\bf F}_i=-\nabla_{{\bf r}_i}U=-\sum_{j\neq i}\nabla_{{\bf r}_i}\phi_{\rm LJ}(|{\bf r}_j-{\bf r}_i|)\equiv\sum_{j\neq i}{\bf F}_{ij}$$

Note that Fij = −∇ri φLJ(|rj − ri|) is the contribution of particle j to the force on particle i, and that, by Newton's third law, Fij = −Fji; this property helps save valuable computer time. Now:

$$\nabla_{{\bf r}_i}\phi_{\rm LJ}(|{\bf r}_j-{\bf r}_i|)=\phi'_{\rm LJ}(|{\bf r}_j-{\bf r}_i|)\,\nabla_{{\bf r}_i}|{\bf r}_j-{\bf r}_i|=-\phi'_{\rm LJ}(|{\bf r}_j-{\bf r}_i|)\,\frac{{\bf r}_j-{\bf r}_i}{|{\bf r}_j-{\bf r}_i|}$$


Figure 26: Time evolution of (from top to bottom) the kinetic, potential and total energy (all shown with the same vertical scale), and the total energy again (on a much finer scale), from an MD simulation of a Lennard–Jones liquid at reduced density ρσ³ = 0.8 and reduced (initial) temperature kT/ε = 2.

Since

$$\phi'_{\rm LJ}(r)=-\frac{48\epsilon}{\sigma}\left[\left(\frac{\sigma}{r}\right)^{13}-\frac{1}{2}\left(\frac{\sigma}{r}\right)^{7}\right]$$

we finally have

$${\bf F}_i=\frac{48\epsilon}{\sigma^2}\sum_{j\neq i}\left[\left(\frac{\sigma}{r_{ij}}\right)^{14}-\frac{1}{2}\left(\frac{\sigma}{r_{ij}}\right)^{8}\right]({\bf r}_i-{\bf r}_j),\qquad r_{ij}=|{\bf r}_j-{\bf r}_i|.$$

Once the forces are known, the dynamics can be calculated by means of the numerical integration algorithm.

Quantities can be calculated as time averages (instead of as configuration averages, as in the MC method). For example, the pressure can be calculated from the virial theorem which, for pair–wise additive forces, reads:

$$p=\rho kT+\frac{1}{3V}\left\langle\sum_i\sum_{j>i}{\bf r}_{ij}\cdot{\bf F}_{ij}\right\rangle_t,$$

where the brackets indicate a time average, and rij ≡ ri − rj. The dot product per pair is:

$${\bf r}_{ij}\cdot{\bf F}_{ij}=-\phi'_{\rm LJ}(r_{ij})\,r_{ij}=48\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-\frac{1}{2}\left(\frac{\sigma}{r_{ij}}\right)^{6}\right]$$


The temperature is computed, as discussed before, with

$$T=\frac{1}{kN_f}\left\langle\sum_{i=1}^{N}m|{\bf v}_i(t)|^2\right\rangle_t,$$

and the potential energy is

$$U=\left\langle\sum_{i=1}^{N}\sum_{j<i}\phi_{\rm LJ}(|{\bf r}_j-{\bf r}_i|)\right\rangle_t.$$

The total energy,

$$E=\left\langle\sum_{i=1}^{N}\frac{1}{2}m|{\bf v}_i(t)|^2\right\rangle_t+\left\langle\sum_{i=1}^{N}\sum_{j<i}\phi_{\rm LJ}(|{\bf r}_j(t)-{\bf r}_i(t)|)\right\rangle_t,$$

is useful to check that the MD code is free from errors, since it has to remain constant during the simulation. If the leap–frog algorithm is used, one has to be careful to use the velocities at time t, i.e. at the same time as the positions, using Eqn. (11).

Let us now discuss the question of units. Natural units of energy and length are ε and σ, respectively, and from these an LJ time scale τ = (mσ²/ε)^(1/2) follows. Therefore dimensionless positions, velocities and forces are:

$${\bf r}_i^*=\frac{{\bf r}_i}{\sigma},\qquad {\bf v}_i^*=\frac{{\bf v}_i}{\sigma/\tau}={\bf v}_i\left(\frac{m}{\epsilon}\right)^{1/2},\qquad {\bf F}_i^*=\frac{{\bf F}_i}{\epsilon/\sigma}=\frac{{\bf F}_i\sigma}{\epsilon}.$$

The equation that updates positions in the leap–frog algorithm becomes:

$${\bf r}_i^*(t+h)\,\sigma={\bf r}_i^*(t)\,\sigma+h\left(\frac{\epsilon}{m}\right)^{1/2}{\bf v}_i^*\left(t+\frac{h}{2}\right)\;\rightarrow\;{\bf r}_i^*(t+h)={\bf r}_i^*(t)+h^*{\bf v}_i^*\left(t+\frac{h}{2}\right),$$

where h* = h/τ. The equation for velocities is:

$${\bf v}_i^*\left(t+\frac{h}{2}\right)\left(\frac{\epsilon}{m}\right)^{1/2}={\bf v}_i^*\left(t-\frac{h}{2}\right)\left(\frac{\epsilon}{m}\right)^{1/2}+\frac{h^*}{m}\left(\frac{m\sigma^2}{\epsilon}\right)^{1/2}\left(\frac{\epsilon}{\sigma}\right){\bf F}_i^*(t)$$

so that

$${\bf v}_i^*\left(t+\frac{h}{2}\right)={\bf v}_i^*\left(t-\frac{h}{2}\right)+h^*{\bf F}_i^*(t).$$

The dimensionless temperature is T* = kT/ε:

$$T^*=\frac{k}{\epsilon}\times\frac{1}{kN_f}\left\langle\sum_{i=1}^{N}m\left|\left(\frac{\epsilon}{m}\right)^{1/2}{\bf v}_i^*(t)\right|^2\right\rangle_t=\frac{1}{N_f}\left\langle\sum_{i=1}^{N}|{\bf v}_i^*(t)|^2\right\rangle_t.$$

Note that, since all particles are equivalent,

$$T^*=\frac{1}{3N}\times N\left\langle|{\bf v}^*(t)|^2\right\rangle_t\;\rightarrow\;\left\langle|{\bf v}^*(t)|^2\right\rangle=3T^* \qquad (13)$$


Figure 27: Time evolution of the temperature for the same experiment as in Fig. 26.

for a generic particle (here we took Nf = 3N). Dimensionless kinetic and potential energies are E*c = Ec/ε,

$$E_c^*=\frac{1}{\epsilon}\times\frac{1}{2}m\left\langle\sum_{i=1}^{N}\left|\left(\frac{\epsilon}{m}\right)^{1/2}{\bf v}_i^*(t)\right|^2\right\rangle_t=\frac{1}{2}\left\langle\sum_{i=1}^{N}|{\bf v}_i^*(t)|^2\right\rangle_t,$$

and U* = U/ε. The dimensionless density is ρ* = ρσ³, and the dimensionless pressure p* = pσ³/ε is calculated from the virial theorem:

$$\frac{p^*\epsilon}{\sigma^3}=\left(\frac{\rho^*}{\sigma^3}\right)\times k\times\left(\frac{T^*\epsilon}{k}\right)+\frac{1}{3V^*\sigma^3}\left\langle\sum_i\sum_{j>i}\left(\sigma{\bf r}_{ij}^*\right)\cdot\left(\frac{\epsilon}{\sigma}{\bf F}_{ij}^*\right)\right\rangle_t,$$

so that

$$p^*=\rho^*T^*+\frac{1}{3V^*}\left\langle\sum_i\sum_{j>i}{\bf r}_{ij}^*\cdot{\bf F}_{ij}^*\right\rangle_t.$$

FORTRAN code

Only the main body of the code is given, since all the necessary subroutines (implementation of periodic boundary conditions, minimum image convention, calculation of the histogram for g(r), and generation of the initial configuration) are the same as those used in the Monte Carlo code for hard spheres.

001. !
002. ! P10: MD for Lennard-Jones particles
003. !
004. implicit real*8(a-h,o-z)
005. parameter (n=256, ngrx=1000)
006. dimension x(n),y(n),z(n)
007. dimension vx(n),vy(n),vz(n)
008. dimension fx(n),fy(n),fz(n)
009. dimension ig(1000)
010. real*4 r
011. common gr(ngrx)
012.
013. rho=0.8
014. T=2
015. npasos=2000
016.
017. dum = 17367d0
018. pi = 4d0 * datan(1d0)
019. call fcc (n, rho, x, y, z, aL, aL, aL)
020. do i=1,n
021. vr = dsqrt(3*T)
022. call ggub(dum,r)
023. cost = 2*r-1
024. sint = dsqrt(1-cost**2)
025. call ggub(dum,r)
026. fi = r*2*pi
027. vx(i) = vr*sint*dcos(fi)
028. vy(i) = vr*sint*dsin(fi)
029. vz(i) = vr*cost
030. end do
031.
032. aL2 = aL/2d0
033. cut2 = (2.5d0)**2
034. cutr2 = (aL/2)**2
035. ngr = 100
036. hgr = 0.05d0 ! bin width; must match hr in subroutine gdr
037. dt = 0.01d0
038.
039. ec = 0
040. u = 0
041. ap = 0
042.
043. do i = 1,ngrx
044. gr(i) = 0 ! zero the g(r) accumulator (in common)
045. end do
046.
047. do k = 1,npasos
048.
049. do i = 1, n
050. call pbc (x(i),y(i),z(i),aL,aL,aL)
051. end do
052. do i = 1, n
053. fx(i) = 0
054. fy(i) = 0
055. fz(i) = 0
056. end do
057. epot=0
058. do i = 1, n-1
059. xi = x(i)
060. yi = y(i)
061. zi = z(i)
062. do j = i+1, n
063. xx = xi-x(j)
064. yy = yi-y(j)
065. zz = zi-z(j)
066. call mic (xx,yy,zz,aL,aL,aL)
067. r2 = xx**2+yy**2+zz**2
068. if (r2 .lt. cut2) then
069. r1 = 1/r2
070. r6 = r1**3
071. pot=4*r6*(r6-1)
072. u = u+pot
073. epot=epot+pot
074. rr = 48*r6*r1*(r6-0.5d0)
075. fxx = rr*xx
076. fyy = rr*yy
077. fzz = rr*zz
078. ap = ap+rr*r2
079. fx(i) = fx(i)+fxx
080. fy(i) = fy(i)+fyy
081. fz(i) = fz(i)+fzz
082. fx(j) = fx(j)-fxx
083. fy(j) = fy(j)-fyy
084. fz(j) = fz(j)-fzz
085. end if
086. end do
087. end do
088.
089. ekin=0
090. do i=1,n
091. vxi = vx(i)+dt*fx(i)
092. vyi = vy(i)+dt*fy(i)
093. vzi = vz(i)+dt*fz(i)
094. vxx = 0.5d0*(vxi+vx(i))
095. vyy = 0.5d0*(vyi+vy(i))
096. vzz = 0.5d0*(vzi+vz(i))
097. en = vxx**2+vyy**2+vzz**2
098. ekin = ekin+en
099. ec = ec+en
100. vx(i) = vxi
101. vy(i) = vyi
102. vz(i) = vzi
103. x(i) = x(i)+dt*vx(i)
104. y(i) = y(i)+dt*vy(i)
105. z(i) = z(i)+dt*vz(i)
106. end do
107.
108. call gdr (n,x,y,z,aL,aL,aL) ! pass n to match the subroutine interface
109.
110. end do
111.
112. temp =ec/(3*n*npasos)
113. u = u/(n*npasos)
114. ap = rho*temp+ap/(3*aL**3*npasos)
115.
116. write(*,*) 'temperature=',temp
117. write(*,*) 'energy=',u
118. write(*,*) 'pressure=',ap
119.
120. do k=1,ngr
121. r=(k-1)*hgr+hgr/2d0
122. vol=4*pi/3*((r+hgr/2d0)**3-(r-hgr/2d0)**3)
123. grr=gr(k)/(n*(n-1)/aL**3*npasos*vol)
124. write(1,*) r,grr
125. end do
126.
127. end

The code is quite similar to that of a Monte Carlo simulation, except that one has to take into account the velocities and update coordinates and velocities according to the Verlet algorithm (in the leap-frog version). Coordinates are initialised in line 019, using the same subroutine as in the Monte Carlo code for hard spheres (which generates an fcc lattice). Here we also have to generate values for the initial velocities. This is done in lines 020-030 by assigning random directions to the velocity vectors, with a constant modulus obtained from the initial temperature [vr, line 021; this relation follows from Eqn. (13)]. The time interval h, called dt in the code, is set to 0.01 in line 037. The histogram for the radial distribution function is accumulated in the array gr, shared through a common block. In line 047 the loop over MD steps is opened. The forces on the particles, the potential energy and the virial are computed in lines 058-087. Particle coordinates and velocities are updated in lines 090-106, and then the histogram for g(r) is accumulated for the current MD step. Finally, the average temperature temp, potential energy u and pressure ap (the latter obtained from the virial) are calculated and printed out, and the radial distribution function is normalised and dumped to file fort.1.
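The normalisation applied in lines 120-125 amounts to

\[
g(r_k) = \frac{H(r_k)}{\dfrac{N(N-1)}{V}\, n_{\mathrm{steps}}\, \Delta v_k},
\qquad
\Delta v_k = \frac{4\pi}{3}\left[\left(r_k+\frac{\Delta}{2}\right)^3 - \left(r_k-\frac{\Delta}{2}\right)^3\right],
\]

where H(r_k) is the count accumulated in gr(k), r_k is the centre of bin k, Δ = hgr is the bin width and Δv_k the volume of the corresponding spherical shell. Dividing the raw counts by the number of pairs per unit volume, the number of steps and the shell volume (the combinatorial factor matches the way the histogram subroutine counts pairs) produces a g(r) that tends to 1 at large distances, as seen in Fig. 29.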


Let us discuss some results generated by this code. In Fig. 26 kinetic, potential and total energies are represented as a function of MD steps, i.e. simulation time (with a time step of h = 0.01 in reduced units of τ). This corresponds to a simulation at fluid conditions, but starting with a crystalline configuration with random velocity vectors of the same moduli. We can see how both kinetic and potential contributions fluctuate quite substantially, while the total energy remains constant (except for some initial time where the system readjusts itself due to the quite atypical configuration that we started from). Note that the total energy does fluctuate, but with a much lower amplitude than the kinetic and potential contributions separately (see bottom panel). We can conclude that the Verlet algorithm is energy-conserving and works well.

Fig. 27 shows the evolution of the temperature; this is the same simulation as before, and the time behaviour of the temperature follows that of the kinetic energy (since they are proportional). Note that the initial temperature of kT/ε = 2 decreases to an average of 1.12 along the run, as a result of the internal readjustments of the different contributions to the energy.
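Since the run drifts away from its initial temperature, a target temperature can only be imposed by intervening in the dynamics. A common remedy during equilibration (not implemented in code P10) is to rescale all velocities so that Eqn. (13) holds with the desired T*. The sketch below uses hypothetical names (rescal, Ttar) for illustration:

      subroutine rescal (vx, vy, vz, n, Ttar)
!     Sketch: rescale velocities towards a target reduced
!     temperature Ttar, using sum |v*|**2 = 3 N T* from Eqn. (13).
!     Not part of code P10; name and arguments are hypothetical.
      implicit real*8(a-h,o-z)
      dimension vx(n), vy(n), vz(n)
      sum2 = 0d0
      do i = 1, n
        sum2 = sum2 + vx(i)**2 + vy(i)**2 + vz(i)**2
      end do
      s = dsqrt(3d0*n*Ttar/sum2)
      do i = 1, n
        vx(i) = s*vx(i)
        vy(i) = s*vy(i)
        vz(i) = s*vz(i)
      end do
      return
      end

Calling such a routine every few steps during equilibration, and switching it off afterwards, leaves the production run as a genuine constant-energy simulation.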

In Fig. 28 the fluctuating part of the pressure (without the ideal-gas contribution ρkT), i.e. the virial, is plotted against MD time. The horizontal line is the average value over 2000 MD steps. We can see that the virial is a highly fluctuating quantity; therefore, in order to obtain a reliable value for the pressure, sufficiently long simulations have to be run.

Figure 28: Time evolution of the virial part of the pressure (reduced virial vs. MD steps) for the same experiment as in Fig. 26. The horizontal line is the average value over 2000 MD steps.
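How long is 'sufficiently long' can be quantified by attaching an error bar to the average. A standard tool is block averaging (see e.g. the book by Frenkel and Smit). The subroutine below is a sketch that assumes the instantaneous virial has been stored at every MD step in an array a(1..n); code P10 does not store this series, and the name blkavg is hypothetical:

      subroutine blkavg (a, n, nb, avg, err)
!     Sketch: block-average estimate of the mean avg, and of its
!     standard error err, for a correlated time series a(1..n)
!     split into nb blocks (nb >= 2).
      implicit real*8(a-h,o-z)
      dimension a(n)
      lb  = n/nb                    ! block length
      avg = 0d0
      var = 0d0
      do ib = 1, nb
        b = 0d0
        do i = (ib-1)*lb+1, ib*lb
          b = b + a(i)
        end do
        b   = b/lb                  ! mean of block ib
        avg = avg + b
        var = var + b**2
      end do
      avg = avg/nb
      var = var/nb - avg**2         ! variance of block means
      err = dsqrt(dmax1(var,0d0)/(nb-1))
      return
      end

The estimate is only reliable when the blocks are longer than the correlation time of the series; in practice one checks that err stabilises as the number of blocks is decreased.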

Finally, in Fig. 29, the radial distribution function is represented for the same conditions; in this case an initial period of 1000 MD steps for equilibration has been run, to give the system time to reach typical (fluid, i.e. globally disordered) configurations. The averaging time interval was 2000 MD steps. The curve has the usual features of the radial distribution function of a fluid phase: a pronounced first peak and damped oscillatory behaviour at larger distances.

Figure 29: Radial distribution function for a Lennard-Jones fluid at conditions ρσ³ = 0.8 and kT/ε = 1.12.


LIST OF COMPUTER CODES

P1. Random number generator (page ???)
P2. Metropolis et al. algorithm for f(x) = c exp(−x**4)
P3. Random walk
P4. Algorithm for the SARW model
P5. Clustering algorithm (in the course web page)
P6. 2D Ising model Monte Carlo with importance sampling
P7. Evaluation of B3 by Monte Carlo integration
P8. Free volume for the fcc lattice
P9. MC simulation of hard spheres
P10. MD for Lennard-Jones particles
