
1

Computational Statistical Physics

FS 2017

402-0812-00L

Friday 10.45 – 12.30 in HIT H51. Exercises: Friday 8.45 – 10.30 in HIT F21

Oral exam

www.ifb.ethz.ch/education/msc-courses/ msc-computational-statphys.html

2

Degree programs (Studiengänge)

• Mathematics, Computer Science (Bachelor)
• Mathematics, Computer Science (Master)
• Physics (elective)
• Material Science (special lecture, Master)
• Civil Engineering (special lecture)

3

Where do you find us? Computational Physics for Building Materials (IfB)

Mirko Lukovic: HIF E 28.2
Miller Mendoza: HIT G 23.5
Malte Henkel: HIF E 23.1

4

Computational Physics II

Computational Quantum Physics, 402-0810, G. Carleo. Tuesday morning: lectures Tue 10-12, exercises Tue 13-15.

Computational Statistical Physics, 402-0812, M. Lukovic, M. Mendoza Jimenez and M. Henkel. Friday morning: lectures Fri 11-13, exercises Fri 9-11.

Molecular and Materials Modelling, 327-5102, D. Passerone and C. Pignedoli. Friday afternoon: lectures Fri 14-16, exercises Fri 16-18.

5

Computational Quantum Physics

Giuseppe Carleo

Tuesday morning: lectures Tue 10-12, exercises Tue 13-15

One-particle quantum mechanics: scattering problem, time evolution, shooting technique, Numerov algorithm

6

Computational Quantum Physics

Many-particle systems: Fock space, etc. (≈ 2 weeks theory); Hartree-Fock approximation; density functional theory and electronic structure (He & H2); strongly correlated electrons, Hubbard and t-J models

7

Computational Quantum Physics

Lanczos method, path-integral Monte Carlo, bosonic world lines, variational Monte Carlo, density-matrix renormalization group, fermions, QFT

8

Molecular & Materials Modelling

Daniele Passerone and Carlo Pignedoli. Friday afternoon: lectures Fri 14-16, exercises Fri 16-18

Empirical potentials and transition rates; bio-force fields, charges, peptides; embedded-atom models, Wulff's theorem; pair-correlation function with MD for neutron scattering

9

Melting temperature from phase coexistence MO-theory, basic SCF, chemical reactions Density functional theory, pseudopotentials DFT on realistic systems, hybrids Linear scaling, GPW Electronic spectroscopies, STM Bandstructure, graphene, free energies

Molecular & Materials Modelling

13

Plan of this course

• 24.02. Statistical Physics, recapitulation MC
• 03.03. Multi-spin coding, dynamical scaling
• 10.03. Glauber and Kawasaki dynamics
• 17.03. Microcanonical simulations, Binder cumulants, 1st-order transitions (Potts)
• 24.03. Cluster algorithms, histogram methods
• 31.03. MC Renormalization Group, parallelization and vectorization

14

Plan of this course

• 07.04. Molecular Dynamics, Verlet scheme
• 14.04. and 21.04. ETH vacations
• 28.04. Linked cell, Ewald sums, particle-mesh
• 05.05. Reaction field, Lagrange multipliers, rigid bodies, quaternions
• 12.05. Nosé-Hoover thermostat, stochastic method, constant-pressure ensemble
• 19.05. Event-driven, inelastic collisions, friction
• 26.05. Contact dynamics
• 02.06. Ab initio MD, Car-Parrinello

15

Prerequisites

• Introduction to Computational Physics
• Ability to work with UNIX
• Making of graphical plots
• Some experience with C++ (or similar)
• Statistical analysis (averaging, distributions)
• Basic statistical physics

16

Literature

• H. Gould and J. Tobochnik: "Introduction to Computer Simulation Methods" (Addison-Wesley, 1996)
• D. Landau and K. Binder: "A Guide to Monte Carlo Simulations in Statistical Physics" (Cambridge, 2000)
• D. Stauffer, F.W. Hehl, V. Winkelmann and J.G. Zabolitzky: "Computer Simulation and Computer Algebra" (Springer, 1988)
• K. Binder and D.W. Heermann: "Monte Carlo Simulation in Statistical Physics" (Springer, 1997)
• N.J. Giordano: "Computational Physics" (Pearson, 1996)
• J.M. Thijssen: "Computational Physics" (Cambridge, 1999)
• M.P. Allen and D.J. Tildesley: "Computer Simulation of Liquids" (Oxford, 1987)

19

Classical Statistical Mechanics

We consider a many-body system of N classical particles i, each having n degrees of freedom $p_i^{(j)}$ (discrete or continuous). One configuration X is given by $X = \{p_i^{(j)}\}$, $i = 1,\dots,N$, $j = 1,\dots,n$. The set of all possible configurations is called the "phase space".

20

Hamiltonian

The time evolution of the system is described by a Hamiltonian H (which should not explicitly depend on time) through the Liouville equation:

$$\frac{\partial \rho(X,t)}{\partial t} = -\{H, \rho\}$$

where ρ is the distribution of configurations.

21

Thermal equilibrium

The steady state of this equation,

$$\frac{\partial \rho}{\partial t} = 0,$$

defines the "thermal equilibrium". The thermal average of a quantity Q is

$$\langle Q \rangle = \frac{1}{\Omega} \sum_X Q(X)\,\rho(X)$$

where Ω is the volume of the phase space.

22

Ensembles

• Microcanonical ensemble: fix E, V, N • Canonical ensemble: fix T, V, N • Grandcanonical ensemble: fix T, V, μ • Canonical pressure ensemble: fix T, p, N

We can fix either volume V or pressure p, either energy E or temperature T, either particle number N or chemical potential μ, either magnetization M or magnetic field H.

23

Microcanonical Ensemble

The energy E(X) of configuration X is fixed, and the probability for the system to be in X is equal for all configurations with this energy:

$$p_{eq}(X) = \frac{1}{Z_{mc}}\,\delta\!\left(H(X) - E\right)$$

$Z_{mc}$ is the partition function:

$$Z_{mc} = \sum_X \delta\!\left(H(X) - E\right) = \mathrm{Tr}\left[\delta\!\left(H(X) - E\right)\right]$$

24

Canonical Ensemble

The temperature T is fixed, and the probability for the system to be in a configuration X with energy E(X) is given by the Boltzmann factor:

$$p_{eq}(X) = \frac{1}{Z_T}\,e^{-E(X)/kT}, \qquad \sum_X p_{eq}(X) = 1$$

$Z_T$ is the partition function:

$$Z_T = \sum_X e^{-E(X)/kT}$$

The thermal average of a quantity Q is:

$$\langle Q \rangle_T = \frac{1}{Z_T} \sum_X Q(X)\, e^{-E(X)/kT}$$

25

The Ising Model

• Magnetic Systems • Opinion models • Binary mixtures

Ernst Ising (1900-1998)

Spins on a lattice

26

The Ising Model

Binary variables

$$\sigma_i = \pm 1, \quad i = 1,\dots,N$$

on a graph of N sites interacting via the Hamiltonian (⟨i,j⟩ denoting nearest-neighbor pairs):

$$E = \mathcal{H} = -J \sum_{\langle i,j \rangle}^{N} \sigma_i \sigma_j - H \sum_{i=1}^{N} \sigma_i$$

27

Order parameter

Spontaneous magnetization:

$$M_s(T) = \lim_{H \to 0^+} \lim_{N \to \infty} \frac{1}{N} \left\langle \sum_i \sigma_i \right\rangle$$

Below the critical temperature $T_c$ the system is in the ordered phase, above it in the disordered phase, and near $T_c$:

$$M_s \propto (T_c - T)^{\beta}, \qquad \beta = 1/8 \;(2d), \quad \beta \approx 0.326 \;(3d)$$

28

Response functions

Susceptibility:

$$\chi(T) = \left.\frac{\partial M}{\partial H}\right|_{H=0,\,T} \propto |T - T_c|^{-\gamma}$$

Specific heat:

$$C_V(T) = \left.\frac{\partial E}{\partial T}\right|_{V,H} \propto |T - T_c|^{-\alpha}$$

Both diverge at $T_c$.

29

Response as fluctuation

Derive the fluctuation-dissipation theorem for the susceptibility:

$$M(T,H) = \frac{1}{N}\left\langle \sum_{i=1}^{N} \sigma_i \right\rangle_H = \frac{1}{N}\,\frac{1}{Z(H)}\sum_X \Big(\sum_i \sigma_i\Big)\, e^{-\beta \mathcal{H}(X)}$$

with

$$\mathcal{H}(X) = -J \sum_{\langle i,j \rangle} \sigma_i \sigma_j - H \sum_i \sigma_i, \qquad \beta = \frac{1}{kT}$$

and

$$\chi(T) = \left.\frac{\partial M(T,H)}{\partial H}\right|_{H=0}$$

30

Fluctuation-dissipation theorem

Carrying out the derivative of M with respect to H gives

$$\chi(T) = \frac{\beta}{N}\left[\left\langle \Big(\sum_i \sigma_i\Big)^2 \right\rangle - \left\langle \sum_i \sigma_i \right\rangle^2\right] = \beta N \left(\langle M^2 \rangle - \langle M \rangle^2\right)$$

$$\Rightarrow \chi(T) \ge 0$$

Analogously one can show for the specific heat:

$$C_V = k_B \beta^2 \left(\langle E^2 \rangle - \langle E \rangle^2\right)$$

These formulas are used in MC simulations.
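These fluctuation formulas translate directly into estimators over recorded MC samples. A minimal sketch in plain Python (not the course code; `mags` holds per-site magnetization samples M and `energies` holds total-energy samples, with k_B = 1):

```python
def mean(xs):
    return sum(xs) / len(xs)

def susceptibility(mags, beta, n_sites):
    # chi = beta * N * (<M^2> - <M>^2), M being the magnetization per site
    return beta * n_sites * (mean([m * m for m in mags]) - mean(mags) ** 2)

def specific_heat(energies, beta):
    # C_V = k_B * beta^2 * (<E^2> - <E>^2), with k_B = 1
    return beta ** 2 * (mean([e * e for e in energies]) - mean(energies) ** 2)
```

Both estimators are variances times positive prefactors, which makes the bound χ(T) ≥ 0 manifest.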

31

Specific heat

$$C_v(T) \propto |T - T_c|^{-\alpha}, \qquad \alpha = 0 \;(2d), \quad \alpha \approx 0.11 \;(3d)$$

(Figure: comparison with experimental data for a binary mixture.)

32

Susceptibility

$$\chi(T) \propto |T - T_c|^{-\gamma}, \qquad \gamma = 7/4 \;(2d), \quad \gamma \approx 1.24 \;(3d)$$

(Figure: numerical data from a finite system.)

33

Correlation length

Correlation function:

$$C(R) = \langle \sigma(0)\,\sigma(R) \rangle$$

For T ≠ T_c and large R:

$$C(R) \propto M^2 + a\,e^{-R/\xi}$$

where ξ is the correlation length. (Figure: configurations for T > T_c and T < T_c.)

34

Correlation length

The correlation length diverges at $T_c$ as

$$\xi \propto |T - T_c|^{-\nu}$$

with a critical exponent ν: ν = 1 (2d), ν ≈ 0.63 (3d).

At $T_c$ we have for large R:

$$C(R) \propto R^{-(d - 2 + \eta)}$$

with η = 1/4 (2d), η ≈ 0.05 (3d).

35

Exponent relations

The exponents are related through

$$\alpha + 2\beta + \gamma = 2, \qquad \gamma = \nu\,(2 - \eta) \qquad \text{(scaling)}$$

$$2 - \alpha = d\,\nu \qquad \text{(hyperscaling)}$$

so that only two exponents are independent.
H.E. Stanley, "Introduction to Phase Transitions and Critical Phenomena" (Clarendon, Oxford, 1971)

36

Monte Carlo Method

Nicholas Constantine Metropolis

Stanislaw Ulam

37

Monte Carlo Method (MC)

Simulates an experimental measuring process by sampling and averaging. Big advantage: systematic improvement by increasing the number of samples M. The error decreases as

$$\Delta \propto \frac{1}{\sqrt{M}}$$

38

MC strategy

• Choose randomly a new configuration.

• If the „equilibrium condition“ is not fulfilled then reject, otherwise accept.

• Calculate physical properties and add to the averaging loop.

39

Problem of sampling

$$\langle Q \rangle(T) = \sum_X Q(X)\, p_{eq}(X)$$

The distribution of the energy around the average ⟨E⟩ gets sharper with increasing system size, so choosing configurations distributed equally over energy would be very inefficient.

40

M(RT)2 algorithm

N.C. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller (1953)

Importance sampling through a Markov chain X_1 → X_2 → … which converges to a steady state in which the probability for a configuration X is p_st(X).

Markov chain: X_i only depends on X_{i-1}.

41

Properties of the Markov chain

Start in configuration X and propose a new configuration Y with probability T(X→Y).

1. Ergodicity: one must be able to reach any configuration Y after a finite number of steps.
2. Normalization: $\sum_Y T(X \to Y) = 1$
3. Reversibility: $T(X \to Y) = T(Y \to X)$

42

Transition probability

The proposed configuration Y is accepted with probability A(X→Y). The transition probability of the Markov chain is:

$$W(X \to Y) = T(X \to Y) \cdot A(X \to Y)$$

Master equation:

$$\frac{dp(X,t)}{dt} = \sum_Y p(Y)\,W(Y \to X) - \sum_Y p(X)\,W(X \to Y)$$

43

Properties of W(X→Y)

• Ergodicity: $\forall X, Y: \; W(X \to Y) > 0$
• Normalization: $\sum_Y W(X \to Y) = 1$
• Homogeneity: $\sum_Y p_{st}(Y)\,W(Y \to X) = p_{st}(X)$

44

Detailed balance

In the stationary state one should obtain the equilibrium (Boltzmann) distribution:

$$\frac{dp(X,t)}{dt} = \sum_Y p(Y)\,W(Y \to X) - \sum_Y p(X)\,W(X \to Y)$$

$$\frac{dp(X,t)}{dt} = 0 \;\Leftrightarrow\; p_{st}(X) = p_{eq}(X)$$

$$\Rightarrow \sum_Y p_{eq}(Y)\,W(Y \to X) = \sum_Y p_{eq}(X)\,W(X \to Y)$$

A sufficient condition is detailed balance:

$$p_{eq}(Y)\,W(Y \to X) = p_{eq}(X)\,W(X \to Y)$$

46

Metropolis (M(RT)²)

$$A(X \to Y) = \min\left(1, \frac{p_{eq}(Y)}{p_{eq}(X)}\right)$$

With the Boltzmann distribution

$$p_{eq}(X) = \frac{1}{Z_T}\, e^{-E(X)/kT}$$

this becomes

$$A(X \to Y) = \min\left(1, e^{-(E(Y) - E(X))/kT}\right) = \min\left(1, e^{-\Delta E/kT}\right)$$

If the energy decreases, always accept; if it increases, accept with probability $e^{-\Delta E/kT}$.

47

MC of the Ising Model

Single-flip Metropolis:

• Choose one site i (having spin σ_i).
• Calculate ΔE = E(Y) − E(X) = 2Jσ_i h_i.
• If ΔE ≤ 0 then flip the spin: σ_i → −σ_i.
• If ΔE > 0 flip with probability exp(−βΔE).

where h_i is the local field at site i:

$$h_i = \sum_{j \;nn\; of\; i} \sigma_j, \qquad E = -J \sum_{\langle i,j \rangle}^{N} \sigma_i \sigma_j$$
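The single-flip recipe can be sketched in a few lines of Python (a minimal illustration, not the course code; the lattice is stored as an L × L list of ±1 spins with periodic boundaries):

```python
import math
import random

def metropolis_sweep(spins, L, beta, J=1.0, rng=random):
    # One sweep of single-flip Metropolis for the 2d Ising model.
    for i in range(L):
        for j in range(L):
            # local field h_i = sum of the four nearest-neighbor spins
            h = (spins[(i - 1) % L][j] + spins[(i + 1) % L][j]
                 + spins[i][(j - 1) % L] + spins[i][(j + 1) % L])
            dE = 2.0 * J * spins[i][j] * h
            # accept if dE <= 0, otherwise with probability exp(-beta*dE)
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
```

At β = 0 every proposed flip is accepted, which is a convenient sanity check of the acceptance logic.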

48

Implementation on the computer

Look-up tables: consider the Ising model on a square lattice. The local field takes the values

$$h_i = \sum_{j \;nn\; of\; i} \sigma_j \in \{0, \pm 2, \pm 4\}$$

so that

$$\frac{\Delta E}{J} = 2\,\sigma_i h_i \in \{0, \pm 4, \pm 8\}$$

Since for ΔE ≤ 0 we accept the move with probability 1, we only need to store two values:

$$P(k) = e^{-4\beta J k} \quad \text{with} \quad k = \tfrac{1}{2}\,\sigma_i h_i \;\;(\text{i.e. } k = 1, 2)$$
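A sketch of such a look-up table in Python (β and J are illustrative parameters):

```python
import math

def boltzmann_table(beta, J=1.0):
    # Only dE = 4J (k = 1) and dE = 8J (k = 2) need a stored probability;
    # all moves with dE <= 0 are accepted with probability 1.
    return {k: math.exp(-4.0 * beta * J * k) for k in (1, 2)}
```

Note that P(2) = P(1)², so in fact a single exponential evaluation suffices per temperature.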

49

Multi-spin coding Technique to increase speed and reduce memory space for Boolean variables.

Consider Ising model on simple cubic lattice. There we have 6 nearest neighbors, i.e. the energies can have 7 different values (0,…,6). Therefore we need 3 bits per site.

One computer word has 64 bits.

50

Multi-spin coding

Use the bitwise logical functions, example: (0,1,0,0,1) XOR (1,1,0,1,1) = (1,0,0,1,0)

Store neighboring sites in different words Nj . Calculate energy of 21 sites simultaneously:

E = N XOR N1 + … + N XOR N6

Define i th site in a word (i = 1,…21): Ni = (0,…,0,1,0,…,0,0) ↑ ↑ ↑ ↑

64 3i -2 2 1 position

51

Multi-spin coding

Result: One updates 21 sites simultaneously and reduces memory requirement by a factor 21.

The „changer word“ cw is "1" if the spin is flipped and is "0" if the spin is not flipped.

7 = (0,…,0,1,1,1) is a mask to extract the last 3 bits of E through E & 7.

52

Multi-spin coding

cw = 0;
for (i = 1; i <= 21; i++) {
    z = ranf();
    if (z < P(E & 7)) cw = (cw | 1);
    cw = ror(cw, 3);
    E = ror(E, 3);
}
cw = ror(cw, 1);
N = (N ^ cw);

& = AND, | = OR, ^ = XOR, ror = circular right shift

53

Sampling

Each time we accept a spin flip we generate a new configuration, which is however very similar to the previous one. The samples in our Markov chain are therefore strongly correlated, but we need statistically uncorrelated configurations to compute averages! We also need to decorrelate from the initial configuration of the Markov chain.

54

Dynamic interpretation of MC

$$\frac{dp(X,t)}{dt} = \sum_Y p(Y)\,W(Y \to X) - \sum_Y p(X)\,W(X \to Y)$$

The time evolution of a quantity A is

$$\langle A \rangle(t) = \sum_X p(X,t)\,A(X)$$

Suppose the configuration at $t_0$ is not at equilibrium; then define the "non-linear correlation function":

$$\Phi_A^{nl}(t) = \frac{\langle A \rangle(t) - \langle A \rangle(\infty)}{\langle A \rangle(t_0) - \langle A \rangle(\infty)}$$

55

Non-linear correlation time

(Figure: ln Φ_A(t) versus t for T ≠ T_c is a straight line with slope −1/τ_A^{nl}.)

56

Non-linear correlation time

If, for example,

$$\Phi_A^{nl}(t) = e^{-t/\tau_A^{nl}},$$

define the non-linear correlation time as

$$\tau_A^{nl} \equiv \int_0^{\infty} \Phi_A^{nl}(t)\,dt$$

Critical slowing down:

$$\tau_A^{nl} \propto |T - T_c|^{-z_A^{nl}}$$

where $z_A^{nl}$ is the non-linear dynamical critical exponent. It describes the relaxation towards equilibrium.

57

Linear correlation function

For two quantities A and B in equilibrium define the "linear time correlation function":

$$\Phi_{AB}(t) = \frac{\langle A(t_0)\,B(t_0 + t) \rangle - \langle A \rangle \langle B \rangle}{\langle A B \rangle - \langle A \rangle \langle B \rangle}$$

with

$$\langle A(t_0)\,B(t_0 + t) \rangle = \sum_X p(X, t_0)\, A(X(t_0))\, B(X(t_0 + t))$$

58

Auto correlation function

If A = B we have an "autocorrelation", for example the spin-spin correlation:

$$\Phi_{\sigma}(t) = \frac{\langle \sigma(t_0)\,\sigma(t_0 + t) \rangle - \langle \sigma(t_0) \rangle^2}{\langle \sigma^2(t_0) \rangle - \langle \sigma(t_0) \rangle^2}$$

59

Time autocorrelation function

Φσ(t)

60

Linear correlation time

If, for example,

$$\Phi_{AB}(t) = e^{-t/\tau_{AB}},$$

define the linear correlation time as

$$\tau_{AB} \equiv \int_0^{\infty} \Phi_{AB}(t)\,dt$$

Critical slowing down:

$$\tau_{AB} \propto |T - T_c|^{-z_{AB}}$$

where $z_{AB}$ is the dynamical critical exponent. It describes relaxation in equilibrium.
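In practice Φ(t) is estimated from a recorded time series and τ from a discrete sum over it. A minimal sketch for the autocorrelation case (the cutoff `t_max` is an assumption of this illustration):

```python
def autocorrelation(xs, t):
    # Normalized autocorrelation Phi(t) of a time series xs
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    if var == 0.0:
        return 0.0
    cov = sum((xs[i] - mu) * (xs[i + t] - mu) for i in range(n - t)) / (n - t)
    return cov / var

def integrated_time(xs, t_max):
    # tau ~ 1/2 + sum_t Phi(t), a discrete version of the integral above
    return 0.5 + sum(autocorrelation(xs, t) for t in range(1, t_max + 1))
```

The sum must be cut off at some t_max well below the series length, otherwise the noisy tail of Φ(t) dominates the estimate.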

61

Dynamical critical exponents

Kinetic Ising model: z_σ = 2.16 (2d), z_σ = 2.09 (3d).

Conjectured relations between linear and non-linear exponents:

$$z_{\sigma}^{nl} = z_{\sigma} - \frac{\beta}{\nu}, \qquad z_E^{nl} = z_E - \frac{1 - \alpha}{\nu}$$

62

Finite size effects

A problem arises when the system size L is smaller than the correlation length ξ, i.e. close to the critical point. (Figure: the divergence at T_c is rounded off inside the critical region, whose width is set by L.)

63

Critical dynamics in finite sizes

At the temperature where the correlation length reaches the system size,

$$\xi = L \propto |T - T_c|^{-\nu},$$

the correlation time saturates:

$$\tau_{AB} \propto |T - T_c|^{-z_{AB}} \propto L^{z_{AB}/\nu}, \qquad \text{at } T_c: \quad \tau_{AB} = A\,L^{z_{AB}/\nu}$$

The number of discarded samples grows like a power law of the system size.

64

Decorrelated configurations

To calculate averages one wants configurations that are statistically uncorrelated.

• First, to reach equilibrium, throw away $n_0 = c\,\tau^{nl}(T)$ configurations.
• Then take only every $n_e$-th configuration, with $n_e = c\,\tau(T)$.
• At $T_c$ use: $n_0 = cA\,L^{z^{nl}/\nu}$ and $n_e = cA\,L^{z/\nu}$.

c ≈ 3 is a safety factor.

66

Glauber dynamics

Roy J. Glauber (1963) (Nobel prize 2005)

$$A(X \to Y) = \frac{e^{-\Delta E/kT}}{1 + e^{-\Delta E/kT}}$$

This acceptance probability also fulfills detailed balance.

67

Glauber dynamics

For the Ising model, with local field $h_i = \sum_{j \;nn\; of\; i} \sigma_j$, the acceptance probability for flipping spin σ_i becomes

$$A_i \equiv A(\sigma_i \to -\sigma_i) = \frac{e^{-2\beta J \sigma_i h_i}}{1 + e^{-2\beta J \sigma_i h_i}}$$

Defining

$$p_i \equiv \frac{e^{2\beta J h_i}}{1 + e^{2\beta J h_i}},$$

the spin-flip probability is

$$p_{flip}(A_i) = \begin{cases} p_i & \text{for } \sigma_i = -1 \\ 1 - p_i & \text{for } \sigma_i = +1 \end{cases} \qquad p_{no\,flip}(A_i) = \begin{cases} 1 - p_i & \text{for } \sigma_i = -1 \\ p_i & \text{for } \sigma_i = +1 \end{cases}$$

Implementation using a random number 0 < z < 1:

$$\sigma_i(t+1) = -\sigma_i(t) \cdot \mathrm{sign}(A_i - z)$$

68

Heat bath method

Choose a site i and set, independently of its previous value:

$$\sigma_i = \begin{cases} +1 & \text{with probability } p_i \\ -1 & \text{with probability } 1 - p_i \end{cases} \qquad \text{with} \quad p_i = \frac{e^{2\beta J h_i}}{1 + e^{2\beta J h_i}}$$

This is equivalent to Glauber dynamics.
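A sketch of the heat-bath update in Python (an illustration under the convention above; `neighbors[i]` is an assumed adjacency structure, and p_i is evaluated in a numerically stable form to avoid overflow of the exponential):

```python
import math
import random

def heat_bath_update(spins, i, neighbors, beta, J=1.0, rng=random):
    # local field at site i
    h = sum(spins[j] for j in neighbors[i])
    x = 2.0 * beta * J * h
    # p_i = e^x / (1 + e^x), written so that exp() never overflows
    if x >= 0.0:
        p_up = 1.0 / (1.0 + math.exp(-x))
    else:
        ex = math.exp(x)
        p_up = ex / (1.0 + ex)
    # set the new value independently of the old one
    spins[i] = 1 if rng.random() < p_up else -1
```

For large β the update deterministically aligns the spin with its local field, as expected from the limit p_i → 1 (or 0).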

69

Binary mixtures (lattice gas)

Consider two species A and B distributed with given concentrations on the sites of a lattice. EAA is energy of A-A bond EBB is energy of B-B bond EAB is energy of A-B bond

Set EAA = EBB = 0 and EAB = 1.

⇒ Ising model with J = 1 and constant M.

70

Kawasaki dynamics

Kyozi Kawasaki

• Choose any A-B bond.
• Calculate ΔE for the exchange A-B → B-A.
• Metropolis: if ΔE ≤ 0 exchange, else exchange with p = exp(−βΔE).
• Glauber: exchange with probability p = exp(−βΔE)/(1 + exp(−βΔE)).

zσ = 2.32 (2d) for Ising model

71

Kawasaki dynamics

Link to the applet

72

Microcanonical Monte Carlo

Keep energy constant.

• Creutz algorithm
• Q2R
• Kadanoff-Swift

73

Creutz algorithm

Michael Creutz (1983)

Introduce a small energy reservoir Ed called „demon“ which can store a maximum energy Emax .

• Choose randomly a site.
• Calculate ΔE for a spin flip.
• Accept the flip if:

$$E_{max} \ge E_d - \Delta E \ge 0$$

74

Creutz algorithm

The algorithm is deterministic, i.e. it needs no random numbers, and reversible, i.e. there exist no transients. It is perfect for multi-spin coding and for parallelization; in that case use one demon per processor (e.g. 64).

Obtain temperature T through the histogram P(Ed) of the energies Ed of the demons since it should follow a Boltzmann distribution:

$$P(E_d) \propto e^{-E_d/kT}$$

75

Q2R

The case E_max = 0 of the Creutz algorithm on the square lattice, G. Vichniac (1984): a totalistic cellular automaton with σ_ij = 1, 0:

$$\sigma_{ij}(t+1) = \sigma_{ij}(t) \oplus f(x_{ij}), \qquad x_{ij} = \sigma_{i-1,j} + \sigma_{i+1,j} + \sigma_{i,j-1} + \sigma_{i,j+1}$$

$$f(x) = \begin{cases} 1 & \text{if } x = 2 \\ 0 & \text{if } x \ne 2 \end{cases}$$

76

Q2R

The rule can also be expressed as a logical function of the four neighbor spins σ_1, σ_2, σ_3, σ_4:

$$\sigma(t+1) = \sigma(t) \oplus \left[\big((\sigma_1 \oplus \sigma_2) \wedge (\sigma_3 \oplus \sigma_4)\big) \vee \big((\sigma_1 \oplus \sigma_3) \wedge (\sigma_2 \oplus \sigma_4)\big)\right]$$

It is deterministic and reversible, and the energy

$$E = \sum_{nn} \sigma_i \oplus \sigma_j$$

is a conserved quantity. 64 updates in about 12 cycles!
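The logical expression can be checked directly against the "exactly two of four neighbors" condition. A small sketch with 0/1 bits:

```python
def q2r_flip(sigma, s1, s2, s3, s4):
    # Flip (XOR with 1) exactly when two of the four neighbors are 1:
    # ((s1 xor s2) and (s3 xor s4)) or ((s1 xor s3) and (s2 xor s4))
    two_of_four = ((s1 ^ s2) & (s3 ^ s4)) | ((s1 ^ s3) & (s2 ^ s4))
    return sigma ^ two_of_four
```

Enumerating all 16 neighbor configurations confirms that the expression is 1 precisely for the 6 configurations with x = 2, which is what makes the bitwise multi-spin implementation possible.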

77

Implementation of Q2R

Divide the lattice into two sub-lattices σ and σ̂ and update them alternately:

$$\sigma_i(t+1) = \sigma_i(t) \oplus f\Big(\sum_{j \;nn\; of\; i} \hat{\sigma}_j(t)\Big), \qquad \hat{\sigma}_j(t+1) = \hat{\sigma}_j(t) \oplus f\Big(\sum_{i \;nn\; of\; j} \sigma_i(t+1)\Big)$$

Use multi-spin coding to implement this reversible bitwise logical automaton. It is, however, not ergodic: for a lattice of N sites one finds on the order of $2^{0.27N}$ cycles of length $2^{0.73N}$.

78

Boundary conditions

• Open boundaries: no neighbor
• Fixed boundaries: neighbor with fixed spin
• Periodic boundaries: define index vectors for a finite L × L system:

$$IP(i) = \begin{cases} i + 1 & \text{if } i < L \\ 1 & \text{if } i = L \end{cases} \qquad IM(i) = \begin{cases} i - 1 & \text{if } i > 1 \\ L & \text{if } i = 1 \end{cases}$$

79

Helical boundary conditions

Index the system as a one-dimensional string:

$$k = i + (j - 1)\,L$$

The neighbors of site k are then k ± 1 and k ± L (taken modulo the total number of sites).
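The helical indexing can be sketched as follows (a minimal illustration; the modulo wrap at the ends is the helical boundary condition):

```python
def helical_neighbors(k, L):
    # Neighbors of site k on an L x L lattice stored as one 1d array of
    # N = L*L sites with helical boundary conditions: k +- 1 and k +- L,
    # all taken modulo N.
    N = L * L
    return [(k + 1) % N, (k - 1) % N, (k + L) % N, (k - L) % N]
```

Every site gets exactly four neighbors and the relation is symmetric, which is all a nearest-neighbor update loop needs.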

80

Finite size scaling

V.Privman (ed.) «Finite-Size Scaling and Numerical Simulations of Statistical Systems» (World Scientific, Singapore, 1990) K.Binder in Festkörperprobleme (Advances in Solid State Physics), Vol. 26, ed. P. Grosse, (Vieweg, Braunschweig, 1986), p. 133

81

Tc for finite sizes

The effective critical temperature $T_1$ of a finite system is where the correlation length reaches the system size:

$$\xi(T_1) = L \propto |T_1 - T_c|^{-\nu}$$

The size of the critical region scales in the same way,

$$|T_1 - T_2| \propto L^{-1/\nu}, \qquad T_2 - T_1 \approx 2\,(T_1 - T_c),$$

so the effective critical temperature shifts as

$$T_{eff}(L) = T_c\left(1 - a\,L^{-1/\nu}\right)$$

83

χ for finite sizes

84

Finite size scaling for χ

Maximum of χ at $T_c$:

$$\chi_{max}(L) \propto L^{\gamma/\nu}$$

(Figure: log-log plot of χ_max versus L with slope γ/ν.)

85

Finite size scaling for χ

$$\chi(T, L) = L^{\gamma/\nu}\, \mathcal{F}_{\chi}\!\left[(T - T_c)\,L^{1/\nu}\right]$$

where F_χ is a universal scaling function. Plotting $\chi\,L^{-\gamma/\nu}$ against $(T - T_c)\,L^{1/\nu}$ produces a data collapse, with slope −γ at large |T − T_c|.
A.E. Ferdinand and M.E. Fisher (1967)

86

Binder cumulant

Kurt Binder (1981)

$$U_L \equiv 1 - \frac{\langle M^4 \rangle_L}{3\,\langle M^2 \rangle_L^2}$$

(Figure: CCP 2006, Korea)

87

Binder cumulants

Since the moments scale as

$$\langle M^4 \rangle_L = L^{-4\beta/\nu}\, \mathcal{F}_4\!\left[(T - T_c)\,L^{1/\nu}\right], \qquad \langle M^2 \rangle_L = L^{-2\beta/\nu}\, \mathcal{F}_2\!\left[(T - T_c)\,L^{1/\nu}\right],$$

the cumulant is a function of the scaling variable alone:

$$U_L = 1 - \frac{\mathcal{F}_4\!\left[(T - T_c)\,L^{1/\nu}\right]}{3\,\mathcal{F}_2^2\!\left[(T - T_c)\,L^{1/\nu}\right]} = \mathcal{F}_C\!\left[(T - T_c)\,L^{1/\nu}\right]$$

At $T_c$ this quantity is independent of L.
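Estimating U_L from recorded magnetization samples is a one-liner; a minimal sketch:

```python
def binder_cumulant(mags):
    # U_L = 1 - <M^4> / (3 <M^2>^2) from magnetization samples
    n = len(mags)
    m2 = sum(m ** 2 for m in mags) / n
    m4 = sum(m ** 4 for m in mags) / n
    return 1.0 - m4 / (3.0 * m2 ** 2)
```

Two sharp peaks at ±M_s give the ordered-phase value 2/3, independently of the peak position, which illustrates why U_L is a convenient scale-free quantity.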

88

Binder cumulants

T > $T_c$: the magnetization follows a Gaussian distribution,

$$P_L(M) = \sqrt{\frac{L^d}{2\pi\sigma_L^2}}\; e^{-M^2 L^d/(2\sigma_L^2)}, \qquad \sigma_L^2 = k_B T\, \chi_L$$

$$\Rightarrow \langle M^4 \rangle_L = 3\,\langle M^2 \rangle_L^2 \;\Rightarrow\; U_L = 0$$

89

Binder cumulants

T < $T_c$: the distribution of the magnetization is the superposition of two Gaussians centered at ±M_s:

$$P_L(M) = \frac{1}{2}\sqrt{\frac{L^d}{2\pi\sigma_L^2}} \left( e^{-(M - M_s)^2 L^d/(2\sigma_L^2)} + e^{-(M + M_s)^2 L^d/(2\sigma_L^2)} \right)$$

$$\Rightarrow \langle M^4 \rangle_L = \langle M^2 \rangle_L^2 \;\Rightarrow\; U_L = 2/3$$

M. Rovere et al., J. Phys.: Condens. Matter 2, 7009 (1990)

90

Binder cumulants

$$U_L = 1 - \frac{\langle M^4 \rangle_L}{3\,\langle M^2 \rangle_L^2} \;\xrightarrow{\;L \to \infty\;}\; \begin{cases} 0 & \text{for } T > T_c \\ 2/3 & \text{for } T < T_c \end{cases}$$

91

Binder cumulants

92

Corrections to scaling

$$M(T) = A\,(T_c - T)^{\beta} + A_1\,(T_c - T)^{\beta_1} + \dots \qquad \text{with } \beta_1 > \beta$$

$$\xi(T) = C\,|T - T_c|^{-\nu} + C_1\,|T - T_c|^{-\nu_1} + \dots \qquad \text{with } \nu_1 < \nu$$

$$M(T, L) = L^{-\beta/\nu}\, \mathcal{F}_M\!\left[(T - T_c)\,L^{1/\nu}\right] + L^{-x}\, \mathcal{F}_M^1\!\left[(T - T_c)\,L^{1/\nu}\right] + \dots$$

where x is a universal correction-to-scaling exponent determined by β, β_1, ν and ν_1.

93

First order transition

For T < T_c the Ising model has at H = 0 a phase transition in H of first order. This means that one has a jump in the magnetization ΔM and the entropy ΔS, and a latent heat ΔE. Consequently, susceptibility and specific heat exhibit delta functions at the transition.

94

First order transition

We have hysteresis, and for small systems the magnetization jumps after the "ergodic time" T_e. (Figure: M versus time.)

95

FSS of 1st order transition

Consider times much larger than T_e. The distribution of the magnetization is the superposition of two Gaussians:

$$P_L(M) = \frac{1}{2}\sqrt{\frac{L^d}{2\pi\sigma_L^2}} \left( e^{-(M - M_s)^2 L^d/(2\sigma_L^2)} + e^{-(M + M_s)^2 L^d/(2\sigma_L^2)} \right)$$

96

FSS of 1st order transition

From the two-Gaussian distribution of the magnetization one can then derive (K. Binder):

$$\langle M \rangle_L(H) = \chi_L^D H + M_s \tanh\!\left(\beta H M_s L^d\right)$$

$$\chi_L(H) = \frac{\partial \langle M \rangle_L}{\partial H} = \chi_L^D + \frac{\beta M_s^2 L^d}{\cosh^2\!\left(\beta H M_s L^d\right)}$$

⇒ maximum of the susceptibility:

$$\chi_L(H = 0) \propto L^d$$

and width of the peak ∝ $L^{-d}$.

97

FSS of 1st order transition

Ising model on square lattice at kT/J = 2.1

98

The Potts model

Potts (1952): each site is in one of q states, σ_i = 1,…,q, with Hamiltonian

$$E = \mathcal{H} = -J \sum_{\langle i,j \rangle} \delta_{\sigma_i \sigma_j} - H \sum_i \delta_{\sigma_i, 1}$$

q = 2 corresponds to the Ising model. For q → 1 one obtains bond percolation due to the theorem of Kasteleyn and Fortuin (1969).

99

The Potts model

Applications in surface science, biology, sociology, material science, QCD, etc.

Material science examples:
(a) columnar grain growth during zone annealing
(b) coarsening of a eutectic microstructure
(c) grain growth in a polycrystal
(d) porosity in a sintered ceramic

100

Potts model

The Potts model has a first order transition in T in 2d for q > 4, and in d > 2 for q > 2. (Figure: specific heat for q = 10 in 2d.)

101

Percolation

Bond percolation on the square lattice: p is the probability to occupy a bond. Sites connected by occupied bonds belong to the same cluster.

103

Kasteleyn and Fortuin

Consider the Potts model on an arbitrary graph with bonds ν and energy

$$E = J \sum_{\nu} \varepsilon_{\nu}, \qquad \varepsilon_{\nu} = \begin{cases} 0 & \text{if the endpoints of } \nu \text{ are in the same state} \\ 1 & \text{if the endpoints are in different states} \end{cases}$$

Define on a bond $\nu_0$ the operators of contraction C and deletion D.

104

Kasteleyn and Fortuin

The partition function can be transformed:

$$Z = \sum_X e^{-\beta E(X)} = \sum_X \prod_{\nu} e^{-\beta J \varepsilon_{\nu}}$$

Consider bond $\nu_1$ with endpoints i and j, and split the sum according to whether $\sigma_i = \sigma_j$:

$$Z = \sum_{X:\,\sigma_i = \sigma_j} \prod_{\nu \ne \nu_1} e^{-\beta J \varepsilon_{\nu}} \;+\; e^{-\beta J} \sum_{X:\,\sigma_i \ne \sigma_j} \prod_{\nu \ne \nu_1} e^{-\beta J \varepsilon_{\nu}}$$

$$= Z_C + e^{-\beta J}\left(Z_D - Z_C\right) = p\,Z_C + (1 - p)\,Z_D, \qquad p \equiv 1 - e^{-\beta J}$$

where $Z_C$ and $Z_D$ are the partition functions of the graphs contracted and deleted at $\nu_1$.

105

Kasteleyn and Fortuin

For bond $\nu_1$ we found:

$$Z = p\,Z_C^{\nu_1} + (1 - p)\,Z_D^{\nu_1}$$

Now do the same with bond $\nu_2$:

$$Z = p^2 Z_{CC} + p(1 - p)\,Z_{CD} + (1 - p)p\,Z_{DC} + (1 - p)^2 Z_{DD}$$

Now do it for all edges. The graph is then reduced to a set of separated points corresponding to clusters of contracted (occupied) bonds, each of which can be in q different states:

$$Z = \sum_{\text{bond percolation configurations}} q^{\#\text{ of clusters}}\; p^{c}\,(1 - p)^{d}$$

where c and d are the numbers of contracted and deleted bonds, and $p \equiv 1 - e^{-\beta J}$.

106

The Potts model

Potts (1952):

$$E = \mathcal{H} = -J \sum_{\langle i,j \rangle} \delta_{\sigma_i \sigma_j} - H \sum_i \delta_{\sigma_i, 1}, \qquad \sigma_i = 1,\dots,q$$

q = 2 corresponds to the Ising model. For q → 1 one obtains bond percolation due to the theorem of Kasteleyn and Fortuin (1969).

107

Kasteleyn and Fortuin

The partition function of the q-state Potts model is:

$$Z = \sum_{\text{bond percolation configurations}} q^{\#\text{ of clusters}}\; p^{\#\text{ of occupied bonds}}\,(1 - p)^{\#\text{ of empty bonds}}$$

with $p \equiv 1 - e^{-\beta J}$. This is a fundamental relation between magnetic phase transitions and geometry (percolation).

109

Coniglio-Klein Clusters

Consider a unit of all connected sites that are in the same state, and occupy the bonds between them with probability

$$p \equiv 1 - e^{-\beta J}$$

The resulting cluster of bonds is called a Coniglio-Klein cluster.

Antonio Coniglio, Bill Klein

110

Cluster algorithms

Single flip is slow for T < T_c. The probability to flip a group of s aligned sites simultaneously is of the order of

$$e^{-2\beta J \sqrt{s}} \to 0 \qquad (2d),$$

set by the boundary energy of the group, i.e. it is even much smaller.

111

Cluster algorithms

The probability that cluster C is in state $\sigma_0$,

$$p(C, \sigma_0) = \sum_{\text{bond percolation on graph without } C} p^{c}\,(1 - p)^{d}, \qquad p \equiv 1 - e^{-\beta J},$$

is independent of $\sigma_0$. Detailed balance for a change $\sigma_1 \to \sigma_2$ of cluster C,

$$p(C, \sigma_1)\,W\big((C, \sigma_1) \to (C, \sigma_2)\big) = p(C, \sigma_2)\,W\big((C, \sigma_2) \to (C, \sigma_1)\big),$$

is easy to fulfill because $p(C, \sigma_1) = p(C, \sigma_2)$.

112

Cluster algorithms

Glauber:

$$W\big((C, \sigma_1) \to (C, \sigma_2)\big) = \frac{p(C, \sigma_2)}{p(C, \sigma_1) + p(C, \sigma_2)} = \frac{1}{2}$$

i.e. choose the new state with probability ½.

Metropolis:

$$W\big((C, \sigma_1) \to (C, \sigma_2)\big) = \min\left(\frac{p(C, \sigma_2)}{p(C, \sigma_1)}, 1\right) = 1$$

i.e. always choose the new state.

113

Swendsen-Wang

R.H. Swendsen and J.-S. Wang (1987)

• Occupy a bond with probability $p \equiv 1 - e^{-\beta J}$ if the two states are equal, otherwise leave it empty.
• Identify the clusters with the Hoshen-Kopelman algorithm.
• Flip each cluster with probability ½ for Ising, or always choose a new state for q > 2.

114

Swendsen-Wang

Critical slowing down is substantially reduced: z ≈ 0.3 in 2d, z ≈ 0.55 in 3d.

Link to the applet

115

Wolff algorithm

Ulli Wolff (1989)

• Choose a site randomly.
• If a neighboring site is in the same state, add it to the cluster with probability $p \equiv 1 - e^{-\beta J}$.
• Repeat this until every site on the boundary of the cluster has been checked exactly once.
• Choose any new state for the cluster (with probability one).
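The Wolff steps above can be sketched for the ±1 Ising model as follows (a minimal illustration, not the course code; note that with E = −J Σ σ_iσ_j the bond probability becomes p = 1 − e^{−2βJ}, the factor 2 coming from the ±1 spin convention, and `neighbors` is an assumed adjacency list):

```python
import math
import random

def wolff_update(spins, neighbors, beta, J=1.0, rng=random):
    # Grow a cluster from a random seed; flip it with probability one.
    p_add = 1.0 - math.exp(-2.0 * beta * J)
    seed = rng.randrange(len(spins))
    state = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i = stack.pop()
        for j in neighbors[i]:
            # add aligned neighbors with probability p_add
            if j not in cluster and spins[j] == state and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = -state
    return len(cluster)
```

In the limits β → ∞ and β → 0 the cluster is the whole aligned domain or a single site, respectively, which provides two easy checks.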

116

General formalism

D. Kandel, E. Domany and A. Brandt (1989)

Introduce auxiliary variables G such that

$$Z = \sum_{X,G} p(X, G), \qquad \sum_G p(X, G) = p(X) = e^{-\beta E(X)}$$

Detailed balance:

$$p(X, G)\,W\big((X, G) \to (X', G)\big) = p(X', G)\,W\big((X', G) \to (X, G)\big)$$

Glauber:

$$W\big((X, G) \to (X', G)\big) = \frac{p(X', G)}{p(X, G) + p(X', G)}$$

Metropolis:

$$W\big((X, G) \to (X', G)\big) = \min\left(\frac{p(X', G)}{p(X, G)}, 1\right)$$

The algorithm simplifies when:

$$p(X, G) = \Delta(X, G)\,V(G) \qquad \text{with} \quad \Delta(X, G) \in \{0, 1\}$$

117

Improved estimators

From one configuration one can already get an average over many states, because one can flip any subset of clusters. For example, one gets for the magnetization of one cluster C:

$$M(C) = \sum_{i \in C} \sigma_i, \qquad \langle M(C) \rangle = 0$$

For the correlation function:

$$\langle \sigma_i \sigma_j \rangle = \begin{cases} 1 & \text{if } i, j \text{ are in the same cluster} \\ 0 & \text{otherwise} \end{cases}$$

And for the susceptibility:

$$\chi = \beta\left(\langle M^2 \rangle - \langle M \rangle^2\right), \qquad \langle M^2 \rangle = \frac{1}{N^2} \left\langle \sum_{i,j} \sigma_i \sigma_j \right\rangle = \frac{1}{N^2} \left\langle \sum_{\text{clusters } C} M_C^2 \right\rangle$$

118

Vectorization

A vectorized loop works like an assembly line. Ideal case:

DO I = 1, 10000
   A(I) = B(I) * (C(I) + D(I))
ENDDO

= multiple instruction - single data (MISD)

119

Vectorization

Problematic are:
• conditional branchings like if-statements
• indirect addressing
• short loops

Examples of big vector machines: SX-9 from NEC, VP2200 from Fujitsu, S-810 from Hitachi, Y-MP from Cray.

120

Vectorization of MC

• Make the update in the innermost loop, i.e. no loops inside this loop.
• Replace if (P(I) > z) s = -s by s = s * sign(z - P(I)).
• Use a vectorized random number generator.
• If a loop does not vectorize, split it up into several loops.
• Use one-dimensional indexing and helical boundary conditions.

121

Parallelization

There exist many different architectures:

SIMD ↔ MIMD
shared memory ↔ distributed memory
coarse grained ↔ fine grained

122

Parallelization of MC

Simplest parallelization is „farming“ where each processor executes the same program with different data (SIMD). Here each processor must get a different seed for the random number generator.

123

Domain decomposition

124

Domain decomposition

Dynamic load sharing

125

126

Parallelization of MC with domain decomposition

• MC on a regular lattice is well suited for parallelization because it is local.
• Put neighbors in different sublattices.
• Use standard domain decomposition and distribute using (block,block) in MPI.
• Use a logical mask to extract one sublattice.
• Use periodic shift (CSHIFT) to get the neighbors for periodic boundary conditions.

127

Parallelization of MC

Metropolis Monte Carlo for the Ising model on the square lattice using „CMFortran“

128

MPI for MC

MPI = Message Passing Interface. shift automatically does message passing if the value is on a different processor; size automatically refers to the size of the subsystem that is on one processor.

129

MPI for MC

      DO n = 1, iterations
        DO i = 1, size
          DO j = 1, size
            old_spin = spin(i,j)
            new_spin = -old_spin
C -------- Get neighboring spins. shift is a function defined to
C          handle the periodic boundary conditions and the passing
C          of data between processors.
            spin1 = shift(i-1,j)
            spin2 = shift(i+1,j)
            spin3 = shift(i,j-1)
            spin4 = shift(i,j+1)
C -------- Sum neighboring spins to get the energy.
            spin_sum = spin1 + spin2 + spin3 + spin4
            old_energy = old_spin * spin_sum
            new_energy = -old_energy
            energy_diff = new_energy - old_energy
C -------- Metropolis accept/reject step.
            IF ( (energy_diff.LE.0) .OR.
     &           (EXP(-beta*energy_diff).GT.random()) ) THEN
              spin(i,j) = new_spin
            ENDIF
          ENDDO
        ENDDO
      ENDDO

130

Efficiency of Parallelization Bottleneck is communication between processors.

Gene Amdahl

fraction of parallelized time to total time

131

Histogram methods

The aim is to obtain quantities at one temperature from a simulation at another temperature.
Z.W. Salsburg, J.D. Jacobson, W. Fickett and W.W. Wood, J. Chem. Phys. 30, 65 (1959) (also Ferrenberg and Swendsen, 1989)

$$\langle Q \rangle(T) = \frac{1}{Z_T} \sum_E Q(E)\, p_T(E), \qquad Z_T = \sum_E p_T(E)$$

$$p_T(E) = g(E)\, e^{-E/kT}$$

where g(E) is the density of states.

132

Histogram methods

We want to calculate:

$$\langle Q \rangle(T^*) = \frac{1}{Z_{T^*}} \sum_E Q(E)\, p_{T^*}(E)$$

Since

$$p_{T^*}(E) = g(E)\, e^{-E/kT^*} = p_T(E)\, e^{-E/kT^* + E/kT},$$

defining

$$f_{T,T^*}(E) \equiv e^{-E/kT^* + E/kT}$$

we obtain:

$$\langle Q \rangle(T^*) = \frac{\sum_E Q(E)\, p_T(E)\, f_{T,T^*}(E)}{\sum_E p_T(E)\, f_{T,T^*}(E)}$$
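Applied to raw samples (each sample contributing one term), the reweighting reads as below. A minimal sketch with k_B = 1; the max-exponent shift is a standard stabilization trick, not part of the formula itself:

```python
import math

def reweight(energies, q_values, kT_sim, kT_target):
    # <Q>(T*) = sum_i Q_i f(E_i) / sum_i f(E_i),
    # f(E) = exp(-E/kT* + E/kT); shift the exponents for stability.
    dbeta = 1.0 / kT_target - 1.0 / kT_sim
    exps = [-dbeta * e for e in energies]
    shift = max(exps)
    w = [math.exp(x - shift) for x in exps]
    return sum(q * wi for q, wi in zip(q_values, w)) / sum(w)
```

Reweighting to the simulated temperature itself returns the plain sample mean, and reweighting to very low T weights the lowest-energy samples, as expected.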

133

Problem of sampling

$$\langle Q \rangle(T) = \sum_E Q(E)\, p_T(E)$$

The distribution of the energy E around the average $\langle E \rangle_T$ gets sharper with increasing size.

134

Broad histogram method The problem of the method before is that the values of Q(E) were sampled close to the maximum of pT(E) which for large systems is very peaked. If T and T * are not too close the overlap between the distributions is very small so that very few configurations are sampled around the maximum of T *. Consequently one has very bad statistics. Solution → Broad histogram method

(de Oliveira, Penna, HJH, 1996)

135

Broad histogram method

Make a Markov process in energy space. Let N_up be the number of all processes that increase the energy, E → E + ΔE, and N_down the number of processes that decrease the energy, E → E − ΔE. Then the condition equivalent to detailed balance for reaching a homogeneous steady state is:

$$g(E + \Delta E)\, N_{down}(E + \Delta E) = g(E)\, N_{up}(E)$$

136

Broad histogram method

Metropolis: choose a new configuration, for instance by flipping a randomly chosen spin. If E → E − ΔE then accept; if E → E + ΔE then accept with probability:

$$\frac{N_{down}(E + \Delta E)}{N_{up}(E)}$$

Check for each site of a configuration whether a change of state would increase or decrease the energy ⇒ N_up and N_down.

137

Broad histogram method

$$g(E + \Delta E)\, N_{down}(E + \Delta E) = g(E)\, N_{up}(E)$$

Take the logarithm, divide by ΔE and consider small ΔE:

$$\log g(E + \Delta E) - \log g(E) = \log N_{up}(E) - \log N_{down}(E + \Delta E)$$

$$\Rightarrow \frac{\partial \log g(E)}{\partial E} = \frac{1}{\Delta E}\, \log\!\left(\frac{N_{up}(E)}{N_{down}(E + \Delta E)}\right)$$

138

Broad histogram method

Ising model on the square lattice: use

$$\frac{\partial \log g(E)}{\partial E} = \frac{1}{\Delta E}\, \log\!\left(\frac{N_{up}(E)}{N_{down}(E + \Delta E)}\right) \;\Rightarrow\; g(E)$$

139

Broad histogram method

Ising model on square lattice

140

Broad histogram method

Choose a site randomly; change its state if the energy is decreased, and if the energy would be increased, change it with probability N_down(E + ΔE)/N_up(E). At each step accumulate the values of N_up(E), N_down(E) and Q(E). Finally calculate:

$$\langle Q \rangle(T) = \frac{\sum_E Q(E)\, g(E)\, e^{-E/kT}}{\sum_E g(E)\, e^{-E/kT}}$$

141

Broad histogram method Ising model on square lattice

32 × 32 lattice; crosses = BHMC, circles = usual histogram, continuous line = exact

142

Broad histogram method

$$p_{T_c}(E) = g(E)\, e^{-E/kT_c}$$

q = 10 Potts model on a square lattice at $kT_c/J \approx 0.701$. The BHM has been particularly useful for first order transitions.

(F. Wang and Landau, 2001)

143

Histogram methods

Other variants:

• Multiple histogram method
• Multicanonical MC
• Flat histograms
• Umbrella sampling
• ....

144

Flat Histogram

• Start with g(E) = 1 and set f ≡ e.
• Make an MC update with p(E) = 1/g(E).
• If the attempt is successful at E: g(E) = f · g(E).
• Obtain a histogram of the energies H(E).
• If H(E) is flat enough, then f → √f.
• Stop when f ≈ 1 + 10⁻⁸.

„Flatness“ can be measured as the ratio of the minimum to the maximum value; „enough“ could be a multiple of f.

Jian-Sheng Wang (1999)
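A toy implementation of these steps, assuming a periodic 1D Ising chain small enough that the exact degeneracies are known. The sweep length, the 80% flatness criterion and the 10⁻⁶ stopping threshold are illustrative choices, not values from the lecture:

```python
import math, random

def flat_histogram(N=4, sweeps=10000, flatness=0.8, seed=1):
    """Flat-histogram estimate of ln g(E) for a periodic 1D Ising chain of N
    spins with E = -sum_i s_i s_{i+1}; a toy sketch of the algorithm above."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(N)]
    energy = lambda s: -sum(s[i] * s[(i + 1) % N] for i in range(N))
    log_g, log_f = {}, 1.0                 # start with g(E) = 1 and f = e
    E = energy(spins)
    for _ in range(100):                   # refinement rounds (capped for safety)
        if log_f < 1e-6:                   # stop when f ~ 1 + 10^-6
            break
        hist = {}
        for _ in range(sweeps):
            i = rng.randrange(N)
            spins[i] *= -1
            E_new = energy(spins)
            diff = log_g.get(E, 0.0) - log_g.get(E_new, 0.0)
            if diff >= 0 or rng.random() < math.exp(diff):
                E = E_new                  # accepted: this samples p(E) ~ 1/g(E)
            else:
                spins[i] *= -1             # rejected: flip back
            log_g[E] = log_g.get(E, 0.0) + log_f
            hist[E] = hist.get(E, 0) + 1
        if hist and min(hist.values()) > flatness * max(hist.values()):
            log_f /= 2.0                   # f -> sqrt(f)
    return log_g
```

For N = 4 the exact degeneracies are g(−4) = 2, g(0) = 12, g(4) = 2, so the estimated ln g(0) − ln g(−4) should approach ln 6.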

145

Umbrella sampling Torrie and Valleau (1977)

In order to overcome energy barriers, multiply the transition probability by a function w which is large at the barrier, and remove this function again when averaging:

  p_w(C) = w(C) e^{−E(C)/kT} / Σ_{C′} w(C′) e^{−E(C′)/kT}

  ⟨A⟩ = ⟨A/w⟩_w / ⟨1/w⟩_w
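A sketch of this reweighting idea on a 1D double well E(x) = (x² − 1)². The weight function w(x) peaked at the barrier x = 0 is a hypothetical choice for illustration, not one from the lecture:

```python
import math, random

def umbrella_average(beta=4.0, steps=200000, seed=2):
    """Umbrella sampling of A(x) = x^2 in the double well E(x) = (x^2 - 1)^2.
    The chain samples p_w(x) ~ w(x) exp(-beta E(x)), where the (hypothetical)
    weight w(x) = exp(beta exp(-4 x^2)) is large at the barrier x = 0; the
    bias is removed afterwards via <A> = <A/w>_w / <1/w>_w."""
    rng = random.Random(seed)
    E = lambda x: (x * x - 1.0) ** 2
    log_w = lambda x: beta * math.exp(-4.0 * x * x)
    x = 1.0
    num = den = 0.0
    for _ in range(steps):
        xp = x + rng.uniform(-0.4, 0.4)
        # Metropolis acceptance for the biased distribution w(x) exp(-beta E(x))
        log_a = log_w(xp) - log_w(x) - beta * (E(xp) - E(x))
        if log_a >= 0 or rng.random() < math.exp(log_a):
            x = xp
        inv_w = math.exp(-log_w(x))
        num += x * x * inv_w     # accumulate A/w
        den += inv_w             # accumulate 1/w
    return num / den
```

The bias lets the walker cross between the two wells far more often than an unbiased chain at the same temperature would, while the final ratio restores the unbiased average.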


146

Other Ising-like models

• Antiferromagnetic models (staggered field):

  H = J Σ_{⟨i,j⟩:nn} σ_i σ_j + H Σ_i (−1)^i σ_i

• Ising spin glass (random interactions J_ij):

  H = Σ_{⟨i,j⟩:nn} J_ij σ_i σ_j

• ANNNI model (frustration in x-direction, incommensurate phases with „Lifshitz point“):

  H = −J_1 Σ_{⟨i,j⟩:nn} σ_i σ_j + J_2 Σ_{i:nnn in x} σ_i σ_{i+2}

• Metamagnets (tricritical point):

  H = J_1 Σ_{⟨i,j⟩:nn} σ_i σ_j − J_2 Σ_{⟨i,j⟩:nnn} σ_i σ_j − H Σ_i σ_i
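To make the notation concrete, here is a minimal energy routine for one of these models, the ANNNI model on a periodic L × L lattice. The sign conventions follow the Hamiltonian above; the default J₁, J₂ values are arbitrary examples:

```python
def annni_energy(spins, J1=1.0, J2=0.6):
    """E = -J1 * sum over nearest neighbours + J2 * sum over axial (x-direction)
    next-nearest neighbours, on a periodic L x L lattice of +/-1 spins."""
    L = len(spins)
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            # each nn bond counted once: right and down neighbours only
            E -= J1 * s * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
            # competing axial next-nearest-neighbour coupling along x
            E += J2 * s * spins[(i + 2) % L][j]
    return E
```

With J₂ > 0 the second term penalizes uniform order along x, which is the source of the frustration and the modulated phases.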

147

The ANNNI model

148

The O(n) model

O(n) model:

  H_{n-vector} = −J Σ_{⟨i,j⟩:nn} S_i · S_j − H Σ_i S_i^1,  with |S_i| = 1, S_i = (S_i^1, …, S_i^n)

n = 1 is the Ising model, n = 2 is the XY-model:

  H_{XY} = −J Σ_{⟨i,j⟩:nn} S_i · S_j − H Σ_i S_i^x,  with |S_i| = 1, S_i = (S_i^x, S_i^y)

n = 3 is the Heisenberg model:

  H_{Heisenberg} = −J Σ_{⟨i,j⟩:nn} S_i · S_j − H Σ_i S_i^x,  with |S_i| = 1, S_i = (S_i^x, S_i^y, S_i^z)

n = ∞ is the „spherical model“.

149

Phase transitions

Mermin-Wagner theorem (1966): In two dimensions a system with continuous degrees of freedom and short-range interactions has no phase transition which involves long-range order.

Heisenberg model in three dimensions:

150

Continuous degrees of freedom

Phase space is not discrete anymore. Monte Carlo move: Choose for site i a new spin:

  S_i′ = (S_i + ΔS) / |S_i + ΔS|,  with small random ΔS, ΔS ⊥ S_i

Use cluster methods by making a projection onto a plane; flipping then means a reflection with respect to this plane.
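The local update rule above can be sketched as follows for a three-component (Heisenberg) spin; the particular way the small perpendicular ΔS is drawn is an illustrative choice:

```python
import math, random

def heisenberg_update(S, delta=0.2, rng=random):
    """Propose S' = (S + dS)/|S + dS| with a small random dS perpendicular
    to the unit vector S (three-component Heisenberg spin)."""
    # random vector with the component along S projected out -> dS is perpendicular to S
    r = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    dot = sum(ri * si for ri, si in zip(r, S))
    dS = [delta * (ri - dot * si) for ri, si in zip(r, S)]
    Sp = [si + di for si, di in zip(S, dS)]
    norm = math.sqrt(sum(x * x for x in Sp))   # renormalize back onto the unit sphere
    return [x / norm for x in Sp]
```

Because ΔS ⊥ S and |ΔS| is small, the proposal is a small rotation of the spin, so acceptance rates stay high at low temperature.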

151

Continuous degrees of freedom

Broad histogram method for the XY model in 3d.

152

Real space Renormalization

At the critical point we have scale invariance. Renormalize the system by changing the scale by a factor l:

  L̃ = L / l

Niemeijer and van Leeuwen (1976)

Since the system size changes by l, the variables (spins) must be redefined and new effective interactions appear ⇒ new Hamiltonian, but the free energy density stays constant.

153

Real space Renormalization

The free energy is extensive; to keep its density constant under renormalization it scales as:

  F(ε, H) = l^{−d} F(ε̃, H̃),  with ε = (T − T_c)/T_c

homogeneous scaling law close to the critical point:

  F(ε, H) = l^{−d} F(l^{y_T} ε, l^{y_H} H)

  ⇒ ε̃ = l^{y_T} ε,  H̃ = l^{y_H} H

154

Real space Renormalization

correlation length:  ξ = ε^{−ν}

rescaling with l:  ξ̃ = ξ / l

⇒ ε̃^{−ν} = (l^{y_T} ε)^{−ν} = l^{−1} ε^{−ν}  ⇒  ν = 1/y_T

155

Real space Renormalization

Example: majority rule, l = 3

  L = 6 ⇒ L̃ = L / l = 2

  σ̃_i = sign( Σ_{i∈cell} σ_i ) = ±1
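The majority rule is straightforward to implement; a minimal sketch for odd l (so that the cell sum can never vanish):

```python
def majority_rule(spins, l=3):
    """Block-spin renormalization: replace each l x l cell of +/-1 spins by
    the sign of its sum.  For odd l the sum is never zero, so the sign is
    always well defined."""
    L = len(spins)
    assert L % l == 0, "lattice size must be a multiple of the block size"
    Lp = L // l
    out = [[0] * Lp for _ in range(Lp)]
    for I in range(Lp):
        for J in range(Lp):
            s = sum(spins[I * l + a][J * l + b]
                    for a in range(l) for b in range(l))
            out[I][J] = 1 if s > 0 else -1
    return out
```

Applying this repeatedly to equilibrium configurations generates the renormalization flow of the measured couplings.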

156

Real space Renormalization

Example: renormalization by decimation, l = √2

The renormalized Hamiltonian also has next-nearest neighbor interactions.

157

Decimation of 1D Ising Model
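The content of these three slides is lost in this transcript; the standard decimation calculation they presumably carry out is, for reference (with the convention e^{H}, H = K Σ_i σ_i σ_{i+1}): summing over every second spin σ₂ of the chain gives

```latex
\sum_{\sigma_2 = \pm 1} e^{K\sigma_1\sigma_2 + K\sigma_2\sigma_3}
  = 2\cosh\bigl(K(\sigma_1 + \sigma_3)\bigr)
  \equiv A\, e^{K'\sigma_1\sigma_3}
```

Evaluating both sides for σ₁σ₃ = +1 and σ₁σ₃ = −1 gives 2 cosh 2K = A e^{K′} and 2 = A e^{−K′}, hence K′ = ½ ln cosh 2K and A = 2√(cosh 2K). Since K′ < K for every finite K, the coupling flows to the stable fixed point K = 0: no phase transition at T > 0 in one dimension.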


158

Decimation of 1D Ising Model

159

Decimation of 1D Ising Model

160

Proliferation of interactions

In general the renormalized Hamiltonian has longer-range interactions than the original one. A simple example is decimation on the square lattice with l = √2:

(figure: next-nearest neighbor interaction induced by the decimation of a site)

161

Real space Renormalization

general Hamiltonian (K = − J / kT):

  H = Σ_{α=1}^{M} K_α O_α,  with O_α = Σ_i Π_{k∈c_α} σ_{i+k}

example:

  H = K Σ_{⟨i,j⟩:nn} σ_i σ_j

definition of the renormalized Hamiltonian:

  e^{H̃(σ̃)} = Σ_σ P(σ̃, σ) e^{H(σ)},  with Σ_{σ̃} P(σ̃, σ) = 1

in order to fulfill the conservation of the free energy density:

  Z̃ = Σ_{σ̃} e^{H̃(σ̃)} = Σ_σ e^{H(σ)} = Z

162

Real space Renormalization

  H = Σ_{α=1}^{M} K_α O_α,  with O_α = Σ_i Π_{k∈c_α} σ_{i+k}

renormalization:  K̃_α = K̃_α(K_1, …, K_M),  α = 1, …, M

critical point is a fixed point:  K*_α = K̃_α(K*_1, …, K*_M)

linearization of the transformation at K* (Jacobi matrix):

  T_αβ = ∂K̃_α / ∂K_β |_{K*}

  K̃_α − K*_α = Σ_β T_αβ (K_β − K*_β)

163

Real space Renormalization Flow diagram of renormalization

(figure: flow diagram in the (K_1, K_2) plane with fixed point (K_1*, K_2*))

164

Real space Renormalization

eigenvalues λ_1, …, λ_M and eigenvectors φ_1, …, φ_M of T_αβ at K*:

  T φ_α = λ_α φ_α

λ_α > 1: relevant eigenvalue ⇒ fixed point unstable; φ_α is the corresponding scaling field

calculate critical exponents through:

  ε̃ = λ_T ε = l^{y_T} ε  ⇒  y_T = ln λ_T / ln l  ⇒  ν = 1/y_T = ln l / ln λ_T
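As a numerical check of the last relation (assuming, for instance, the exactly known 2D Ising value y_T = 1, for which any blocking factor l gives λ_T = l and hence ν = 1):

```python
import math

def critical_exponent_nu(lam_T, l):
    """nu = 1 / y_T = ln l / ln lambda_T, from the relevant thermal
    eigenvalue lam_T of the linearized RG transformation with scale factor l."""
    return math.log(l) / math.log(lam_T)
```

For the 2D Ising case, critical_exponent_nu(2.0, 2) and critical_exponent_nu(3.0, 3) both return ν = 1.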

165

MCRG

Monte Carlo Renormalization Group
Ma (1977), Swendsen (1979)

free energy:  F = ln Z

measure:

  ⟨O_α⟩ = Σ_σ O_α e^{Σ_β K_β O_β} / Σ_σ e^{Σ_β K_β O_β} = ∂F/∂K_α

166

MCRG

and measure the response functions:

  χ_αβ ≡ ∂⟨O_α⟩/∂K_β = ⟨O_α O_β⟩ − ⟨O_α⟩⟨O_β⟩

  χ̃_αβ ≡ ∂⟨Õ_α⟩/∂K_β = ⟨Õ_α O_β⟩ − ⟨Õ_α⟩⟨O_β⟩

167

MCRG

  χ_αβ ≡ ∂⟨O_α⟩/∂K_β,  χ̃_αβ ≡ ∂⟨Õ_α⟩/∂K_β

using the chain rule we obtain for the n-th iteration:

  χ̃_αβ^{(n)} = ∂⟨O_α^{(n)}⟩/∂K_β^{(n−1)} = Σ_γ ( ∂⟨O_α^{(n)}⟩/∂K_γ^{(n)} ) ( ∂K_γ^{(n)}/∂K_β^{(n−1)} ) = Σ_γ χ_αγ^{(n)} T_γβ

⇒ we obtain T_γβ from the correlation functions by solving a set of M coupled linear equations.
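The linear solve can be sketched with NumPy; the correlation matrices below are synthetic placeholders standing in for measured ⟨O_α O_β⟩ data:

```python
import numpy as np

def rg_matrix(chi_same, chi_cross):
    """Solve chi_cross[a, b] = sum_g chi_same[a, g] * T[g, b] for the
    linearized RG matrix T_gb, where
      chi_same[a, g]  = <O_a^(n) O_g^(n)>   - <O_a^(n)><O_g^(n)>
      chi_cross[a, b] = <O_a^(n) O_b^(n-1)> - <O_a^(n)><O_b^(n-1)>
    The largest eigenvalue of T is the relevant lambda_T."""
    T = np.linalg.solve(chi_same, chi_cross)
    lam = max(abs(np.linalg.eigvals(T)))
    return T, lam
```

Given λ_T one then obtains ν = ln l / ln λ_T as on the previous slides; in practice the statistical and truncation errors listed below dominate the accuracy of T.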

168

Strategy in MCRG


169

Example for MCRG

ν of 4-state Potts model in two dimensions

170

Errors in MCRG

• Statistical • Truncation of Hamiltonian • Finite number of iterations • Finite size • Imprecision in K*