To my Father and Mother,who taught me that knowledge is to be revered.


Digital Online Companion

The Digital Online Companion contains advanced and in-depth topics arranged in parallel chapters to the print edition (ISBN: 978-1-118-11022-5, Wiley, 2015). It is intended to provide closely related but optional material at the upper undergraduate or graduate level.

Throughout, references to the print edition are indicated by the “A:” prefix, including the completely merged index. Other electronic resources, such as programs, data files, a solution manual to select exercises, feedback and contact information, and installation instructions and files, can be found at

http://www.wiley.com/college/wang/

https://github.com/com-py/compy/

http://www.faculty.umassd.edu/j.wang/


Contents

1 Introduction 1

2 Free fall and solutions of ODEs 3

3 Realistic projectile motion with air resistance 5

3.1 Exercises and Projects . . . 5
3.A Approximate formulas for the Lambert W function . . . 7

4 Planetary motion and few-body problems 11

4.1 Exercises and Projects . . . . . . . . . . . . . . . . . . . . . 11

5 Nonlinear dynamics and chaos 19

5.1 The kicked rotor and the stadium billiard . . . 19
5.2 Exercises and Projects . . . 25

5.A Renormalization and self-similarity . . . 28
5.B Fast Fourier transform (FFT) . . . 30

5.C Program listings and descriptions . . . . . . . . . . . . . . . 44

6 Oscillations and waves 47

6.1 The hanging chain and the catenary . . . 47

6.2 Exercises and Projects . . . 53
6.A Gauss elimination and related methods . . . 55

6.B Program listings and descriptions . . . . . . . . . . . . . . . 57


7 Electromagnetic fields 65

7.1 Equilibrium of charges on a sphere . . . 65
7.2 Exercises and Projects . . . 69

8 Time-dependent quantum mechanics 75

8.1 Scattering and split evolution operator . . . 75
8.2 Quantum transitions and coupled channels . . . 86
8.3 Exercises and Projects . . . 100
8.A Theory of Gaussian integration . . . 107
8.B Profiling code execution . . . 109
8.C Coupled channels in real arithmetic . . . 110
8.D Program listings and descriptions . . . 112

9 Time-independent quantum mechanics 115

9.1 Energy level statistics . . . 115
9.2 Quantum chaos . . . 117
9.3 Exercises and Projects . . . 126
9.A Program listings and descriptions . . . 135

10 Simple random problems 137

10.1 Game of life . . . 137
10.2 Traffic flow . . . 139
10.3 Ants raiding patterns . . . 142
10.4 Exercises and Projects . . . 145
10.A Program listings and descriptions . . . 149

11 Thermal systems 153

11.1 Thermal relaxation of a suspended chain . . . 153
11.2 Particle transport . . . 160
11.3 Bose-Einstein condensation . . . 170
11.4 Exercises and Projects . . . 177
11.A Mean field approximation of 2D Ising model . . . 184
11.B Program listings and descriptions . . . 187

12 Classical and quantum scattering 195

12.1 Orbiting . . . 195
12.2 Green’s function method . . . 198
12.3 Scattering at low and high energies . . . 202


12.4 Inelastic scattering and atomic reactions . . . 211
12.5 Classical dynamics of atomic reactions . . . 221
12.6 Exercises and Projects . . . 237
12.A The phase shift integral . . . 251
12.B Direct determination of cross sections . . . 253
12.C WKB scattering wave functions . . . 254
12.D The Born T-Matrix . . . 255
12.E The microcanonical ensemble . . . 260
12.F Time-dependent leapfrog method . . . 261
12.G Program listings and descriptions . . . 262

Bibliography 267

Index 271


Chapter 1

Introduction

No additional material.


Chapter 2

Free fall and solutions of ordinary differential equations

No additional material.


Chapter 3

Realistic projectile motion with air resistance

We give approximate formulas for evaluating the Lambert W function in Section 3.A.

3.1 Exercises and Projects

Exercises

E3.1 Study and implement the two root-finding methods described below and test them on f(x) = 5x − x³.

(a) The secant method is similar to Newton's method but requires no knowledge of derivatives. It works as follows. Starting with two initial guesses near a root, x0 and x1, which do not necessarily have to bracket the root, a straight line is drawn between the two points (x0, f0) and (x1, f1). Let x2 be the intersection of this line with the x axis (extrapolate if necessary). Then x2 satisfies

x2 = x1 − (x1 − x0)/(f1 − f0) × f1.   (3.1)


Using x1 and x2 as the new seeds, we can get x3. We repeat the process as

xn+2 = xn+1 − (xn+1 − xn)/(fn+1 − fn) × fn+1,   n ≥ 0,   (3.2)

until the interval is sufficiently small. We note that the secant method does not guarantee convergence.

(b) The method of false position is a combination of the secant and bisection methods. The initial points x0 and x1 must bracket the root. Instead of the midpoint in bisection, xn+2 is obtained in the same way as in the secant method from Eq. (3.2). Then the same test as in the bisection method (A:3.55) is applied to decide whether xn or xn+1 should be replaced by xn+2 so that the root remains bracketed between the new seeds. The process is repeated until the interval is sufficiently small. Like the bisection method, the method of false position never fails, and it generally converges faster.
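A minimal sketch of part (a), the secant iteration of Eq. (3.2), applied to the suggested test function (all names here are ours, not prescribed by the text):

```python
def secant(f, x0, x1, tol=1.0e-12, nmax=100):
    """Find a root of f by the secant method, Eq. (3.2)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(nmax):
        x2 = x1 - (x1 - x0)/(f1 - f0)*f1    # intersection with the x axis
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1                     # shift the seeds
        x1, f1 = x2, f(x2)
    return x1

f = lambda x: 5*x - x**3                    # roots at 0 and +-sqrt(5)
root = secant(f, 2.0, 2.5)
```

Seeds near 2 converge quickly to √5 ≈ 2.2360680; a production version would also guard against f1 = f0, reflecting the lack of guaranteed convergence noted above.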

E3.2 One-dimensional free fall with quadratic air resistance is a special case that yields analytic solutions. Derive the analytic expressions for y and vy. Assume initial conditions y = 0, vy = 0.

Projects

P3.1 Even though projectile motion with the more realistic quadratic air resistance has no analytic solutions in general (see a special case in Exercise E3.2), a low-angle trajectory (LAT) approximation leads to closed-form solutions similar to those found with linear air resistance (Section A:3.4).

In the LAT approximation, we assume the horizontal velocity is much larger than the vertical one, vx ≫ vy, such that the speed can be written as v = √(vx² + vy²) ≈ vx. This enables us to linearize the equations of motion from Eq. (A:3.34) as

dx/dt = vx,   dy/dt = vy,
dvx/dt = −(b2/m) vx²,   dvy/dt = −g − (b2/m) vx vy.   (3.3)


Unlike Eq. (A:3.34), these equations lack the square root, which makes the problem solvable analytically.

(a) Show that the solutions to Eq. (3.3) are

x = (1/b) ln(1 + αt),
y = (v0y + g/(2α)) ln(1 + αt)/α − g t²/4 − g t/(2α),   (3.4)

where b = b2/m and α = b v0x.
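The solutions (3.4) can be verified by integrating the linearized equations (3.3) directly; a minimal sketch with a standard RK4 step (all function names and parameter values here are ours, chosen only for the test):

```python
import numpy as np

def lat_rhs(u, b2m=0.1, g=9.8):
    """Right-hand side of the linearized equations (3.3); b2m stands for b2/m."""
    x, y, vx, vy = u
    return np.array([vx, vy, -b2m*vx*vx, -g - b2m*vx*vy])

def rk4_step(f, u, h):
    """One fourth-order Runge-Kutta step for an autonomous system."""
    k1 = f(u); k2 = f(u + h*k1/2); k3 = f(u + h*k2/2); k4 = f(u + h*k3)
    return u + h*(k1 + 2*k2 + 2*k3 + k4)/6

g, b2m, v0, th = 9.8, 0.1, 10.0, np.radians(20)
u = np.array([0.0, 0.0, v0*np.cos(th), v0*np.sin(th)])
h, T = 0.001, 0.5
for _ in range(round(T/h)):
    u = rk4_step(lat_rhs, u, h)

# closed-form solution, Eq. (3.4), at t = T
alpha = b2m*v0*np.cos(th)
x_exact = np.log(1 + alpha*T)/b2m
y_exact = ((v0*np.sin(th) + g/(2*alpha))*np.log(1 + alpha*T)/alpha
           - g*T*T/4 - g*T/(2*alpha))
```

The numerical and closed-form positions agree to machine-level accuracy, which is a useful sanity check before attempting part (c).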

(b) Solve Eq. (3.4) to show that the height H and range R are

H = (tan θ/(2b)) [ ((1 + β)/β) ln(1 + β) − 1 ],   β = b v0² sin 2θ/g,   (3.5)

R = −(1/(2b)) [ W(z) + 1/(1 + β) ],   z = −(1/(1 + β)) exp(−1/(1 + β)),   (3.6)

where W is the Lambert W function.

(c) Write a program to numerically solve the full and linearized equations, (A:3.34) and (3.3), respectively. Use parameters v0 = 10 m/s, b = 0.1 m⁻¹, and θ = 20°. Plot and compare the x−t, y−t, and y−x curves. Discuss qualitatively the shapes of the trajectories in comparison to those with linear air resistance (results of Project A:P3.2 if available, or Figure A:3.5).

(d) Compute the height and range from Eqs. (3.5) and (3.6). Compare with the numerical results above, and with the ideal projectile range (A:3.2). Carry out the comparison at somewhat larger angles, e.g., 40° and 60°.

(e) Make a prediction of the range as a function of the initial angle θ between [0°, 90°]. Sketch it out. Then plot the actual R−θ curve from Eq. (3.6), making sure to use enough data points that the curve is smooth. Discuss your prediction and the results. Comment on the symmetry of the curves. Does it surprise you?
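If SciPy is available, Eq. (3.6) can also be evaluated with its built-in Lambert W function rather than the formulas of Section 3.A; a sketch (function name ours; we pick the W−1 branch, since the W0 branch recovers only the trivial root R = 0):

```python
import numpy as np
from scipy.special import lambertw

def lat_range(v0, theta, b, g=9.8):
    """Range from Eq. (3.6); theta in radians, b = b2/m in 1/m."""
    beta = b*v0**2*np.sin(2*theta)/g
    z = -np.exp(-1/(1 + beta))/(1 + beta)
    W = lambertw(z, -1).real            # W_-1 branch gives the nonzero root
    return -(W + 1/(1 + beta))/(2*b)

R = lat_range(10.0, np.radians(20), 0.1)
```

As a check, the result should always fall below the ideal (drag-free) range v0² sin 2θ/g and approach it as b → 0.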

3.A Approximate formulas for the Lambert W function

We present approximate formulas as an alternative means of evaluating the real Lambert W function. The first one, Eq. (3.7), works in the regular region of finite argument, and the second one, Eq. (3.9), in the asymptotic region. Together, they can be used to evaluate W0 and W−1 in all regions. The values should be accurate to 8 significant digits or better.

The regular region

We first give the formulas optimized in the regular region where the argument is finite. They are based on rational approximation by polynomials. The general form of the expressions is

W0,−1(x) = C + r × P(r),   (3.7)

where C is a constant and P(r) is the Padé approximant defined as

P(r) = (a0 + a1 r + a2 r² + a3 r³ + a4 r⁴)/(1 + b1 r + b2 r² + b3 r³ + b4 r⁴).   (3.8)

The expansion variable r is related to the independent variable x. We give in Table 3.1 the constant C, the variable r, and the coefficients ai and bi. To use the table, locate the appropriate function and region, then evaluate Eq. (3.7) with the corresponding C, r, and ai, bi.

Table 3.1: Parameters for the evaluation of the W function with Eqs. (3.7) and (3.8). Spaces are inserted in the coefficients for readability. For optimal precision, all the digits should be used.

Function     W0                  W0                  W0                   W−1
Region       x ∈ [−e⁻¹, −0.16)   x ∈ [−0.16, 0.32)   x ∈ [0.32, 2.2)      x ∈ [−e⁻¹, −0.12)
Const. C     −1                  0                   0.3906 4638          −1
Var. r       √(−2 ln(−ex))       x                   ln(√3 x)             √(−2 ln(−ex))
a0           1                   1                   0.2809 0993          −1
a1           −0.8040 7820        4.674 4173          0.1116 7016          −0.8178 4020
a2           0.2802 9706         6.577 4227          0.0 3529 1013        −0.2889 3422
a3           −0.0 4785 3103      2.730 6731          0.00 5498 1613       −0.0 5003 8980
a4           0.00 3355 7735      0.1057 7423         0.000 4245 7974      −0.00 3566 1458
b1           −0.4707 4486        5.674 4173          0.1389 8485          0.4845 0686
b2           0.0 9560 4321       10.75 1840          0.0 7995 0768        0.0 9965 4140
b3           −0.00 6612 4586     7.637 5538          0.00 4515 2166       0.00 7066 1014
b4           0.7961 1402×10⁻⁵    1.539 0142          0.000 6368 7954      0.5596 5023×10⁻⁵


The asymptotic region

We approximate the Lambert W function by asymptotic expansion in the region x ≫ 1 for W0, and x → 0⁻ for W−1. As it turns out, both W0 and W−1 have the same form in the asymptotic regions,

W0,−1(x) = q + (q + q²) × r × (1 + a1 r + a2 r² + a3 r³ + a4 r⁴),   (3.9)

p = ln(x/ln |x|),   q = ln(x/p),   r = ln(p/q)/(1 + q)²,

a1 = 1/2,   a2 = (1 − 2q)/6,   a3 = (1 − 8q + 6q²)/24,
a4 = (1 − 22q + 58q² − 24q³)/120.

This expression should be used for W0 in the region x ∈ [2.2, ∞), and for W−1 in x ∈ [−0.12, 0). The value of the argument x automatically selects the correct branch.
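A direct transcription of Eq. (3.9) (our function name; validity regions as stated above, and as noted, the sign of x selects the branch):

```python
import math

def W_asymptotic(x):
    """Lambert W via Eq. (3.9): W0 for x >= 2.2, W_-1 for -0.12 <= x < 0."""
    p = math.log(x/math.log(abs(x)))
    q = math.log(x/p)
    r = math.log(p/q)/(1 + q)**2
    a1 = 0.5
    a2 = (1 - 2*q)/6
    a3 = (1 - 8*q + 6*q*q)/24
    a4 = (1 - 22*q + 58*q*q - 24*q**3)/120
    # Horner evaluation of the quartic correction factor
    return q + (q + q*q)*r*(1 + r*(a1 + r*(a2 + r*(a3 + r*a4))))
```

Note that for −0.12 ≤ x < 0 every logarithm above has a positive argument, so the same code serves both branches; the result can again be checked against W e^W = x.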


Chapter 4

Planetary motion and few-body problems

4.1 Exercises and Projects

Exercises

No additional exercises.

Projects

P4.1 The helium atom consists of two electrons moving in the field of the nucleus of charge Z. The nucleus is several thousand times more massive than an electron, so it is a good approximation to regard the nucleus as being fixed in space at the origin. With this approximation and from the Coulomb forces with k = −q1q2/4πε0, ε0 = 8.85 × 10⁻¹² C²/Nm² in Eq. (A:4.1), the equations of motion for the electrons in atomic units (e = m = ℏ = 1, see Table A:8.1) can be written as

d~r1/dt = ~v1,   d~r2/dt = ~v2,
d~v1/dt = −Z~r1/r1³ + ~r12/r12³,   d~v2/dt = −Z~r2/r2³ + ~r21/r21³,

where ~r12 = ~r1 − ~r2 = −~r21.

(a) Write a program to simulate the motion of the electrons in helium (Z = 2) using the leapfrog method. The derivative function should be similar to threebody() in Program A:4.5, except only the two active electrons need to be followed. Modify the animation code accordingly. Run the program with the following initial condition for the so-called Wannier orbit,

~r1 = [−1.205071, 0], ~r2 = [1.205071, 0],

~v1 = [0, 0.84355], ~v2 = [0,−0.84355].

For the Langmuir orbit, the initial condition is

~r1 = [−1/3, 1.564893], ~r2 = [1/3, 1.564893],

and all velocities are zero. Describe the observed motion.
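For part (a), a graphics-free sketch of the integration is shown below, using the standard kick-drift-kick (velocity Verlet) form of leapfrog with the Wannier initial condition; the names are ours, not those of Program A:4.5:

```python
import numpy as np

Z = 2.0

def accel(r1, r2):
    """Accelerations from the helium equations of motion (atomic units)."""
    d1, d2 = np.linalg.norm(r1), np.linalg.norm(r2)
    r12 = r1 - r2
    d12 = np.linalg.norm(r12)
    return -Z*r1/d1**3 + r12/d12**3, -Z*r2/d2**3 - r12/d12**3

def energy(r1, v1, r2, v2):
    """Total energy: kinetic + nuclear attraction + e-e repulsion."""
    return (0.5*(v1 @ v1 + v2 @ v2) - Z/np.linalg.norm(r1)
            - Z/np.linalg.norm(r2) + 1/np.linalg.norm(r1 - r2))

# Wannier orbit initial condition
r1 = np.array([-1.205071, 0.0]); r2 = -r1.copy()
v1 = np.array([0.0, 0.84355]);   v2 = -v1.copy()

h = 0.001
a1, a2 = accel(r1, r2)
E0 = energy(r1, v1, r2, v2)
for _ in range(2000):                           # leapfrog steps to t = 2
    v1 = v1 + 0.5*h*a1; v2 = v2 + 0.5*h*a2      # half kick
    r1 = r1 + h*v1;     r2 = r2 + h*v2          # drift
    a1, a2 = accel(r1, r2)
    v1 = v1 + 0.5*h*a1; v2 = v2 + 0.5*h*a2      # half kick
```

Two useful checks: the motion stays exactly antisymmetric (~r2 = −~r1), and the total energy is conserved to leapfrog accuracy.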

(b) Study the energy dependence of the Wannier orbits. Multiply the initial velocities by a scale factor s but keep the initial positions unchanged. This is equivalent to changing the energy of the system. Try a few values such as s = 0.5, 1.0, 1.1. Observe how the orbital shape and period change with s. If you increase s toward 1.414, what do you observe? Explain your observations.

(c) Examine the stability of the so-called Langmuir orbit. Change the initial positions slightly but symmetrically, for example, x1 = −x2 = −0.3, −0.5, etc. Observe that the orbits become deformed, but the system does not break up. Now change the y values but keep x the same (±1/3). Are there any differences? Finally, give each electron a slight vertical velocity, e.g., v1y = −v2y = 0.02. How does the system behave? Why is it that one electron gets ejected like a slingshot from the nucleus?

P4.2 (a) Unlike the Kepler orbit, motion in the restricted three-body system is directional. For each orbit shown in Figure A:4.19, plot the orbit when the initial velocity is reversed. Describe the new orbits. Are they still clockwise? Is the motion periodic? Is it stable? Briefly explain.

(b) The periods of the orbits in Figure A:4.19 are between 13 and 30 RTB time units, long compared to the orbital period of Jupiter. Orbits of much shorter periods, comparable to Jupiter's period, also exist. The following initial conditions [14] lead to the orbits shown in Figure 4.1.

~r, ~v = [0.7390, 1.2817], [4.3489, −4.2138];
~r, ~v = [0.6790, 1.1778], [3.7046, −6.4865].   (4.1)

Figure 4.1: Orbits from the initial conditions (4.1).

Reproduce the orbits shown in Figure 4.1. Find the period of each orbit. Convert it to years and compare with Jupiter's period. Estimate the average speed in each figure, and discuss the balance between the Coriolis effect and the centrifugal force. Are the orbits stable?

P4.3 In the discussion of the restricted three-body problem of Pluto in the Sun-Neptune system, we showed that orbital resonance between Pluto and Neptune played a key role in maintaining the stability of Pluto. Let us work out the technical details that helped us reach that conclusion.

(a) First, we need to build a complete program by combining parts of Programs A:4.6 and A:4.7. I recommend that you begin with Program A:4.6 and add r3body() from Program A:4.7. If you want to have animation effects, copy over set_scene() as well. In the function makeplot(), delete the lines for drawing the surface plot and arrows, but keep the potential contour plot. Copy the “while” loop from Program A:4.7, making changes according to your decision on animation. Create several arrays before entering the loop for recording the positions for plotting later. Inside the loop, append the (x, y) position to the arrays after each step.

The loop should be terminated after 60 time units. After the loop, plot the position arrays in the same figure as the contour plot, which was started with the figure() command. This is Pluto's orbit. Run the program with the correct parameters and initial conditions. If all goes well, most likely after a few rounds of “bug squashing”, you should see a figure with Pluto's orbit superimposed over the potential contours, similar to Figure A:4.20 (bottom).

With the figure displayed, use the built-in zoom feature to enlarge the crossover loop on either side of the libration, and read off the (x, y) coordinates at the tip of the crossover loop, i.e., the point closest to the origin (Sun). Calculate the radius at that point, and the angle it makes relative to the vertical. The difference between 1 and the radius tells us how much Pluto moves inside Neptune's orbit, and the angle is equal to one half of the librational angle. Express the difference in AU, and the angle in degrees. Verify that they are close to the numbers quoted in our discussion (Section A:4.6.4).

(b) The key to understanding the librational motion of Pluto is the energy change caused by the pull and tug on Pluto by Neptune. The energy can be tracked as follows. Create two arrays before the loop, one for recording time and another for energy. Calculate the energy of Pluto in the nonrotating (space) frame at each step and append the time and energy to the arrays. The energy consists of two parts, the potential and kinetic energies. The potential energy is the actual gravitational potential energy due to the two primaries, excluding the centrifugal potential, which does not exist in the nonrotating frame. To obtain the kinetic energy, we need to transform the velocity from the rotating frame, adding to it the rotational velocity via Eq. (A:4.68). Transform both the velocity and the position vectors using the rotation matrix (A:4.73), with the angle of rotation θ = ωt.


Now run the program for 120 time units, which is a full librational period. If you use a small time step, there will be a lot of data points, which is not necessary. You may choose to record one data point every n steps, say n = 10 if h = 0.001. Remember, the number of data points in the time and energy arrays must be the same. After the loop, start a new figure and plot the energy as a function of time. You should see a sinusoidal oscillation. Determine the amplitude from the figure. It should be around 0.4%.
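A sketch of the energy bookkeeping (our helper; it assumes the standard RTB convention with primaries at (−μ, 0) and (1 − μ, 0), ω = 1 and G(M1 + M2) = 1 — check against the conventions of Program A:4.7; the μ value below is only an approximate Neptune mass fraction). Since the rotation (A:4.73) preserves the length of a vector, only the addition of the rotational velocity matters for the energy:

```python
import numpy as np

omega = 1.0        # rotation rate in RTB units (assumed convention)
mu = 5.15e-5       # approximate Neptune/(Sun + Neptune) mass fraction

def space_energy(r, v):
    """Energy of the test body in the nonrotating frame.
    r, v: position and velocity in the rotating frame."""
    x, y = r
    vx = v[0] - omega*y              # add omega x r, Eq. (A:4.68)
    vy = v[1] + omega*x
    ke = 0.5*(vx*vx + vy*vy)         # rotating a vector preserves its length
    d1 = np.hypot(x + mu, y)         # distance to the Sun
    d2 = np.hypot(x - 1 + mu, y)     # distance to Neptune
    return ke - (1 - mu)/d1 - mu/d2
```

Inside the main loop, record t and space_energy(r, v) every n steps, then plot energy versus time as described above.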

(c)∗ To relate energy to the orbital period, let us assume a perfect Kepler orbit for Pluto. We can associate the instantaneous energy with the semimajor axis a from Eq. (A:4.12). In turn, we can calculate the period T according to Kepler's third law, T² ∝ a³. Plot the variation of the period as a function of time. Determine the amplitude of oscillation. How much does the period change per orbital period of Pluto? How much does that imply in terms of the distance of separation (or approach) between Pluto and Neptune? Discuss the meaning of the results regarding the librational motion.

P4.4∗ One of the most spectacular sights in the sky is the appearance of comets such as Halley's comet, or the recent comet ISON, which disintegrated near perihelion in November 2013. A comet consists of a core (nucleus), a dusty layer (coma), and sometimes a tail. The coma and the tail are formed when the comet is near the Sun. The tail has an interesting shape and the property that it always points away from the Sun (see Figure 4.2). It is caused by radiation pressure and the solar wind from the Sun.

We can simulate the motion of a comet and its tail with the elements covered in this chapter. There are two steps. First, we calculate the motion of the comet like that of any planet. Second, dust particles are released from the comet at regular intervals, say every n steps. The dust particles will form the edges of the tail. The force on these particles due to the solar wind may be modeled as a repulsive force

~Fwind = w GMm ~r′/r′³.

This is just like normal gravity except for the parameter w, a positive number indicating that the force is repulsive. For our purpose, w = 10 works well.


Figure 4.2: The shape of a comet tail at different times.

The structure of the program should be similar to Program A:4.1 or A:4.3. One strategy is as follows.

a. Write a derivative function comet(). It should be just like earth() in Program A:4.1 but with −GM replaced by wGM in the acceleration. The same function will be used to integrate the motion of the comet and of the dust particles in the tail by changing w. Therefore, the variable w should be global, just like GM.

b. Set up the display scene, which should contain the comet, its trail, and the trails of the dust particles. Each trail is just a points object; see set_scene() in Program A:4.3.

c. Before entering the main loop, create two ndarrays for the position and velocity of the comet, and initialize them to

~r = [0, 0.98], ~v = [−6.4,−6.0].

This initial condition corresponds to a highly elliptic orbit with eccentricity e = 0.95 and semimajor axis a = 10. Because the tail has two sides (Figure 4.2), dust particles will be released in pairs, one on each side. Let N be the number of such pairs (say 20). Create two 2N × 2 ndarrays for the positions (py) and velocities (pv) of the dust particles as

py = np.zeros((2*N, 2))
pv = np.zeros((2*N, 2))

d. Initialize and iterate the main loop. Inside the loop,

i. Set w = −1, and integrate the motion of the comet with the leapfrog method.

ii. Release a new pair of dust particles every n steps, one to each side, and set their initial conditions (see below). If the number of pairs exceeds N, the earliest pair's positions and velocities are reset to the new initial conditions.

iii. Set w = +10. For each particle that has been released, integrate its motion with the leapfrog method.

iv. Update the trail positions of the comet and the dust particles. For the dust particles, keep the positions of the last N pairs only. If tail is the trail object for the tail, the code snippet would be

for i in range(2*N):
    tail.append(pos=py[i], retain=2*N)

The only issue remaining is how to set the initial conditions of the dust particles at the time of release. The initial positions are just the position of the comet at that moment. For the velocities, we assume the dust particles are released with the same speed relative to the comet, but at different angles. One side of the tail is composed of particles released in the radial direction of the comet, and the other side in the opposite direction, i.e., radially inward. A relative speed of vr = 1.2 is recommended. The actual velocity is the vector sum of the comet's velocity and the relative velocity. In practice, it is easiest to transform the relative velocity via the rotation matrix (A:4.73), with the x′ axis in the radial direction. For example, for the particle in the radial direction, v′x = vr, v′y = 0, and θ is the angle of the radial vector of the comet. For the particle in the opposite direction, v′x = −vr, v′y = 0.
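The release velocities described above can be computed compactly; a sketch (function name ours), equivalent to rotating (±vr, 0) by the angle θ of the comet's radial vector:

```python
import numpy as np

def release_pair(r_c, v_c, vr=1.2):
    """Velocities of a dust pair released radially outward and inward
    relative to the comet at position r_c with velocity v_c."""
    theta = np.arctan2(r_c[1], r_c[0])            # angle of the radial vector
    rhat = np.array([np.cos(theta), np.sin(theta)])
    return v_c + vr*rhat, v_c - vr*rhat           # outward, inward
```

Both particles start at the comet's current position; only their velocities differ.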


Once the program is running, experiment with different relative speeds vr, force strengths w, release intervals n, etc., to reveal the different morphologies. Make your own unique tail. For instance, find the relevant data for comet ISON (eccentricity, perihelion, etc.) and simulate its motion.


Chapter 5

Nonlinear dynamics and chaos

We discuss two additional chaotic systems: the kicked rotor and the stadium billiard. Furthermore, we discuss renormalization theory via an example in Section 5.A and the FFT method in Section 5.B.

5.1 The kicked rotor and the stadium billiard

Other than the logistic map, the physical models discussed so far in Chapter A:5 are continuous. Below we describe two Hamiltonian systems that are analogs of discrete maps. We discuss the main features and leave the details of the calculations to Projects P5.3 and P5.4.

5.1.1 The kicked rotor

The kicked rotor is a simple Hamiltonian system often studied in nonlinear dynamics. It is an analog of the driven pendulum, but the external perturbation consists of a series of “kicks”, or impulses. The equation of motion is given by

d²θ/dt² + κ sin θ ∑_{n=−∞}^{∞} δ(t − nτ) = 0,   (5.1)


where κ is the strength of a kick (perturbation). The kicks are represented by the Dirac δ function δ(t − nτ), graphically illustrated in Figure 5.1.

Figure 5.1: The Dirac δ kicks.

The Dirac δ function δ(x) is a function that is zero everywhere except at x = 0, where it spikes to infinity, i.e., δ(x) = 0 for x ≠ 0, and δ(0) = ∞. It is an idealization of an impulse that is very narrow but very high, such that the area under the function is one. It is a convenient way to represent functions that are sharply localized on the characteristic scale of the problem.

Therefore, the sum in Eq. (5.1) represents sudden kicks at regular intervals τ. As usual, we convert Eq. (5.1) to first-order equations of motion as

dθ/dt = ω,   dω/dt = −κ sin θ ∑_{n=−∞}^{∞} δ(t − nτ).   (5.2)

Compared to the driven pendulum (A:5.31), the biggest difference is that the driving force is discontinuous. We cannot use an ODE solver to integrate Eq. (5.2). But the solution can be obtained in a much easier way, for in between kicks the perturbation is zero and ω is constant. We only have to connect the values of the variables before and after a kick. Let θn and ωn be the values right after t = nτ. Then we have

θn+1 = θn + ωnτ, (5.3a)

ωn+1 = ωn − κ sin θn+1. (kicked rotor) (5.3b)

Equation (5.3a) expresses the fact that θ changes at the constant rate ω = ωn between the nth and the (n+1)th kicks. Equation (5.3b) tells us that ω jumps by −κ sin θn+1 right after the (n+1)th kick, when θ = θn+1. The δ functions have dropped out because their area is one after integration.

Equations (5.3a)–(5.3b) are now discrete. They are known as the standard map. Their iterations can be carried out much like the logistic map, only simpler, it seems. However, the dynamics is anything but simple.
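The map takes only a few lines to iterate; a sketch that also remaps each (θ, ω) pair into [−π, π] for plotting, as done in Figure 5.2 (names ours):

```python
import numpy as np

def standard_map(theta, omega, kappa, nkicks, tau=1.0):
    """Iterate the standard map, Eqs. (5.3a)-(5.3b); returns an
    (nkicks, 2) array of (theta, omega) remapped into [-pi, pi]."""
    traj = np.empty((nkicks, 2))
    for i in range(nkicks):
        theta = theta + omega*tau               # free rotation between kicks
        omega = omega - kappa*np.sin(theta)     # impulse at the kick, Eq. (5.3b)
        traj[i, 0] = (theta + np.pi) % (2*np.pi) - np.pi
        traj[i, 1] = (omega + np.pi) % (2*np.pi) - np.pi
    return traj

traj = standard_map(0.0, 1.0, kappa=0.2, nkicks=1000)
```

Plotting traj for a set of initial ω values spaced over [−π, π] reproduces a phase-space portrait like Figure 5.2.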


Figure 5.2: The phase space trajectories of the kicked rotor for κ = 0.2 (top) and 1 (bottom).

Figure 5.2 shows the phase space trajectories of the kicked rotor at two values of κ, the control parameter. The initial values are θ = 0 and ω evenly spaced in [−π, π]. The graph is equivalent to Poincaré maps.

As in the driven pendulum, the values of θ and ω are remapped into the range [−π, π]. For the smaller κ, the trajectories are regular lines. We recognize the quasi-elliptic trajectories near the center as deformed paths of the simple harmonic oscillator (Figure A:2.5).

At the higher values of κ (Figure 5.2, bottom), regular lines still exist,


but there are bands of dots. The latter are due to chaotic trajectories. Surrounded by the chaotic bands are islands of regular motion, known as islands of stability. In the language describing the Poincaré surface of section of the driven pendulum, we say some stable tori have been destroyed. For even larger κ, the islands of stability shrink further, and eventually the whole phase space is swamped by a sea of chaos. See Project P5.3 and Ref. [24] for further exploration of the kicked rotor.

5.1.2 The stadium billiard

Like the kicked rotor, the stadium billiard is a seemingly simple ballistic model, but it has interesting nonlinear dynamics and is an often-studied chaotic system. It consists of a rectangular area capped by a semicircle at each end, as shown in Figure 5.3. A particle – the billiard – moves freely inside the stadium in a straight line. When it collides with the wall, it is reflected back elastically.

Figure 5.3: The stadium billiard (left) and the particle reflections (right), with incident velocity ~vi, final velocity ~vf, and surface normal n.

The motion of the particle between collisions can be described by

d~r/dt = ~v,    d~v/dt = 0.    (5.4)

We can follow the motion of the particle analytically as a series of bounces (a map), but it is rather tedious to do so. It is much easier to integrate the motion numerically. Euler's method is perfectly fine since the solutions are linear between bounces.

Eventually, the particle will jump out of bounds unless we build in a collision detection scheme. We have to check whether the particle is outside the stadium after each step. That part is relatively easy. The part requiring some care is where the particle hits the wall. We need that point to continue the


integration after reflection. Though the collision point can be found analytically, in the spirit of numerical simulation, let us do it the numerical way.

Specifically, suppose the particle is inside at step k but is found to be outside at the next step. We could choose a crude way or an elaborate way to determine the collision point. The crude way is to take the average position of the last two steps, denoted by (x, y). If the point (x, y) is inside, we take it to be the collision point. If not, we discard the last step, back up to step k, and move on, i.e., take the last point before stepping out of bounds as the collision point. A more challenging and satisfying way is as follows. We check the distance, ∆, between the two points. If ∆ is bigger than our tolerance (say 10−6), we discard the last step and resume integration from step k using a reduced step size, say by a factor of 10. We repeat the process when the inevitable collision occurs again. The distance ∆ between the steps just before and after the collision will be successively reduced, until ∆ is smaller than our tolerance. We have then found the collision point (x, y).
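The step-size reduction scheme above can be sketched as follows (a toy version with our own `inside` test for the stadium; a full program would also record the bounce and then restore the original step size):

```python
import math

def inside(x, y, d=0.5, r=1.0):
    """True if (x, y) is in the stadium: a rectangle of length d and
    height 2r, capped by semicircles of radius r centered at (+-d/2, 0)."""
    if abs(x) <= d/2:
        return abs(y) <= r
    return (abs(x) - d/2)**2 + y**2 <= r*r

def find_collision(x, y, vx, vy, h=0.01, tol=1e-6, shrink=10.0, d=0.5):
    """March with Euler steps; when a trial step exits the stadium,
    discard it and shrink the step size, until the gap between the
    points straddling the wall is below tol."""
    while h*math.hypot(vx, vy) > tol:
        xn, yn = x + vx*h, y + vy*h      # trial Euler step
        if inside(xn, yn, d):
            x, y = xn, yn                # accept: still inside
        else:
            h /= shrink                  # reject: refine the step size
    return x, y                          # collision point (just inside)
```

For a particle launched from the center of the circular stadium (d = 0), the refinement converges onto the wall at unit radius.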

We then revert the step size back to the original value, and find the velocity by reflection off the wall so we can continue our (rather, the particle's) journey. Reflection off the rectangular parts of the wall is easy: we just reverse the vertical velocity and keep the same horizontal velocity, vfx = vix, vfy = −viy, or equivalently,

~vf = ~vi − 2(~vi · j)j. (reflection off rectangular wall) (5.5)

Reflection off the semicircular walls can be found the same way, except the reflection is about the surface normal n. The final velocity will have the same tangential component but the opposite normal component, which means subtracting twice the normal component, similar to (5.5),

~vf = ~vi − 2(~vi · n)n. (reflection off semicircular wall) (5.6)

Here n is the surface normal (Figure 5.3) given by

n = ~n/|~n|,    ~n = (xc − x)i + (yc − y)j,    (5.7)

where (xc, yc) is the center of the semicircle, and (x, y) the collision point found above.

We now have the necessary machinery to simulate the billiard model. But where is the nonlinearity? It is a subtle question, because it depends


Figure 5.4: The trajectories (top) and the Poincaré maps (bottom) of the stadium billiard. The parameter d is 0 for the perfectly circular stadium (left) and 0.5 for the stretched stadium (right). The Poincaré maps plot vx against y (both in arbitrary units). The particle starts at the marked point in the direction of the arrow and ends at the other marker.

on the stretch length d. Figure 5.4 shows the results for two different d-values. For the perfectly circular stadium (d = 0), the trajectory is regular and orderly. When the trajectory crosses itself, it does so at a fixed angle. There is a circular zone at the center that the particle cannot penetrate, a consequence of reflecting off a perfect circle. The size of this forbidden zone depends on the initial condition.

When the stadium is stretched (d = 0.5), the trajectory no longer looks regular. The crossings do not happen at a fixed angle, and there is no longer a forbidden zone. We expect the motion to be chaotic.

Confirmation of regular and chaotic motion is furnished by the Poincaré map shown below each stadium in Figure 5.4. The results are obtained


by plotting vx as a function of y when the trajectory crosses the midfield (x = 0). In the case of regular motion (d = 0), the velocity vx is confined to two branches (effectively one branch because of symmetry), whereas for chaotic motion (d = 0.5), no stable branches exist. We see only a chaotic sea.

Evidently, nonlinear dynamics enters the stadium billiard problem via the parameter d, which we regard as the control parameter. A host of interesting features can be investigated (Project P5.4).

5.2 Exercises and Projects

Exercises

E5.1 Show that the standard map (kicked rotor, Eqs. (5.3a)–(5.3b)) is area-preserving in phase space. Follow the approach in Chapter A:2, Section A:2.A.
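A quick numerical sanity check of this property (not the requested proof — just a finite-difference estimate of the Jacobian determinant, which should equal 1 for an area-preserving map; the remapping to [−π, π] is omitted here so the map stays differentiable):

```python
import math

def smap(theta, omega, kappa=1.0, tau=1.0):
    """Standard map, Eqs. (5.3a)-(5.3b), without remapping to [-pi, pi]."""
    theta1 = theta + omega*tau
    return theta1, omega - kappa*math.sin(theta1)

def jacobian_det(theta, omega, kappa=1.0, h=1e-6):
    """Finite-difference estimate of det(d(theta',omega')/d(theta,omega))."""
    t0, w0 = smap(theta, omega, kappa)
    tp, wp = smap(theta + h, omega, kappa)   # perturb theta
    tq, wq = smap(theta, omega + h, kappa)   # perturb omega
    return ((tp - t0)*(wq - w0) - (tq - t0)*(wp - w0))/h**2
```

The determinant comes out as 1 (to finite-difference accuracy) at any phase space point and any κ, as the exercise asks you to show analytically.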

Projects

P5.1 Let us work through several details of the renormalization example and calculate the rn values for the onset of the period-2^n cycles.

(a) Find the fixed points of the period-4 cycle, f^(2)(y∗) = y∗, from Eq. (5.10), and compare with the results in Eq. (5.11).

(b) Verify the Taylor series and the constant B in Eqs. (5.12a)–(5.12b).

(c) Let sn be the control parameter for the onset of the period-2^n cycle. Analogous to how Eq. (5.16) was obtained, derive the following recursion from Eq. (5.14a),

sn = (√(6 + 4sn−1) − 2)/4.    (5.8)

(d) Compute sn, and hence rn, from Eq. (5.8), say for n = 3, ..., N. Together with the known values of r1 and r2 obtained previously, calculate the first N − 1 values of Feigenbaum's δ number from Eq. (A:5.18). What is the smallest N such that the δ number has converged to within 1%? Discuss your results.


(e) Obtain algebraically the limit r∞ and compare with the exact value 0.8924...
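For parts (c)–(e), the recursion is easy to iterate numerically; a sketch, assuming s1 = 0 at the onset of period 2 and rn = 3/4 + sn as in Eq. (5.17):

```python
import math

def s_values(nmax):
    """Iterate the recursion Eq. (5.8), starting from s_1 = 0
    (the onset of period 2 at r_1 = 3/4)."""
    s = [0.0]
    for _ in range(nmax - 1):
        s.append((math.sqrt(6 + 4*s[-1]) - 2)/4)
    return s

r = [0.75 + sn for sn in s_values(40)]    # r_n = 3/4 + s_n
```

The sequence converges quickly to the approximate limit r∞ = 0.890388 quoted in the text.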

P5.2 We can numerically explore self-similarity and renormalization techniques as follows.

(a) Generate a simplified bifurcation diagram like Figure A:5.8 by modifying Program A:5.2. Determine graphically, by direct readout from the figure using on-screen Matplotlib tools such as zoom, the parameter values r∗n and the displacements dn for n = 1, ..., 4 to at least three significant digits.

(b) Using the r∗n values thus obtained, numerically compute the corresponding displacements, denoted by d′n, from Eq. (A:5.21). Compare d′n and dn, and discuss your results. Calculate Feigenbaum's α numbers.

(c)∗ Vertical self-similarity can be studied with the following renormalization technique, described in a readable paper [7]. The idea is that, starting from x = 1/2, the sequence of points of the period-2^(n+1) cycle – after being compressed horizontally, and inverted and scaled by α vertically – should look the same as the previous 2^n-cycle when superimposed on the same graph, as in Figure 5.5.

Figure 5.5: The time series xn of the 2^n-cycle and the 2^(n+1)-cycle. The latter is compressed horizontally, and inverted and scaled by α vertically.

The specific steps are:


• Starting from x = 1/2, iterate the period-2^n cycle, f^(2^n)(1/2) at r∗n, for N steps. Record every point and plot them.

• Repeat the calculation for the period-2^(n+1) cycle at r∗n+1 for 2N steps, but only keep every other point, starting with the first. Let us call this data set X. This amounts to compressing the time scale by a factor of 2.

• Invert and scale X about the horizontal line x = 1/2 by the factor α calculated above. This can be done by X′i = α(Xi − 1/2) + 1/2 for each element Xi. Plot X′ on the same graph. You should produce a graph like Figure 5.5. Do the two sequences match up? If not, adjust α until they do. What is the α value?

(d) The constant B in Eq. (5.12b) plays the role of the α-number, because both represent rescaling in the vertical direction. Using the sn obtained from Project P5.1 for larger n, including s∞, estimate the value of α discussed in the text. Comment on the results.

P5.3 The evolution of the phase space diagram for the kicked rotor shows a strong dependence on κ, the kick strength. Study the change of the diagram from regular islands at small κ, to the co-existence of regular islands and a chaotic sea at intermediate κ, until the whole diagram is flooded by the chaotic sea at larger κ.

(a) Iterate the standard map at κ = 0 and plot the trajectories as dots (not lines) from the initial conditions (θ0, ω0) = (0, iπ/N), i = 0, 1, 2, ..., N. Choose N = 20 to 40. Observe the pattern of dots, and explain.

(b) Do the same for κ = 0.2, 1, 2, 5. Determine the value of κ where chaotic motion first appears. How can we be sure the motion is chaotic? What is κ when there are no surviving islands and no empty voids? Note: when κ is large, ωn can become unbounded. Make sure to re-map both θn and ωn to the range [−π, π].

(c)∗ Compute the correlation dimension (A:5.41) of the standard map in the chaotic region, at κ = 5, 10, 20. Average your results over boxes scattered in the phase space and over the initial conditions in each case. Interpret your results.


P5.4∗ In this project we investigate the stadium billiard problem. Assume the radius r = 1 in some arbitrary units, and set the center of the stadium as the origin.

(a) Write a program that simulates the stadium billiard and animates the motion of the particle using VPython. Modularize your code such that integration, collision detection, and reflection are separate tasks, tested independently. Animation not only provides visualization of the motion; in this case its visual cues can also help us greatly to find and squash bugs in the program. Start with a simple collision detection scheme and move to a more sophisticated one after the program is working.

(b) Plot the trajectories for several d-values, e.g., d = 0, 0.01, 0.1, 1.0, 5.0, etc. Comment on your results. What happens if the particle starts from the origin?

(c) Pick a case from above that you find interesting, say d = 0.1. Generate Poincaré maps from several initial conditions (a dozen or so), e.g., start with the particle at equidistant points from top to bottom at midfield, with the same velocity. Discuss your results.

(d) Compute the Lyapunov exponent for several d-values from small to large. What is the dependence on d? What do you conclude?

(e)∗ Investigate any aspect you find interesting in the exploration of the stadium billiard. For example, given an initial condition, how resistant is the forbidden zone against encroachment as d is varied?

5.A Renormalization and self-similarity

To understand that the universality exhibited in the logistic map is rather general and not a quirk limited to this map, we present a worked example using renormalization theory [35]. In the small neighborhood of a bifurcation, the lowest order of nonlinearity is second order, so we can use the second-order logistic map without loss of generality. We show only the key steps and leave verification and details to Project P5.1.

Let us zoom in to the first bifurcation structure (beginning of period 2, Figure A:5.5) at r1 = 3/4, x∗ = 1 − 1/4r1, in a small neighborhood r = r1 + ∆r and x = x∗ + ∆x. We expand the map function in powers of ∆x


to obtain

f = A − [1 + 4∆r(2x∗ − 1)]∆x − 4r∆x²,    (5.9)

where A is a constant independent of ∆x. Dropping the constant A and setting y = −4r∆x, equivalent to a shift and rescaling, we obtain a renormalized map function

f(y) = −(1 + 4s)y + y²,    (5.10)

where s = ∆r(2x∗ − 1) is the new control parameter. Note that the first bifurcation occurs at s = 0, where the derivative f′(0) is equal to −1.

From the self-similarity of the bifurcation diagram discussed above, we expect that Eq. (5.10) is the universal map function in the sense that it is the prototypical expansion of a nonlinear system in the immediate neighborhood of a bifurcation. Our results obtained from Eq. (5.10) will be approximate because we have retained terms up to second order only. However, they should be qualitatively correct since the second-order nonlinearity, valid for any nonlinear system in the local neighborhood, is included.

With this expectation, we wish to apply this universal function to the period-4 bifurcation. We need to find the fixed points first, f^(2)(y∗) = y∗, which turn out to be

y∗± = 2s ± 2√(s(s + 1)).    (5.11)

The other two fixed points from period 2, 0 and 2 + 4s, are of no interest to us.

But where does the transition to period 4 occur? The answer must lie in the control parameter of the universal map function in the neighborhood of the transition. Therefore, we want to expand the period-4 function f^(2)(y) in the neighborhood of y∗.

Let y = y∗+ + z, and expand f^(2)(y∗+ + z) in a Taylor series. The results are

f^(2)(y∗+ + z) = y∗+ + (1 − 16s − 16s²)z + Bz² + ...,    (5.12a)

B = 16s² + 16s − 12√(s(s + 1)).    (5.12b)

We again apply the renormalization techniques of shifting and rescaling to Eq. (5.12a) by dropping y∗+ and rescaling z → Bz. We have our new renormalized map function at the period-4 bifurcation as

g(z) = (1 − 16s − 16s²)z + z².    (5.13)


This looks nearly identical to Eq. (5.10) except for the coefficient in front of the linear term. To fix it, we set −(1 + 4t) = 1 − 16s − 16s², and rewrite (5.13) as

t = (8s² + 8s − 1)/2,    (5.14a)

g(z) = −(1 + 4t)z + z².    (5.14b)

Through renormalization, the new map function g(z) has the same form as f(y) in (5.10). This shows that near the period-4 bifurcation, the map iterates are given by

zn+1 = −(1 + 4t)zn + zn²,    (5.15)

the same as near the period-2 bifurcation. It confirms the universality of self-similarity and its infinite replicability at finer and finer scales.

Equation (5.15) means that this bifurcation starts at t = 0, just as the previous one started at s = 0. Solving for (the new) s by setting t = 0 in Eq. (5.14a), we obtain

s = (√6 − 2)/4.    (5.16)

Because the origin of s has been shifted to r1, we need to add it to s to obtain the control parameter for the period-4 bifurcation,

r2 = r1 + s = 3/4 + (√6 − 2)/4 = (√6 + 1)/4 = 0.862372...    (5.17)

This is the exact result for the beginning of the period-4 cycle. Of course, we need not stop here. We can solve for the next value r3 by setting t = s on the LHS of (5.14a), and so on. This way, we can obtain the approximate rn values, including the limit r∞ = 0.890388, as well as the δ number. Further investigation is left to Project P5.1.

5.B Fast Fourier transform (FFT)

5.B.1 Discrete Fourier transform

Let f(t) be a periodic function in time with period T. Divide the period into N equidistant intervals, and sample the first N data points such that

fk ≡ f(tk),  tk = k∆,  ∆ = T/N,  k = 0, 1, ..., N − 1.    (5.18)


Let us define the discrete Fourier transform (DFT) as

gm ≡ g(ωm) = ∑_{k=0}^{N−1} f(tk) exp(−iωmtk) = ∑_{k=0}^{N−1} fk exp(−iωmk∆),    (5.19)

where ωm denotes the angular frequency components. Below, we drop the word "angular", and frequency is understood to mean angular frequency. Introducing the basic unit of frequency as 2π/T, ωm can be written as

ωm = 2πm/T,  m = 0, 1, ..., N − 1.    (5.20)

Note that at this point, the smallest frequency is ω0 = 0 and the largest is ωN−1 = 2π(N − 1)/T = 2π(N − 1)/N∆ ∼ 2π/∆ for large N. Later we will see that we can also interpret the frequencies as having positive and negative values. We also see from Eqs. (5.18) and (5.20) that the time and frequency intervals are related reciprocally as

δt = ∆ = 2π/Ω = 2π/(Nδω),    δω = 2π/T = 2π/(N∆),    (5.21)

where Ω = 2π/∆ = 2πN/T = Nδω.

Now, the Fourier transform gm can be written as

gm = ∑_{k=0}^{N−1} fk exp(−i2πmk∆/T) = ∑_{k=0}^{N−1} fk exp(−2πimk/N),  m = 0, ..., N − 1,    (5.22)

where we have used the fact ∆/T = 1/N.

Let us look at some properties of gm. First, if there is only one data point, N = 1, the Fourier transform g0 and the data point f0 are identical,

g0 = f0, if N = 1. (5.23)

Second, gm is periodic with period N , i.e.,

gm = gm+N . (periodicity) (5.24)

This comes from the fact that shifting m by an integer multiple of N, m → m + nN, multiplies each term in Eq. (5.22) by a factor of the form exp(−2πink) = 1. This


periodic property will come in handy in FFT later on. Third, gm and fm are uniquely related to each other due to the following orthogonality relation,

∑_{n=0}^{N−1} exp(−2πimn/N) exp(2πink/N) = Nδmk.    (5.25)

Here δmk denotes the Kronecker delta function. We leave the proof to the interested reader as an exercise. By multiplying Eq. (5.22) by exp(2πimn/N) on both sides, and summing over m using Eq. (5.25), we can obtain fk as

fk = (1/N) ∑_{m=0}^{N−1} gm exp(2πimk/N).    (5.26)

Equations (5.22) and (5.26) form a reciprocal relationship. The set of data points {fk} uniquely determines {gm} and vice versa. The relationship is asymmetric in our convention with respect to the factor 1/N. Other conventions include moving this factor to gm, or the symmetric placement of 1/√N on both. Readers familiar with quantum mechanics will recognize that Eq. (5.22) is similar to the coordinate and momentum space representations of the wave function (see Eqs. (A:8.33a) and (A:8.33b)).

As an example, let us work out the DFT of the simple cosine function with the following parameters,

f(t) = cos(t),  period T = 2π,  N = 1, 2, 4,  ∆ = T/N.    (5.27)

There is only one frequency in f(t): ω = 1. The following table gives the sampling times and data points for N = 1, 2, 4.

N  | 1  | 2     | 4
∆  | 2π | π     | π/2
k  | 0  | 0  1  | 0  1  2  3
tk | 0  | 0  π  | 0  π/2  π  3π/2
fk | 1  | 1  −1 | 1  0  −1  0

The corresponding Fourier transform gm can be readily calculated from Eq. (5.22). For example, if N = 2,

gm = f0 exp(−2πim · 0/2) + f1 exp(−2πim · 1/2) = 1 − (−1)^m.    (5.28)

Incidentally, the above expression is valid for N = 4 as well. Values of gm are tabulated below.


N  | 1 | 2     | 4
m  | 0 | 0  1  | 0  1  2  3
ωm | 0 | 0  1  | 0  1  2  3
gm | 1 | 0  2  | 0  2  0  2

For N = 1, there is only one frequency component, g0 = 1, at ω0 = 0. It is completely wrong, of course, as we cannot hope to get any meaningful variation of the function from a single data point. For N = 2, things look a little better, because there is only the component g1 = 2 at ω1 = 1, which coincides exactly with the frequency present in our original function cos(t), and there are no spurious frequencies (granted, there are only two frequencies here). As you may suspect already, this is just a coincidence.

Increasing N further to 4, we see two frequency components show up in gm, ω1 = 1 and ω3 = 3, with equal magnitude g1 = g3 = 2. The first frequency, ω1 = 1, is as expected. The second one at ω3 = 3, however, appears to be unwarranted. Things are not getting worse, though. The appearance of ω3 is due to the fact that with 4 points, cos(t) and cos(3t) are identical at the sample times tk, so the DFT is just mimicking the behavior of the latter; this is appropriately called aliasing. Increasing N and recasting the results can help remove aliasing. We will return to this point shortly. But next, let us discuss a very efficient method of computing gm when N is large: the FFT method.

5.B.2 FFT method

The direct approach to the DFT as given by Eq. (5.22) is straightforward to implement. The program would consist of two nested loops, the inner loop running over k and the outer one over m. For each component gm, the summation over k requires N operations (multiplications). To compute all N components, N² operations are needed. For large N, this is increasingly inefficient and slow.

A fast algorithm was discovered that can compute the Fourier transform in N lnN operations [6], just a little above linear scaling. It is the FFT method we describe below. With the FFT method, one can achieve large gains in speed. It is one of the few truly significant algorithms discovered in the digital age, and it has had a tremendous impact on computational physics and nearly every field of scientific computing.


Recursive FFT

The simplest FFT algorithm assumes that N is a power of two, i.e., N = 2^L, where L is an integer. The basic idea invokes the tried-and-true paradigm that has been used successfully in many a problem: divide and conquer. The general roadmap is as follows. We take the original problem of size N, divide it into two problems of size N/2 each, again into four problems of size N/4, and so on, until the problem is reduced to size 1. What is the Fourier transform of one data point? It is just the function value itself, according to the properties of the DFT (5.23). We then fold the problems back up, combining single data points into a pair, then a pair of pairs, etc. Along the way, the periodic condition (5.24) is used repeatedly, which is the essential element for the speed gain in FFT. When the problem is of size N again, it will be completely solved.

Let us now walk through the details. To divide the problem of size N, we begin from Eq. (5.22), dividing the sum into even and odd terms as

gm = ∑_{k=0,2,4,...}^{N−2} fk exp(−i2πmk/N) + ∑_{k=1,3,5,...}^{N−1} fk exp(−i2πmk/N).    (5.29)

Making the index substitutions k → 2k in the even sum, and k → 2k + 1 in the odd sum, we obtain two sums of size N/2,

gm = ∑_{k=0}^{N/2−1} f2k exp(−i2πm2k/N) + ∑_{k=0}^{N/2−1} f2k+1 exp(−i2πm(2k + 1)/N).

The sizes of the sums are correct, N/2 each, but the exponentials need to be changed to make them into the form of a Fourier transform of size N/2. This may be done by setting 2k/N = k/(N/2), and pulling the k-independent exponential factor in the second term out in front. It leads to

gm,0 = ∑_{k=0}^{N/2−1} f2k exp[−i2πmk/(N/2)],

gm,1 = ∑_{k=0}^{N/2−1} f2k+1 exp[−i2πmk/(N/2)],

gm = gm,0 + w_N^m gm,1,    w_N^m = e^{−i2πm/N}.    (5.30)


The factor w_N^m is sometimes called the twiddle factor. Both sums, gm,0 and gm,1, the even and odd terms, respectively, are now in the correct form of a Fourier transform of size N/2. Furthermore, by Eq. (5.24), they are periodic in m with period N/2, i.e.,

gm+N/2,0 = gm,0,    gm+N/2,1 = gm,1.    (5.31)

Also, the twiddle factor w_N^m transforms as

w_N^{m+N/2} = e^{−i2π(m+N/2)/N} = e^{−iπ} e^{−i2πm/N} = −w_N^m.    (5.32)

Therefore, it is understood that the effective range of m in Eq. (5.30) is 0 ≤ m ≤ N/2 − 1, the same as the new problem size.

If we were to calculate them directly, no speedup would be gained. But if gm,0 and gm,1 are known for 0 ≤ m ≤ N/2 − 1, hence the first half of gm also known, the other half can be calculated by the periodic conditions (5.31) and (5.32) as

gm+N/2 = gm+N/2,0 + w_N^{m+N/2} gm+N/2,1 = gm,0 − w_N^m gm,1,  N/2 ≤ m ≤ N − 1.    (5.33)

We arrive at the following conclusion: if the two subdivisions of size N/2 are solved, they can be combined to solve the original problem of size N without re-summing. This is the source of the computational cost saving. The process can be applied repeatedly to problems of even size. Equations (5.30) and (5.33) are straightforward to implement using recursive programming (see Program 5.1). It requires a separate storage array; the input array is unchanged on return. We need go no further if we just want a recursive FFT.
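A minimal recursive sketch of Eqs. (5.30) and (5.33) might look like this (our own version, not the book's Program 5.1; it assumes the input length is a power of two):

```python
import cmath

def fft(f):
    """Recursive FFT via Eqs. (5.30) and (5.33); len(f) a power of two."""
    N = len(f)
    if N == 1:
        return list(f)                    # Eq. (5.23): size-one transform
    g0 = fft(f[0::2])                     # even-index subproblem, size N/2
    g1 = fft(f[1::2])                     # odd-index subproblem, size N/2
    g = [0j]*N
    for m in range(N//2):
        w = cmath.exp(-2j*cmath.pi*m/N)   # twiddle factor w_N^m
        g[m] = g0[m] + w*g1[m]            # Eq. (5.30), first half
        g[m + N//2] = g0[m] - w*g1[m]     # Eq. (5.33), second half
    return g
```

The slicing f[0::2] and f[1::2] performs the even/odd split of Eq. (5.29), at the cost of the extra storage mentioned above.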

Butterfly operations

If we want an iterative FFT with in-place swapping and no extra storage, we need to consider the FFT tree. At each stage of the subdivision, the pair of equations (5.30) and (5.33) forms a relationship known as a butterfly, depicted in Figure 5.6.


Figure 5.6: The butterfly diagram at an FFT stage: gm,0 and gm,1 combine into gm = gm,0 + w_N^m gm,1 and gm+d = gm,0 − w_N^m gm,1. The arrows indicate the mixing of terms with the labeled weights. The parameter d is the distance between the butterfly pair at the stage (equal to the period as well, see text).

We need not stop at N/2, of course. Each problem of size N/2 can be further subdivided into two size-N/4 problems from Eq. (5.30) as

gm,00 = ∑_{k=0}^{N/4−1} f4k exp[−i2πmk/(N/4)],

gm,01 = ∑_{k=0}^{N/4−1} f4k+2 exp[−i2πmk/(N/4)],

gm,0 = gm,00 + w_{N/2}^m gm,01.    (5.34)

Similarly, we have for gm,1,

gm,1 = gm,10 + w_{N/2}^m gm,11.    (5.35)

We see the developing pattern: in each subdivision, the size is halved, and a 0 or 1 is appended to the parent subscript to denote the even or odd indices (k), respectively.

As before, the butterfly operations (5.33) can be used to obtain gm+N/4,0 and gm+N/4,1 without new summations. Note that each time the problem size is halved, so is the period. After L stages, the effective range of m becomes 0 ≤ m ≤ N/2^L − 1.

Since N is a power of two, we can keep subdividing the problems until we reach size one. Then what? As discussed earlier, the Fourier transform


of size one is the data point itself, Eq. (5.23). But what are the locations of the data points in terms of their initial indices? As can be seen from Eqs. (5.30) and (5.34), each subdivision shuffles the indices depending on whether they are even or odd at that stage.

The FFT tree

We can figure out the locations by recursively separating the even and odd indices. Figure 5.7 illustrates the process for N = 8. Starting with the initial array of size N, the even indices are collated into the first half of the array, and the odd indices into the second half. The process is repeated for the new subarrays (and the new indices, not the original ones), until each subarray is of size one. The final locations are the bit-reversed initial indices.
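The bit-reversal shuffle of Figure 5.7 amounts to reversing the L-bit pattern of each index; a small helper of our own:

```python
def bit_reverse_order(N):
    """Return the indices 0..N-1 in bit-reversed order (N a power of two)."""
    L = N.bit_length() - 1           # number of bits, N = 2**L
    order = []
    for k in range(N):
        r = 0
        for b in range(L):           # reverse the L-bit pattern of k
            r = (r << 1) | ((k >> b) & 1)
        order.append(r)
    return order
```

For N = 8 this reproduces the final column of Figure 5.7: [0, 4, 2, 6, 1, 5, 3, 7].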

0 1 2 3 4 5 6 7  →  0 2 4 6 1 3 5 7  →  0 4 2 6 1 5 3 7

Figure 5.7: The shuffling of data points in successive stages of halving the problem size (N = 8). The binary column refers to the binary patterns of the initial indices (000, 001, ..., 111). The reverse binary column contains the final indices, which are the bit reversals of the initial indices (000, 100, 010, 110, 001, 101, 011, 111, i.e., 0, 4, 2, 6, 1, 5, 3, 7).

Including the w_N^m factor and following Eqs. (5.30) and (5.35), we can explicitly write the process out for three stages as

gm = gm,0 + w_N^m gm,1
   = gm,00 + w_{N/2}^m gm,01 + w_N^m (gm,10 + w_{N/2}^m gm,11)
   = gm,000 + w_{N/4}^m gm,001 + w_{N/2}^m (gm,010 + w_{N/4}^m gm,011)
   + w_N^m [gm,100 + w_{N/4}^m gm,101 + w_{N/2}^m (gm,110 + w_{N/4}^m gm,111)].    (5.36)


It is graphically represented as a tree in Figure 5.8.

g
g0 g1
g00 g01 g10 g11
g000 g001 g010 g011 g100 g101 g110 g111

Figure 5.8: The FFT tree. Each time the tree branches to the right (dark circles), a twiddle factor is multiplied to that branch.

We can see that the initial indices are the bit patterns of the subscripts (g...) formed by appending either '0' or '1' at each stage. When bit-reversed, they give the Fourier transform g... at that level. At the last level, the terms gxyz are just the data points according to the following mapping (N = 8):

g000 g001 g010 g011 g100 g101 g110 g111
f0   f4   f2   f6   f1   f5   f3   f7

We can verify that substitution of gxyz by their fk values into Eq. (5.36) yields the correct Fourier transform for N = 8. Note that the effective range of m in Eq. (5.36) decreases by a factor of 2 after each stage, so at the last stage gm,xyz = g0,xyz.

Iterative FFT

Of course, in actual numerical computation we do not explicitly sum the terms in Eq. (5.36), for no saving is to be had that way. Instead, we start from the last stage (the deepest level), and fold the problems back up by combining the appropriate butterfly pairs. If we start with bit-reversed input, the output will then be in the correct order at stage 0, as illustrated in Figure 5.9.

At the start (Figure 5.9 (top), stage 3 for N = 8), neighboring butterfly pairs are adjacent to each other, all in one group. The problem size is 1, and the Fourier transforms are just the data points. The period in the butterfly operations is also 1, so g0,xy and g1,xy are generated. In the next level up, most parameters are doubled, including the distance between


5.B. Fast Fourier transform (FFT) 39

[Figure: butterfly data-flow diagrams. Bottom panel (N = 4) intermediate and final values: g_{0,0} = f0 + f2, g_{1,0} = f0 − f2, g_{0,1} = f1 + f3, g_{1,1} = f1 − f3; g0 = g_{0,0} + g_{0,1} = f0 + f1 + f2 + f3, g2 = g_{0,0} − g_{0,1} = f0 − f1 + f2 − f3, g1 = g_{1,0} − i g_{1,1} = f0 − if1 − f2 + if3, g3 = g_{1,0} + i g_{1,1} = f0 + if1 − f2 − if3.]

Figure 5.9: Butterfly operations. Top: N = 8, without twiddle factors. Each pair is indicated by knotted arrows; horizontal arrows are omitted. Bottom: N = 4, with twiddle factors w^m_N, and the Fourier transform in the intermediate and final stages. If no twiddle factor is shown, it is 1.

the elements of a butterfly pair, problem size and period, and the gap to the next butterfly pair for a given m. The exceptions are the group size and the number of pairs per group, which are halved. When the distance between elements of a butterfly pair exceeds the group size, the operations switch from intragroup to intergroup. At each level, there are N/2 butterfly operations, and they involve only pair-wise array elements, so they can be done in-place. When they are finished, the array is ready for butterfly operations in the next level. After traversing log₂N levels, the problem is done, and the array is in the correct order of frequencies.

A more detailed illustration is shown in Figure 5.9 (bottom) for N = 4. Each operation includes the twiddle factor w^m_M and the intermediate results. You can verify that the first, second, and third columns contain the correct Fourier transform for N = 1, 2, 4, respectively.

If N = 2^L, the number of arithmetic operations required is (N/2)LC = N log₂N C/2. Here, C is the number of floating-point operations in a butterfly operation, which by Figure 5.6 is C = 3: one multiplication plus two additions. So the total number of operations is 3N log₂N/2.
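For a sense of scale, this count can be compared with the roughly N² operations of a direct DFT (a rough tally, constant factors aside; fft_ops is a throwaway helper, not from the book):

```python
import math

# Rough operation counts: direct DFT ~ N*N, FFT ~ 3*N*log2(N)/2.
def fft_ops(N):
    return 3*N*int(math.log2(N))//2

for N in (64, 1024):
    print(N, N*N, fft_ops(N))   # e.g., N = 1024: 1048576 vs 15360
```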

The iterative FFT algorithm is also shown in Program 5.1, along with the two required subroutines for doing bit reversal of integers and arrays. It does the transform in-place, so save a copy of the input if you need the original.

5.B.3 Positive and negative frequencies, aliasing

Let us continue the earlier example of Eq. (5.27), but this time use Program 5.1, imported as fft, to compute the Fourier transform of cos(t). This can be done with the following code snippet.

import numpy as np
import fft                  # Program 5.1 (fft.py)

T, L = 2*np.pi, 3
N = 2**L
t = np.arange(N)*(T/N)      # N data points
f = np.cos(t) + 0*1j        # make f complex
gm = fft.fft(f, L)

The results (|gm|) for N = 4 and 8 are shown in Figure 5.10. We already know the results for N = 4 from hand computation earlier, and we get the same results with FFT, of course. We attributed the ω = 3 component to aliasing. The odd thing is that doubling N to 8 does eliminate that alias, but a new one at ω = 7 appears. The magnitudes are also different (2 vs. 4), but this is no cause for concern, as we are using the unnormalized Fourier transform, and are interested only in the relative distribution. In fact, if we keep doubling N, there is always that second component at ω = N − 1, which is shifted to higher and higher values.
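The N = 8 spectrum can be cross-checked against NumPy's built-in FFT (a check on the result, not the book's routine):

```python
import numpy as np

# DFT of cos(t) sampled at 8 points over one period: peaks of
# magnitude 4 at m = 1 and m = 7.
N = 8
t = np.arange(N)*(2*np.pi/N)
g = np.fft.fft(np.cos(t))
print(np.round(np.abs(g), 6))   # → [0. 4. 0. 0. 0. 0. 0. 4.]
```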

This is contrary to what we expect. As N increases, we would expect the sampling to be increasingly accurate. Evidently this is not the case, it seems.

Figure 5.10: The Fourier transform of cos(t) for N = 4 (left) and 8 (right).

It turns out that this is due to the mathematics of the DFT with exponential functions. We can always write the cosine as

2 cos(t) = exp(it) + exp(−it).    (5.37)

So mathematically, the DFT of cos(t) should show two components, at ω = ±1. Of course, physically, both ± frequencies are the same thing. But why is the second component (alias) in Figure 5.10 not occurring at −1? The answer is: it should, and we need to interpret the results that way.

We proceed to break the sum in Eq. (5.26) into two parts,

N fk = Σ_{m=0}^{N/2−1} gm exp(2πimk/N) + Σ_{m=N/2}^{N−1} gm exp(2πimk/N).    (5.38)

In the second sum, we make the variable substitution m → m + N to get

Σ_{m=N/2}^{N−1} gm exp(2πimk/N) = Σ_{m=−N/2}^{−1} g_{m+N} exp(2πi(m+N)k/N)
                                = Σ_{m=−N/2}^{−1} g_{m+N} exp(2πimk/N).    (5.39)

Putting this back into Eq. (5.38), we have

fk = (1/N) Σ_{m=−N/2}^{N/2−1} gm exp(2πimk/N),  with gm ≡ g_{m+N} for m < 0.    (5.40)

Equation (5.40) naturally lends itself to the interpretation that the DFT has both positive and negative frequencies mathematically. Physically, there is no difference in the power spectrum (|gm|²) regarding the sign of the frequency. So generally we double the power except at m = 0 and −N/2.¹
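The power-doubling rule can be illustrated with NumPy's FFT (an illustration, not the book's code):

```python
import numpy as np

# One-sided power spectrum of cos(t), N = 8: keep bins m = 0..N/2 and
# double the power at the interior bins 1..N/2-1.
N = 8
t = np.arange(N)*(2*np.pi/N)
g = np.fft.fft(np.cos(t))
p = np.abs(g[:N//2 + 1])**2
p[1:N//2] *= 2.0                # double, except m = 0 and N/2
# only m = 1 survives, with total power 2*16 = 32
```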

The positive and negative frequencies in the results returned from the FFT code have the arrangement shown in Table 5.1.

Table 5.1: The order of frequencies.

m    0    1    2    ...   N/2−1     −N/2    −N/2+1    ...   −2      −1
gm   g0   g1   g2   ...   gN/2−1    gN/2    gN/2+1    ...   gN−2    gN−1

The first half of the array contains positive frequencies. The last array element contains the first negative frequency m = −1, the second last element m = −2, etc.

Figure 5.11: The Fourier transform of cos(t) for N = 4 (left) and 8 (right) in positive and negative frequencies.

To put gm in the right order, we add one more line to the earlier code:

gm = np.concatenate((gm[N//2:], gm[:N//2])) # put in right order

¹In quantum mechanics, the wave functions in position space and in momentum space are also related by the Fourier transform. Physically, it is imperative to interpret the momentum as consisting of both positive and negative values if the FFT is used, Eq. (A:8.34).

The NumPy concatenate function combines the two halves of gm so the second half comes before the first half. Now the array gm contains the frequencies in the right order, from negative to positive frequencies in ascending order. For N = 8, this corresponds to a change from

ωm = [0, 1, 2, 3, 4, 5, 6, 7], gm = [0, 4, 0, 0, 0, 0, 0, 4], (5.41)

to the correct order

ωm = [−4,−3,−2,−1, 0, 1, 2, 3], gm = [0, 0, 0, 4, 0, 4, 0, 0]. (5.42)

With ± frequencies, we can replot the data of Figure 5.10, as shown in Figure 5.11. There is no longer any alias. We can see that the Fourier transform of cos(t) is reproduced correctly for both N = 4 and 8, at ω = ±1.
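Incidentally, NumPy ships a helper that performs exactly this reordering; the equivalence is easy to check (np.fft.fftshift is a real NumPy function, the array below is just a stand-in for an FFT output):

```python
import numpy as np

g = np.arange(8.0)                        # stand-in for an FFT output
a = np.concatenate((g[8//2:], g[:8//2]))  # manual reordering from the text
b = np.fft.fftshift(g)                    # built-in equivalent
print(a)   # → [4. 5. 6. 7. 0. 1. 2. 3.]
```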

5.B.4 Minimum sampling frequency

There is still the question we discussed earlier: if we use N = 4, then cos(t) and cos(3t) produce exactly the same Fourier transform, because the fk are the same at the sampling points. But if we used N = 8, the Fourier transform of cos(3t) would be correctly reproduced, and there is no cross contamination. Now the question is, how many sampling points are enough? And for a given N, what part of the spectrum can be trusted?

To answer these questions, we look to the sampling theorem. Let nmax be the highest frequency in the source f(t). For example, nmax would be 1 and 3 for cos(t) and cos(3t), respectively. Just as cos(t) is mathematically written as a linear superposition of the two frequencies ±1 in Eq. (5.37), f(t) can be adequately expressed as a sum of positive-and-negative frequency pairs

f(t) = Σ_{m=−nmax}^{nmax} a_m exp(2πimt/T).    (5.43)

Equation (5.43) covers all harmonics from 0 to nmax, inclusive. This is true because the exp(±2πimt/T) are all linearly independent.

Comparing with Eq. (5.40), we see that the sampling frequency must be such that N/2 ≥ nmax, or N ≥ 2nmax, to correctly reproduce the source. This is the sampling theorem. Put another way: if we choose N sampling points, the Fourier components can be trusted only up to half the sampling frequency,

ω_trust = (2π/T)(N/2) = πN/T = π/∆.    (5.44)


This minimum sampling frequency is also known as the Nyquist frequency. In terms of quantum mechanics, it is just a statement of Heisenberg's uncertainty principle, which must be satisfied between frequency and time, as well as between position and momentum.

In summary, with both positive and negative frequencies, the Fourier transform can be trusted for frequencies up to ±N/2 for a given N, without aliasing. For cos(t), N = 4 is sufficient, and for cos(3t), N = 8.

In practice the source is usually not as simple as a sinusoidal function, so it may be difficult to know the highest frequency a priori. In that case, one must make some educated guesses based on other information, such as physical grounds. And we must check to make sure that the sampling frequency is sufficiently high that the desired part of the spectrum is stable.
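Such a check can be automated; a minimal sketch using NumPy's FFT (not the book's program; peaks is a hypothetical helper) that locates the nonzero bins of cos(3t) for two sampling sizes:

```python
import numpy as np

# cos(3t) sampled with N = 4 aliases into bins [1, 3] (it is mistaken
# for cos(t)); with N = 8 >= 2*3 the true bins [3, 5] appear.
def peaks(N):
    t = np.arange(N)*(2*np.pi/N)
    g = np.abs(np.fft.fft(np.cos(3*t)))
    return [int(m) for m in np.flatnonzero(g > 1e-9)]

print(peaks(4), peaks(8))   # → [1, 3] [3, 5]
```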

5.C Program listings and descriptions

Program listing 5.1: Fast Fourier transform (FFT) (fft.py)

 1  import math as ma, numpy as np    # needed for constants e, π
 2
 3  def fft_rec(f, L):        # recursive FFT of 2**L pts, f unchanged
 4      if (L == 0): return f         # length 1, g0 = f0
 5
 6      g0 = fft_rec(f[::2], L-1)     # even part [0, 2, 4, ...]
 7      g1 = fft_rec(f[1::2], L-1)    # odd part [1, 3, 5, ...]
 8
 9      N = 2**L
10      g, M = [0.0]*N, N//2
11      for m in range(M):            # assemble two halves
12          w = ma.e**(-2j*ma.pi*m/N)
13          g[m] = g0[m] + w*g1[m]    # 1st half, Eq. (5.30)
14          g[m+M] = g0[m] - w*g1[m]  # 2nd half, Eq. (5.33)
15      return g
16
17  def ifft_rec(f, L):               # inverse FFT
18      g = np.conjugate(f)
19      return np.conjugate(fft_rec(g, L))/2**L
20
21  def fft(f, L):        # return FFT of 2**L data points, f changed
22      divs = 1          # number of divisions, initially = 1
23      pairs = 2**(L-1)  # number of pairs per division, initially = N/2
24      stride = divs     # period, distance between pair-wise elements
25      bit_reverse_array(f, L)       # bit reverse array
26      for level in range(L):        # iterate log2(N) levels
27          gap = 2*divs              # distance to next pair
28          w = 1.0                   # exp(-i m pi/divs), initial = 1
29          x = ma.e**(-1j*ma.pi/divs)    # cumulative exp factor
30          for m in range(divs):     # run over each division
31              # combine butterflies, start with 1st item in each division
32              for i in range(m, pairs*gap, gap):
33                  tmp = w*f[i + stride]
34                  f[i + stride] = f[i] - tmp
35                  f[i] = f[i] + tmp
36              w = w*x               # update w
37          divs = divs*2             # subdivide the problem
38          pairs = pairs//2
39          stride = divs
40      return f
41  # end fft()
42
43  # return the bit-reversed integer n in a bit field of 'width'
44  def bit_reverse(n, width):
45      bits = list(bin(n))           # convert to bits, e.g., 2 --> '0b10'
46      bits.reverse()                # in-place reverse bits to '01b0'
47      bits = ''.join(bits[:-2])     # back to string & discard 'b0' chars
48      pad = width - len(bits)
49      if (pad > 0):                 # pad trailing '0', e.g., '10' to '1000'
50          bits = bits + '0'*pad
51      return int(bits, 2)           # binary to decimal
52
53  # bit reverse an array, mostly for array size of 2**integer power
54  def bit_reverse_array(a, width):
55      for i in range(len(a)):
56          j = bit_reverse(i, width)
57          if (i < j):               # swap only once
58              a[i], a[j] = a[j], a[i]

Both the recursive and the iterative FFT functions assume that the number of input data points is a power of 2, N = 2^L. If it is not, either pad it with zeros or cut it off at the nearest boundary.

The recursive function does not alter the input array. It copies the even and odd elements of the array by slicing (lines 6–7). The iterative function fft(), on the other hand, overwrites the input array, which contains the transform on return. Note, however, that if the input array is not a built-in Python list, for instance a NumPy array, make sure it is an array of complex type, since the transform is complex even though the input may be real. Otherwise, the imaginary parts are lost in type casting. If in doubt, use the recursive function, which is a bit slower due to overhead. When calculating the power spectrum, take the absolute magnitude of the transforms because they are generally complex.

On return, the first half of the array stores the positive frequencies, and the second half the negative frequencies, but in reverse order of the magnitude of the frequencies (Table 5.1). It can be reordered so it runs from negative to positive frequencies in ascending order (see Section 5.B.3 and Eqs. (5.41) and (5.42)).


Chapter 6

Oscillations and waves

We simulate catenary problems closely related to the displacement of a string under static forces discussed in Chapter A:6. We also discuss solutions of linear equations by the Gauss elimination method in Section 6.A.

6.1 The hanging chain and the catenary

When a string is pulled down by its own weight, its shape is called a catenary. It is a common sight, seen in the shape of power lines or ropes between suspension points.

6.1.1 The catenary equation

Consider a free-hanging chain (or cable) suspended at the two ends. The force on the segment between x and x + ∆x (see Figure A:6.9) due to gravity is ∆F = −ρg∆l, where ρ is the linear mass density, and ∆l the length of the segment. For small ∆x → 0, the length can be approximated as ∆l ≈ ∆x/cos θ, where θ is the angle of the tangent at x. The load (force per unit length) is

f(x) = ∆F/∆x = −ρg ∆l/∆x = −ρg/cos θ.    (6.1)


Using the identity cos θ = 1/√(1 + tan²θ) and tan θ = u′, the load becomes

f(x) = −ρg √(1 + u′²).    (6.2)

Substituting Eq. (6.2) into (A:6.29), we obtain the catenary equation

u″ = α(1 + u′²)^{1/2},  α = ρg/T.  (catenary)    (6.3)

For the catenary, the load depends on the shape of the string itself through u′(x). The equation is characterized by a single parameter α, which measures the relative strength of the weight to the tension; higher α means smaller tension (deeper droop).

6.1.2 Self-consistent solutions

For constant α, Eq. (6.3) admits analytic solutions. However, we are interested in the numerical solutions as a general approach, in case the density is not constant or there are external loads in addition to weight. Because Eq. (6.3) is not a linear equation, it cannot be converted into a linear system of equations in either FDM or FEM. We need to modify our standard approach. We take an iterative, self-consistent approach.

In self-consistent methods, we solve a given problem iteratively, starting from an initial guess. In each iteration, we use the previous iteration to obtain an improved solution. The process continues until the solutions are self-consistent.

For our catenary problem, we can use the standard FDM or FEM in each iteration. The algorithm is as follows, using FDM as the core method.

1. Solve Eq. (6.3) assuming u′_i = 0 everywhere, i.e., B_i = αh² in Eq. (A:6.36). Denote the solution u⁰.

2. Solve Eq. (6.3) again, but evaluate u′_i using the last solution u⁰. With the three-point formula (A:6.31a), we have

   u′_i = (u⁰_{i+1} − u⁰_{i−1})/(2h).

   Prepare a new matrix B as B_i = αh²(1 + u′_i²)^{1/2} (matrix A remains the same), and obtain a new solution u¹.

3. Repeat step 2, replacing u⁰ by u¹ when preparing matrix B. Continue the process until the solutions uⁿ⁻¹ and uⁿ are sufficiently close within some tolerance, say when the cumulative error is below a small value, Σ_i |uⁿ_i − uⁿ⁻¹_i| ≤ ε. We have then obtained the correct, self-consistent solution.

In principle, any initial guess satisfying the boundary condition can be used in step 1. To speed up convergence, we should start as close to the final solution as possible.
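The iteration just described can be sketched compactly with an FDM core. This is an illustration only: the function name, grid size, tolerance, and the boundary values u(0) = u(1) = 0 are my own choices, not the book's Program:

```python
import numpy as np

def catenary_fdm(alpha=2.0, N=100, eps=1e-8, itmax=100):
    """Self-consistently solve u'' = alpha*sqrt(1 + u'^2), u(0)=u(1)=0."""
    h = 1.0/N
    x = np.linspace(0.0, 1.0, N + 1)
    # tridiagonal FDM matrix over the N-1 interior points
    A = (np.diag(-2.0*np.ones(N - 1)) + np.diag(np.ones(N - 2), 1)
         + np.diag(np.ones(N - 2), -1))
    u = np.zeros(N + 1)                      # initial guess: u' = 0
    for it in range(itmax):
        up = (u[2:] - u[:-2])/(2*h)          # three-point u' at interior pts
        B = alpha*h*h*np.sqrt(1.0 + up*up)
        unew = np.zeros(N + 1)
        unew[1:-1] = np.linalg.solve(A, B)   # FDM core solve
        err, u = np.sum(np.abs(unew - u)), unew
        if err <= eps: break                 # self-consistent
    return x, u
```

For constant α, the result can be compared against the exact catenary u(x) = [cosh(α(x − 1/2)) − cosh(α/2)]/α, which satisfies Eq. (6.3) and the boundary conditions.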

Figure 6.1: Self-consistent solutions of the catenary (α = 2), comparing iterations 1–3 with the exact solution.

The results from the self-consistent method are shown in Figure 6.1. FDM is used in each iteration (FEM could be used as well). For the initial guess (iteration 1), we used the solution from Figure A:6.11 (T = 1, f = −1, corresponding to α = 1). We see that the solution in just the next iteration quickly approached the true solution. By iteration 3, it is practically converged. The convergence is very rapid. If we had started with the initial guess for the actual α = 2, it would be near iteration 2, and would be too close to observe the convergence process.

We note that the self-consistent method is a general approach, and can be useful in other problems.


6.1.3 Relaxation of a suspended chain

So far we have discussed solutions of the string in equilibrium, i.e., time-independent solutions. We can also solve the problem by taking a time-dependent approach. This approach also provides us insight into the relaxation process toward equilibrium.

To that end, we model a chain as an array of N particles (mass m) linked by springs (Figure A:6.22). Neighboring particles interact through Hooke's force. Except for the particles at the ends, each particle has two neighbors, one to the left and one to the right. But the particles are allowed to move in three dimensions.

In a time-dependent process, we need a mechanism for energy to dissipate in order to settle down to equilibrium. We assume a linear damping force −bv_i, the same as in the damped oscillator before (A:6.2). The net force on particle i, including neighboring interactions, damping, and gravity, is

F_i = f_{i,i−1} + f_{i,i+1} − b v_i − mg ĵ
    = −k(r₋ − l) r̂₋ − k(r₊ − l) r̂₊ − b v_i − mg ĵ,    (6.4)

r₋ = r_i − r_{i−1},  r₊ = r_i − r_{i+1},

where k, l are the same as before (A:6.7). The r∓ are the coordinates of particle i relative to particle i−1 (left) and particle i+1 (right), respectively. The pair-wise forces are Newton's third-law pairs, f_{i,j} = −f_{j,i}. This fact can be used in calculations to save computing time, because only f_{i,j} needs to be computed.

We can now simulate the system of particles by solving the following equations of motion,

dr_i/dt = v_i,  dv_i/dt = F_i/m,  i = 1, 2, 3, ..., N.    (6.5)

Instead of solving Eq. (6.5) in component (xyz) form, it is more elegant, and efficient, to treat each r_i and v_i as a basic vector unit. The vectors are simulated by NumPy arrays in Program 6.2. An added bonus is that we can use our vectorized ODE solvers such as RK4 as is, without modification.
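In that vectorized spirit, the force evaluation of Eq. (6.4) can be sketched as follows. This is an illustration only, not the book's Program 6.2; the parameter values and the fixed-end treatment are assumptions:

```python
import numpy as np

# Assumed demonstration parameters (not from Program 6.2)
k, l, b, m, g = 100.0, 0.05, 1.0, 0.01, 9.8

def accel(r, v):
    """Acceleration from Eqs. (6.4)-(6.5); r, v are (N,3) NumPy arrays."""
    d = r[1:] - r[:-1]                        # bond vectors r_{i+1} - r_i
    blen = np.linalg.norm(d, axis=1)[:, None] # bond lengths
    t = k*(blen - l)*d/blen                   # Hooke tension along each bond
    F = -b*v                                  # linear damping on every particle
    F[:, 1] -= m*g                            # gravity along -y
    F[:-1] += t                               # pull toward right neighbor
    F[1:] -= t                                # third-law reaction on the left
    a = F/m
    a[0] = a[-1] = 0.0                        # both end particles held fixed
    return a
```

Because accel acts on whole (N,3) arrays, it can be fed directly to a vectorized RK4 step; note that each bond tension t is computed once and applied to both partners with opposite signs, the third-law saving mentioned above. For a straight, unstretched chain at rest, every interior particle should feel only gravity.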

Results obtained from Program 6.2 are shown in Figure 6.2. The shape of the chain is graphed at different times. Initially, the particles are arranged in a straight line, with all springs in the unstretched state. To make things


Figure 6.2: Relaxation of a suspended chain (41-particle array).

a little more interesting, an impulse at t = 0 is given to a particle (the 13th particle!). This is the reason for the ripples early on (t = 1 to 3).

As time increases, the effect of gravity becomes dominant. The chain is stretched as the valley deepens. The bottom of the valley actually overshoots, falling below the equilibrium position before rebounding. This is just like an underdamped oscillator. Depending on damping, there may be several oscillations before reaching equilibrium. On the other hand, if damping is large, the chain would behave like an overdamped oscillator, slowly approaching equilibrium from above, but never dipping below it.

Eventually, the chain reaches the equilibrium position, taking on the shape of the catenary shown in Figure 6.1. But there are several differences. We are representing a continuous string with a finite number (N) of particles and segments. Increasing N while keeping the mass density constant, our model should approach the catenary of a continuous string. The mass density in our model necessarily decreases toward the two ends where the tension is high. A larger elasticity would make the mass density more uniform, yielding a more exact representation of a true catenary.


6.1.4 Oscillation of a slinky

If we turn the particle chain around and hang it vertically, the chain resembles a slinky (Figure 6.3), a system of coils that can stretch and contract via elastic forces between them. We will refer to such an arrangement of particles as a slinky. If we treat each coil as a particle, we can use the particle-chain model to simulate the oscillation and relaxation of a slinky.

Figure 6.3: Oscillation and relaxation of a suspended slinky.

The equations of motion of a slinky are the same as Eq. (6.5), so we can use Program 6.2 to simulate the dynamics of the system. We only need to start with different initial conditions and impose a slightly different boundary condition.

However, we are interested in direct visualization of the motion of the slinky. To represent the slinky, we use the helix object again. Each coil in the slinky is represented by one helix winding. Its length is set equal to the distance to its next neighbor. The whole slinky consists of a series of coils situated at the positions of the particles. The stretching and contraction of the slinky can thus be dynamically depicted.

A slinky object is created by calling slinky() from our VPython modules (VPM) library. We have defined several classes of objects in this library,


including lines (strings), slinkies, nets, and meshes (surfaces). Animation of a slinky is accomplished by the slinky.move() method.

As Program 6.3 illustrates, before simulation begins, we create the slinky object (line 16). The supplied parameters include the positions of the coils, their orientation, the radius of the coils, and the thickness of the wire. When the changes of position during simulation are updated (line 21), the slinky springs to life, flexing and oscillating as illustrated in Figure 6.3. The four frames show the downward motion of the slinky made up of 21 particles (coils). The top is held fixed. Depending on the damping parameter, the slinky executes underdamped or overdamped oscillation. Eventually, it reaches equilibrium. The top of the slinky is stretched more than the bottom because of the extra weight below.

Now the interesting question is: what happens to the slinky when it is let go? How will the top move? Will the bottom go up, down, or stay? Investigate these questions in Project P6.2.

6.2 Exercises and Projects

Exercises

E6.1 Simulate the waves and interference patterns formed by plucking a string. Model the string as a 1D array of particles (Figure A:6.22) as in Program 6.2, but turn off gravity and damping. Animate the wave using the VPM line object (see Program A:6.4). Figure 6.4 shows sample waves after plucking the middle of the string. Adjust the spring constant and relaxed length, and the mass of the particles, to achieve best results. Describe how the ripples may be attributed to interference effects.

After the simulation is working, play around with nonzero damping, and with plucking different points with different amplitudes. Summarize the effect of changing each parameter, as well as what you learned designing the program, bugs you encountered, etc.

Projects

P6.1 Study the catenary problem by the self-consistent methods discussed in Section 6.1.2.


Figure 6.4: Plucking a string.

(a) Implement a self-consistent program using either FDM or FEM as the core method. Do both if this is a team project. Calculate the catenary for several values of α, e.g., 0.5, 1, and 2. Discuss convergence.

(b)* Once your self-consistent program is working, add a linear shear force to Eq. (6.3), so the modified catenary equation is

u″ = α(1 + u′²)^{1/2} + bx.    (6.6)

Solve it for b = α = 1. Discuss your results. Predict what would happen if b < α and b > α. Carry out the calculation for b = 1, α = 2 and b = 2, α = 1. Discuss your results.

P6.2 Let us study the oscillation of a slinky in some depth.

(a) Start with Program 6.3, and copy force() and chain() from Program 6.2. Change the boundary condition in chain() (line 18) so that the velocity and acceleration of the first coil (particle 0) are kept at zero. Put this boundary condition under an if not bfree condition, where bfree is initially set to False in the main code. Run the modified code. The slinky should oscillate and settle down in the equilibrium position, as shown in Figure 6.3.

(b) Next, simulate the motion of the slinky if it is let go from equilibrium. This can be done by removing the boundary condition, so the slinky goes into free fall.


Prediction 1: Make a prediction about how the slinky will move after it is let go. Will the bottom go up, down, or stay shortly after release? Write down your prediction with a brief explanation.

Duplicate the main loop in the code, and set bfree=True before the second loop. So there are two loops in your program: the first one relaxes the slinky into the equilibrium position with it held in place; the second one simulates the slinky's motion after it is released.

Observe the motion. Did you predict correctly? Why?

(c) If your prediction was correct, great, here is another challenge. If not, don't despair, here is the chance for redemption.

Prediction 2: Predict the acceleration, in units of g, of the first (top) and the last (bottom) particles, right after release. Again, write it down with a brief rationale.

Let us examine this question quantitatively. Modify the last loop so the velocity and acceleration of the two particles are stored in arrays to be plotted as a function of time at the end of the loop. Velocity is contained in v[], but you must obtain the acceleration by calling chain() in your loop.

Add statements after the loop to plot the results with Matplotlib. Explain your results.

How did your predictions turn out? If you were correct both times (honestly), congratulations! You win a real slinky from the instructor (or extra credit).

6.A Gauss elimination and related methods

Consider the linear equations

a11x1 + a12x2 + a13x3 = b1,

a21x1 + a22x2 + a23x3 = b2, (6.7)

a31x1 + a32x2 + a33x3 = b3.

The Gauss elimination method works to eliminate one variable at a time, starting from x1 in row 2. We can multiply row 1 by a21/a11 on both sides and subtract it from row 2. This eliminates x1 from row 2. We repeat the process until the last row, which will have one variable left. We then back substitute the solved variables up the rows, and solve for the rest. The following example illustrates the method.

Let us assume the following matrices

        ⎡ 3  2  1 ⎤          ⎡ 10 ⎤
    a = ⎢ 2  5  2 ⎥ ,    b = ⎢ 18 ⎥ .    (6.8)
        ⎣ 1  4  1 ⎦          ⎣ 12 ⎦

The roots of Eq. (6.7) are [x1, x2, x3] = [1, 2, 3]. To carry out the Gauss method, first subtract multiples of row 1 from rows 2 and 3, then a multiple of row 2 from row 3:

Subtracting (2/3) × row 1 from row 2 and (1/3) × row 1 from row 3 gives

    ⎡ 3   2     1  ⎤        ⎡  10  ⎤
    ⎢ 0  11/3  4/3 ⎥ ,      ⎢ 34/3 ⎥ ,
    ⎣ 0  10/3  2/3 ⎦        ⎣ 26/3 ⎦

and then subtracting (10/11) × row 2 from row 3 gives

    ⎡ 3   2     1     ⎤     ⎡   10   ⎤
    ⎢ 0  11/3  4/3    ⎥ ,   ⎢  34/3  ⎥ .
    ⎣ 0   0   −18/33  ⎦     ⎣ −54/33 ⎦

Now the matrix is in upper-triangular form, and the last row has only x3, which is solved to give x3 = 3. Back substituting x3 into row 2, we obtain x2 = 2. Finally, substituting both x2 and x3 into row 1 yields x1 = 1, completing the process.
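The same steps can be replayed numerically; a minimal sketch (independent of Program 6.1, and without the pivoting that the program adds):

```python
import numpy as np

# Forward elimination and back substitution on the matrices of Eq. (6.8);
# pivoting is omitted for brevity (safe here, the pivots stay nonzero).
a = np.array([[3., 2., 1.], [2., 5., 2.], [1., 4., 1.]])
b = np.array([10., 18., 12.])
n = len(b)
for i in range(n - 1):                 # zero column i below row i
    f = a[i+1:, i]/a[i, i]
    a[i+1:, i:] -= np.outer(f, a[i, i:])
    b[i+1:] -= f*b[i]
x = np.zeros(n)
for i in range(n - 1, -1, -1):         # back substitution
    x[i] = (b[i] - np.dot(a[i, i+1:], x[i+1:]))/a[i, i]
print(np.round(x, 6))   # → [1. 2. 3.]
```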

The Gauss-Jordan method is a slight variation of the standard Gauss elimination method. It eliminates a given variable from all rows above and below, instead of only the rows below as in the latter. In the end, only diagonal elements remain (identity matrix). It is therefore very useful for finding the inverse of a matrix.

Another method bearing the same name is the Gauss-Seidel method. Rather than variable elimination, it uses an iterative process to achieve convergence. The matrix is separated into lower- and upper-triangular forms.


The former operates on the solutions of iteration n, from which the latter generates the solutions of iteration n + 1, or vice versa. Starting with an initial guess, the process is repeated until, hopefully, the solutions are converged. Additionally, an acceleration factor can be used to mix old and new values to improve convergence. However, convergence is not guaranteed unless the matrix is well conditioned (positive definite, for instance). But when it does converge, the process is very fast for large systems.

All three methods are implemented in Program 6.1.
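A bare Gauss-Seidel sweep, without the acceleration factor, looks like the following (the diagonally dominant system below is a made-up example, not from the book):

```python
import numpy as np

# Gauss-Seidel on a diagonally dominant system with solution [1, 1, 1]:
# each sweep uses the newest x values as soon as they are available.
a = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])
b = np.array([5., 6., 5.])
x = np.zeros(3)
for sweep in range(50):                  # fixed sweep count for brevity
    for i in range(3):
        x[i] = (b[i] - a[i, :i] @ x[:i] - a[i, i+1:] @ x[i+1:])/a[i, i]
print(np.round(x, 6))   # → [1. 1. 1.]
```

In a production version one would, as in Program 6.1, stop on a cumulative-error tolerance rather than a fixed sweep count.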

6.B Program listings and descriptions

Program listing 6.1: Gauss elimination (gauss_elim.py)

 1  import numpy as np
 2
 3  def gauss_elim(a, b):             # Gauss elimination, input altered
 4      n, x = len(b), np.zeros(len(b))
 5      for i in range(n-1):
 6          j = np.argmax(abs(a[i:, i])) + i  # find largest pivot in column i
 7          if (j != i):                      # swap rows i and j
 8              a[[i, j], i:] = a[[j, i], i:]
 9              b[i], b[j] = b[j], b[i]
10          if (a[i, i] != 0.0):
11              f = a[i+1:, i]/a[i, i]        # zero column i of rows i+1 to n
12              a[i+1:, i+1:] -= np.outer(f, a[i, i+1:])  # f_j ⊗ a_{i,j}
13              b[i+1:] -= f*b[i]
14          else:
15              return None                   # no unique solution
16      for i in range(n-1, -1, -1):  # back substitution, use newest x_i
17          x[i] = (b[i] - np.dot(a[i, i+1:], x[i+1:]))/a[i, i]
18      return x
19
20  def gauss_jordan(a, b):           # Gauss-Jordan elimination, input altered
21      n = len(b)
22      order, rows = [0]*n, list(range(n))   # rows yet unprocessed
23      for i in range(n):
24          j = rows[np.argmax(abs(a[rows, i]))]  # find max pivot in column i
25          order[i] = j                      # order of roots
26          rows.remove(j)                    # remove row j as done
27          if (a[j, i] != 0.0):
28              k = np.arange(n) != j         # eliminate x_i from all other rows
29              f = a[k, i]/a[j, i]
30              a[k, i+1:] -= np.outer(f, a[j, i+1:])
31              b[k] -= f*b[j]
32          else:
33              return None                   # no unique solution
34      return b[order]/a[order, range(n)]
35
36  def gauss_seidel(a, b, x, abserr=1.e-10):  # Gauss-Seidel method
37      n, imax = len(b), 1000        # max number of iterations
38      w, rng = 0.5, range(n)        # accelerated GS scheme, -1 < w < 1
39      w1, backward = 1.0 - w, False # try True if no convergence
40
41      if (backward): rng = rng[::-1]        # default: forward
42      for k in range(imax):
43          err = 0.0
44          for i in rng:
45              tmp = x[i]
46              sum = b[i] - np.dot(a[i, :], x) + a[i, i]*x[i]
47              x[i] = w*x[i] + w1*sum/a[i, i]
48              err += abs(x[i] - tmp)
49          if (err < abserr): break
50      return err

The programs expect input matrices as floating-point NumPy arrays. Both gauss_elim and gauss_jordan work similarly to each other. In gauss_elim, for example, the first loop iterates down the rows. At row i, we find the largest element in column i of all rows below, $\max(|a_{i,i}|, \dots, |a_{n-1,i}|)$, using np.argmax, which returns the index of the maximum element (line 6). The row containing this element is swapped with row i. This makes the subtractions below numerically more stable. Next, we obtain the scale factors f for all rows below in order to subtract row i from these rows. An outer product c is formed with np.outer so that $c_{j,k} = f_j a_{i,k}$, $j, k = i+1, \dots, n-1$ (line 12; also see Program A:4.4). We can then subtract c from all rows and columns below at once with vectorized element-wise operations, effectively zeroing out column i of these rows.

The function returns the solution in a 1D array. It works in place, and the input arrays are altered upon return.

The function gauss_seidel, however, preserves the input arrays. It requires an initial guess, which contains the correct solution on return if the iteration converged. It also uses an over-relaxation parameter to accelerate convergence (see Section A:7.2.2). Unlike the other two routines, gauss_seidel is not always stable and may not converge at all unless the matrices are optimally conditioned (e.g., it fails with the input (6.8)). Sometimes, if convergence is slow in forward iteration, changing it to backward may help, and vice versa. So care must be taken when using gauss_seidel. The returned error should be examined to make sure the process has converged.

Program listing 6.2: Relaxation of a suspended chain (relax.py)

     1  import matplotlib.pyplot as plt
     2  from mpl_toolkits.mplot3d import Axes3D
     3  import numpy as np, ode
     4
     5  def force(r):               # force of particle pair, with relative coord r
     6      s = np.sqrt(np.sum(r*r, axis=-1))         # distance
     7      s3 = np.column_stack((s, s, s))           # make (n,3) array
     8      return -spring_k*(1.0 - spring_l/s3)*r    # Hooke's law
     9
    10  def chain(Y, t):
    11      r, v, f = Y[0], Y[1], np.zeros((N,3))
    12
    13      rright = r[0:-1] - r[1:]    # rel pos to right neighbor
    14      fright = force(rright)      # force from right
    15      f[0:-1] = fright
    16      f[1:] -= fright             # force from left neighbor, 3rd law
    17      a = (f - damp*v)/mass + gvec      # accel.
    18      v[0], v[-1] = 0., 0.        # fixed ends
    19      return np.array([v, a])
    20
    21  L, N, p = 2.0, 41, 0            # size, num of particles, plot number
    22  h, mass, damp = 0.01, 0.005, 0.02       # step size, mass, damping
    23  r, v = np.zeros((N,3)), np.zeros((N,3))
    24  r[:,0] = np.linspace(0, L, N)   # initial x positions
    25  v[N//3] = np.array([5., 0., -5.])       # an impulse
    26  spring_k, spring_l = 60.0, r[1,0]-r[0,0]    # spring const., relaxed length
    27  Y, gvec = np.array([r, v]), np.array([0, 0, -9.8])    # [r,v], g vector
    28
    29  cl = ['k','#dd3333','#9966ff','#6699ff','b','g','r','c','m','#3399dd']
    30  plt.figure()
    31  ax = plt.subplot(111, projection='3d')
    32  for i in range(100):
    33      if (i%10 == 0):
    34          x, z = Y[0,:,0], Y[0,:,2]   # x, z coordinates
    35          ax.plot(x, [p]*N, z, '-o', mfc=cl[p])   # mfc = marker color
    36          p = p + 1
    37      Y = ode.RK4(chain, Y, 0., h)
    38
    39  ax.set_xlabel('x'), ax.set_ylabel('t'), ax.set_zlabel('z')
    40  plt.show()

The program uses the same vector concepts and basic building blocks as before (see Program A:3.8 and Program A:6.8). The force() function is the same as in Program A:6.8, except that column stacking (np.column_stack, line 7) is used to form a 2D array so that it has the correct shape for element-wise operations in the force calculation (line 8). The system dynamics (6.5) is computed in the module chain(). The input array Y[] holds the position and velocity vectors in the first and second halves, respectively. As in Program A:6.8, the relative positions to the neighbors on the right are obtained by subtraction of shifted arrays via slicing (Section A:6.5.5). The forces from the neighbors on the right (line 14) are calculated and stored. Newton's third law is used for the forces from the neighbors on the left (line 16). The boundary condition is enforced at the ends (line 18), which sets the velocities of the first and last particles to zero.

The main program initializes some parameters and places the particles uniformly along the x-axis. All particles are at rest, except one which is given an initial impulse (line 25). The main loop integrates the system and graphs 3D curves of the chain periodically, separated along the time axis and using predefined marker colors.
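The shifted-slice pattern used in chain() can be checked in isolation. The sketch below (with a made-up linear pair force, not the program's Hooke's-law force) verifies that the internal forces cancel pairwise by Newton's third law:

```python
import numpy as np

# Shifted-slice force accumulation as in chain(), with a made-up linear
# pair force f = -k*dr; internal forces must sum to zero (3rd law).
rng = np.random.default_rng(0)
N, kspring = 6, 2.0
r = rng.random((N, 3))                  # random particle positions
f = np.zeros((N, 3))

fright = -kspring*(r[:-1] - r[1:])      # force on i from right neighbor i+1
f[:-1] = fright                         # action ...
f[1:] -= fright                         # ... and reaction on the neighbors

assert np.allclose(f.sum(axis=0), 0.0)  # internal forces cancel pairwise
```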

Program listing 6.3: Oscillation of a suspended slinky (slinky.py)

     1  import numpy as np, visual as vp, ode, vpm
     2
     3  # copy force() and chain(), per description below, from Program 6.2
     4
     5  L, N = 10.0, 21                 # size, num of particles
     6  h, mass, damp = 0.02, 0.1, 0.05 # step size, mass, damping
     7  r, v = np.zeros((N,3)), np.zeros((N,3))
     8  r[:,1] = np.linspace(0, -L, N)  # initial y positions
     9  x, y, dy = r[:,0], r[:,1], r[0,1]-r[1,1]
    10  spring_k, spring_l = 50.0, dy   # spring const., relaxed length
    11  Y, gvec = np.array([r, v]), np.array([0, -9.8, 0])    # [r,v], g vector
    12
    13  scene = vp.display(title='Slinky', background=(.2,.5,1),
    14                     center=(0,-3*L/3,0), forward=(0,-.2,-1))
    15  pole = vp.curve(pos=[(-L/2,0,0),(L/2,0,0)], color=(1,1,1), radius=0.2)
    16  slinky = vpm.slinky(x, y, x, (0,-1,0), 5*dy, 0.3)     # slinky
    17
    18  t, bfree = 0.0, False           # bfree = True for free fall
    19  for i in range(1000):
    20      vp.rate(100)
    21      slinky.move(x, Y[0,:,1], x) # animate slinky
    22      Y = ode.RK4(chain, Y, t, h)

The program uses two functions that are nearly identical to those from Program 6.2, force() and chain(), and they are hence omitted from the listing. The former is identical, and the latter needs a slight modification: change line 18 in Program 6.2 to reflect the new boundary condition, such that only the velocity of the first particle is set to zero.

The program begins by importing our vpm (VPython modules) library, which defines several classes of objects, including the slinky class used to create slinky objects. Once created, the method slinky.move() can be used to animate them.

The slinky constructor (line 16) takes as input the positions of the particles (coils) in three arrays (x, y, z), the orientation (a direction vector), the radius, and the thickness of the coils. The move method accepts the new positions (line 21). As the positions are updated, the slinky object will move or oscillate.

Program listing 6.4: VPM - VPython modules (vpm.py)

    # Computational modeling, J Wang, UMass Dartmouth
    # vpm - VPython modules:
    #   pause()/wait()
    #   line()
    #   bars()
    #   slinky()
    #   net()
    #   mesh()

    import numpy as np, visual as vp

    def pause(scene):               # pause until a key is pressed
        while (scene.kb.keys):      # clear key buffer first
            k = scene.kb.getkey()
        return scene.kb.getkey()

    def wait(scene):                # wait for 2nd key after a key press
        if scene.kb.keys:
            return pause(scene)

    class line:
        """ create a line by connecting points x[], y[], z[].
            optionally, specify line color and thickness """
        def __init__(self, x, y, z, linecolor=(1,1,1), thick=.05):
            self.line = vp.curve(color=linecolor, radius=thick)
            self.move(x, y, z)

        def move(self, x, y, z):    # update line
            self.line.pos = np.column_stack((x, y, z))

    class bars:
        """ create a bar graph over points x[], y[], z[],
            with height h[] at each point (along y-axis).
            optionally, specify width, thickness, color, and axis """
        def __init__(self, x, y, z, h, width=0.05, thick=0.05,
                     barcolor=(1,1,1), axis=(1,0,0)):
            self.bars = [vp.box(length=width, width=thick, color=barcolor,
                                axis=axis) for i in range(len(x))]
            self.move(x, y, z, h)

        def move(self, x, y, z, h): # update bars
            for i in range(len(x)):
                self.bars[i].pos = (x[i], y[i]+h[i]/2, z[i])
                self.bars[i].height = abs(h[i])

    class slinky:
        """ create a slinky by placing coils at points x[], y[], z[].
            other input: dir=axis of slinky, r=radius of coil,
            thick=thickness of wire """
        def __init__(self, x, y, z, dir=(0,-1,0), r=1, thick=.2):
            self.slinky, self.m, self.dir = [], len(x), dir
            for i in range(self.m):
                c = (0.9, 1 - 0.8*i/self.m, 0.1)    # RGB color mix
                self.slinky.append(vp.helix(radius=r, coils=1, color=c,
                                            thickness=thick))
            self.move(x, y, z)

        def move(self, x, y, z):    # update slinky
            d = (x[:-1]-x[1:])**2 + (y[:-1]-y[1:])**2 + (z[:-1]-z[1:])**2
            d = np.sqrt(np.append(d, d[-1]))
            for i in range(self.m):
                self.slinky[i].axis = self.dir
                self.slinky[i].pos = (x[i], y[i], z[i])
                self.slinky[i].length = d[i]        # set length last

    # build a net of quadrilaterals, 4-sided polygons
    class net:
        """ create a fishnet, grid points are given by x[,], y[,], z[,]
            other input: netcolor and thread thickness """
        #  j
        #  ^  n  *  *  *  ...  *
        #  |  :  *  *  *  ...  *
        #  |  2  *  *  *  ...  *
        #  |  1  *  *  *  ...  *
        #  |  0  *  *  *  ...  *
        #  |  o  0  1  2  ...  m  ---> i
        def __init__(self, x, y, z, netcolor=(1,1,1), thick=.05):
            self.m, self.n = len(x[:,0]), len(y[0,:])   # nx, ny points
            self.net = [vp.curve(color=netcolor, radius=thick)
                        for i in range(self.m + self.n)]
            self.move(x, y, z)

        def move(self, x, y, z):    # update net
            for i in range(self.m): # vertical lines
                self.net[i].pos = np.column_stack((x[i,:], y[i,:], z[i,:]))
            for j in range(self.n): # horizontal lines
                self.net[self.m+j].pos = np.column_stack((x[:,j], y[:,j], z[:,j]))

    # build a mesh of triangles from quadrilaterals
    class mesh:
        """ create a mesh surface, grid points are given by x[,], y[,], z[,]
            other input: top and bottom surface colors """
        def __init__(self, x, y, z, topcolor=(1,0,0), botcolor=(0,1,1)):
            self.t = vp.faces(color=topcolor)   # top, bottom faces
            self.b = vp.faces(color=botcolor)
            self.move(x, y, z)                  # set initial position

        def corners(self, x, y, z): # cut rectangles diagonally
            p = np.dstack((x, y, z))                # grid points
            cs = np.column_stack                    #    triangles
            s = lambda u: np.reshape(u, (-1,3))     #   b-----c
            a, c = s(p[:-1,:-1]), s(p[1:,1:])       #   | 1  /|
            t1 = cs((a, s(p[:-1,1:]), c))           # abc| / 2|
            t2 = cs((a, c, s(p[1:,:-1])))           # acd|/   |
            q = np.concatenate((t1,t2)).reshape(-1,3)   # a-----d
            r = np.reshape(q, (len(q)//3,3,3))      # bottom, ccw winding
            r = np.reshape(r[:,[0,2,1],:], (-1,3))  # back to Nx3
            return q, r

        def move(self, x, y, z):    # update mesh
            self.t.pos, self.b.pos = self.corners(x, y, z)  # get corners
            self.t.make_normals(), self.b.make_normals()    # actual normals

The VPython modules (VPM) library creates several composite objects from VPython primitives for convenience. Except for the keyboard handlers, VPM objects require input positions; other parameters such as color or size have default values. Once a VPM object is created, it can be animated using its move method, which updates its position.


Chapter 7

Electromagnetic fields

We discuss the problem of finding the equilibrium of point charges on a sphere, which is related to the Thomson model of atoms.

7.1 Equilibrium of charges on a sphere

Suppose we place N identical point charges on a non-conducting sphere. The charges are free to move on the spherical surface. What are the equilibrium positions of the charges?

The equilibrium configuration must be a potential minimum. For just a few charges, the configurations are simple. For one charge, it can be anywhere. For two charges, they must be diametrically opposite each other; and for three, they form an equilateral triangle inscribed in a great circle, and so on.

For many more charges, it is not obvious whether a configuration is a global minimum, and the problem remains unsolved. This is related to the Thomson problem (Project P7.3). Thomson originally proposed the plum-pudding model to explain the atomic structure. In this model, the atomic nucleus was assumed to be a sphere of uniform positive charge, and the electrons were spread out inside the nucleus. The electrons were balanced through the electrostatic forces between the attractive background and the repulsive forces among themselves.¹

We approach the problem numerically, of course, by simulating the motion of the charges until they reach their equilibrium positions. The motion is under the constraint that the charges remain on the sphere. This reduces the three-dimensional motion to an effective two-dimensional motion. It is perhaps most straightforward to follow the polar (θ) and azimuthal (φ) angles of the particles while keeping their radius r fixed.

The velocity and acceleration in spherical coordinates are given by

$$
\vec v = \dot r\,\hat r + r\dot\theta\,\hat\theta + r\sin\theta\,\dot\varphi\,\hat\varphi, \qquad
\vec a = a_r\,\hat r + a_\theta\,\hat\theta + a_\varphi\,\hat\varphi,
$$

$$
\begin{aligned}
a_r &= \ddot r - r\dot\theta^2 - r\sin^2\theta\,\dot\varphi^2, \\
a_\theta &= r\ddot\theta + 2\dot r\dot\theta - r\sin\theta\cos\theta\,\dot\varphi^2, \\
a_\varphi &= r\sin\theta\,\ddot\varphi + 2\dot r\sin\theta\,\dot\varphi + 2r\cos\theta\,\dot\theta\dot\varphi.
\end{aligned}
\tag{7.1}
$$

The force (or any vector) in spherical coordinates is obtained from Cartesian coordinates via the transformation

$$
\begin{pmatrix} F_r \\ F_\theta \\ F_\varphi \end{pmatrix} =
\begin{pmatrix}
\sin\theta\cos\varphi & \sin\theta\sin\varphi & \cos\theta \\
\cos\theta\cos\varphi & \cos\theta\sin\varphi & -\sin\theta \\
-\sin\varphi & \cos\varphi & 0
\end{pmatrix}
\begin{pmatrix} F_x \\ F_y \\ F_z \end{pmatrix}.
\tag{7.2}
$$
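A small sketch (not from the text) applying the transformation (7.2) to a Cartesian vector; since the matrix is a rotation, lengths are preserved, and a radial unit vector maps to a pure r-component:

```python
import numpy as np

def to_spherical(F, theta, phi):
    """Apply the transformation matrix of Eq. (7.2) to a Cartesian vector."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    M = np.array([[st*cp, st*sp,  ct],
                  [ct*cp, ct*sp, -st],
                  [  -sp,    cp,  0.]])
    return M @ F

theta, phi = 0.7, 1.2                   # illustrative angles
F = np.array([1.0, -2.0, 0.5])
Fs = to_spherical(F, theta, phi)
assert np.isclose(np.linalg.norm(Fs), np.linalg.norm(F))  # rotation: length kept

rhat = np.array([np.sin(theta)*np.cos(phi),
                 np.sin(theta)*np.sin(phi), np.cos(theta)])
assert np.allclose(to_spherical(rhat, theta, phi), [1., 0., 0.])
```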

Our strategy is to integrate the equations of motion in the respective polar (θ) and azimuthal (φ) directions, since the motion is constrained to the spherical surface and we can ignore the radial direction. Furthermore, we assume a unit sphere, r = 1. We introduce a dissipative force $-b\vec v$ so that the system will relax into a potential minimum. Hence, the equations of motion for a particle of mass m are

$$
m a_\theta = F_\theta - b v_\theta, \qquad m a_\varphi = F_\varphi - b v_\varphi.
\tag{7.3}
$$

The Coulomb force on particle i is the sum over all other particles,

$$
\vec F_i = kq^2 \sum_{j\neq i} \frac{\vec r_i - \vec r_j}{|\vec r_i - \vec r_j|^3},
\tag{7.4}
$$

¹The plum-pudding model was later overturned by the famous Rutherford scattering experiment in alpha-particle and gold-foil collisions (see Section A:12.1).


where q is the charge of each particle, and $\vec r_i$, $i = 1, \dots, N$, are the positions of the particles.

Below, we will set m = q = k = 1 to simplify the equations. As in the hockey simulation earlier (Section A:7.1), this is always possible with a proper choice of units. Rather than dealing with $v_\theta$ and $a_\theta$ (or $v_\varphi$, $a_\varphi$), it is easier to integrate θ and $\dot\theta$ directly. Letting $w = \dot\theta$ and $u = \dot\varphi$, we can rewrite Eq. (7.3) as a system of first-order ODEs

$$
\begin{aligned}
\dot\theta &= w, \\
\dot\varphi &= u, \\
\dot w &= F_\theta - b w + \sin\theta\cos\theta\, u^2, \\
\dot u &= (F_\varphi - 2\cos\theta\, w u)/\sin\theta - b u.
\end{aligned}
\tag{7.5}
$$

We now have the necessary information to simulate the motion. The steps are as follows:

1. Start with an initial configuration, e.g., all N particles at the equator. Initialize the velocities, e.g., all zeros except for one or two particles. By the usual convention, we create an array of length 4N to store the variables. One possible scheme is:

       Index:  0, ..., N-1     N, ..., 2N-1    2N, ..., 3N-1   3N, ..., 4N-1
       Data:   θ0, ..., θN-1   φ0, ..., φN-1   w0, ..., wN-1   u0, ..., uN-1

2. Model the system dynamics, i.e., construct a diffeq(). For each particle, compute the force from Eq. (7.4), and transform it into spherical coordinates according to Eq. (7.2) to obtain $F_\theta$ and $F_\varphi$.

3. Integrate the system using an ODE solver with diffeq() until equilibrium.
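The packing scheme of step 1 can be sketched as follows; the initial angles are illustrative only:

```python
import numpy as np

# Packing scheme from step 1: a flat array of length 4N holding
# [theta | phi | w | u]; slices recover each block inside diffeq().
N = 24
theta = np.linspace(0.1, np.pi - 0.1, N)          # illustrative initial angles
phi = np.linspace(0., 2*np.pi, N, endpoint=False)
w, u = np.zeros(N), np.zeros(N)                   # angular velocities at rest

Y = np.concatenate((theta, phi, w, u))            # pack
th, ph, wv, uv = Y[:N], Y[N:2*N], Y[2*N:3*N], Y[3*N:]   # unpack
assert len(Y) == 4*N
assert np.allclose(th, theta) and np.allclose(ph, phi)
```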

We leave the implementation of the simulation to Project P7.2, and show the results in Figure 7.1. The first figure shows the paths of the 24 charges over the unit sphere. The sphere is semi-transparent, achieved with the opacity parameter in VPython, so all the particles can be seen at once. All the particles were initially placed on an equator (great circle) at equal angular separation. All were at rest except one, which was given nonzero velocities ($\dot\theta$, $\dot\varphi$). The initial condition will not affect the final configuration, assuming it is a global minimum, but it will impact how fast the system settles into equilibrium.


Figure 7.1: Equilibrium positions of point charges on a sphere (N = 24).


The motion of each particle is guided by the local potential surface, but their collective motion causes the global potential surface to change. There are many local extrema, and there should be one global minimum, though it is not clear at present whether it implies a unique configuration for a given, larger N. As the particles move about, the total potential energy generally decreases, and the system gets closer to the global minimum. But depending on the initial condition, they may encounter local minima along the way. If there is not sufficient total energy, the system can settle into a local minimum and get stuck. To prevent this from happening, we can artificially inject energy into the system by increasing the kinetic energy when the system appears to reach an equilibrium. This may boost the system out of the local minima so it eventually reaches the global minimum. In our example, the initial potential energy was ∼ 302.838976, and the final value is ∼ 223.347074. In between, the system went through about 8 local minima.

The next graph in Figure 7.1 shows the facets and edges when the system has reached equilibrium. It displays an interesting geometry of the final configuration, consisting of 38 flat faces, 6 of which are squares, and the rest equilateral triangles.² We leave the exploration of its many symmetry properties to the interested reader (Projects P7.2 and P7.3). This can be done most effectively in VPython, as you can manipulate the structure with ease.

7.2 Exercises and Projects

Exercises

E7.1 Consider the electromagnetic fields of a moving charge (nonrelativistic), Figure 7.2, where $\vec v$ is the velocity of the charge, b the impact parameter, and $\vec R = vt\,\hat i + b\,\hat j$ the position vector of the charge.

(a) Compute the electromagnetic fields at the origin, assuming b = 1 and v = 2. The magnetic field can be calculated from the Biot-Savart law (A:7.42) by replacing the current element $I\,d\vec l$ with $q\vec v$, but note $\vec r = -\vec R$. Plot the transverse and longitudinal components of the electric and magnetic fields for t = −5 to +5.

²This geometry is known as a snub cube.


Figure 7.2: The E&M fields of a moving charge, Exercise E7.1.

(b) Animate the electromagnetic fields on a spherical surface of radius 1 surrounding the origin, for the same time interval.

E7.2 (a) Prove the velocity and acceleration relations in spherical coordinates, Eq. (7.1).

(b) Derive the transformation matrix between Cartesian and spherical coordinates, Eq. (7.2).

Projects

P7.1 Calculate the potential and fields of the two quarter-circular arcs shown in Figure 7.3. The rectangular box is grounded.

Figure 7.3: Potentials of quarter circular arcs for Project P7.1.

Use the overrelaxation method to compute and display the results for two cases: (a) the arcs are kept at the same potential, and (b) at opposite potentials. Make predictions about the field patterns before calculation.


Set the circular-arc boundary with the midpoint circle-drawing algorithm using integer arithmetic. Briefly, suppose the radius of the circle is r, and its center is at the origin. If we start at the top, (i, j) = (0, r), and move to the right, then x (or i) increases by one at every step, and y (or j) decreases less frequently, until x = y, completing the first octant. From the current position (i, j), the midpoint algorithm cumulatively tracks the error, ∆ = x² + y² − r², of the midpoint (i+1, j−1/2) of the next step. If ∆ is negative, we take the step to the right, (i+1, j); otherwise, to the right and below, (i+1, j−1).

The circle-drawing code is given below.

    i, j = 0, r                 # start from top
    err = (5 - 4*j)//4          # error at midpoint (1, r-1/2)
    while i < j:                # midpoint drawing, 1st octant
        i += 1
        if err < 0:             # go right, +1,0
            err += 2*i + 1
        else:                   # go right-down, +1,-1
            err += 2*(i-j) + 2
            j -= 1

The other parts of the circle can be obtained by symmetry from the first octant, e.g., (±i, ±j). The radius r should be about 1/6 to 1/4 of the grid dimension.
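Putting the pieces together, here is a sketch (the function name circle_points is ours, not the book's) that generates the first octant with the code above and reflects it to the full circle; every generated point lies within half a grid unit of the true circle:

```python
import math

# Full-circle version of the midpoint algorithm above: generate the first
# octant with the book's error update, then reflect by 8-fold symmetry.
def circle_points(r):           # hypothetical helper name
    pts, i, j = [(0, r)], 0, r
    err = (5 - 4*r)//4          # error at midpoint (1, r-1/2)
    while i < j:
        i += 1
        if err < 0:             # step right
            err += 2*i + 1
        else:                   # step right-down
            err += 2*(i - j) + 2
            j -= 1
        pts.append((i, j))
    full = set()
    for i, j in pts:            # reflect octant to the other seven
        for a, b in ((i, j), (j, i)):
            full |= {(a, b), (-a, b), (a, -b), (-a, -b)}
    return full

for i, j in circle_points(10):
    assert abs(math.hypot(i, j) - 10) <= 0.5   # all points near the circle
```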

(c) It should be possible to obtain the double-arc solution from a single arc with the additivity rule (Project A:P7.3). Carry out a numerical implementation.

(d)∗ Repeat the simulation using the radial basis function collocation method of Section A:7.5.2 (either GA or MQ). Distribute boundary nodes uniformly around the box and along the circular arcs. For the internal nodes, try either uniform or random (but somewhat evenly spaced) distributions. Compare with the results above.

P7.2 Simulate the equilibrium process of N charges on a non-conducting sphere. It is best to complete this project with Project P7.3, as a team project if possible.

(a) Write a program according to the algorithm outlined after Eq. (7.5). Test your program so that it works for N = 2−5 and gives the correct equilibrium configuration. Consider equilibrium established if, for example, the change of the potential energy is less than $10^{-8}$ per every 10 steps. Vary the linear drag coefficient b between 1 and 5 to study its effect. Animate the motion.

(b) Assume N = 8. Predict the equilibrium configuration. Run the simulation. Plot the potential energy as a function of time. What is the final geometry? Is it as predicted?

(c) Again let N = 8, but instead of identical charges, assume half of them have unit charge and the other half have charge 1/2. What is the equilibrium configuration?

(d) Study the case for N = 12, 13, and 24 (identical charges again). Plot the potential as a function of time. What are the final energies? How can you be sure they are global minima? What are the geometries? For N = 13, compare and discuss results with Project P7.3 if available.

(e)∗ How should the minimum energy scale with N? Vary N from 2 to 32 (or higher, depending on your tolerance for waiting), doubling N each time, and record the minimum energy, Umin(N) (turn off animation for speedier calculation). Plot Umin(N) on a log-log scale. Comment on the results.

P7.3 Investigate Thomson's plum-pudding model of the atom. It is best to complete this project with Project P7.2, as a team project if possible.

The plum-pudding model assumes that the atomic nucleus is a sphere of uniform positive charge Z, and the electrons, or corpuscles as Thomson called them, are arranged inside the nucleus in such a way as to minimize the energy. Let R be the radius of the nucleus, Z the nuclear charge, and N the number of electrons.

(a) With the same unit convention as Eq. (7.5), show that the electron-nucleus potential is given by

$$
V(r) =
\begin{cases}
\dfrac{Z}{2R}\left(\dfrac{r^2}{R^2} - 3\right), & \text{if } r \le R, \\[2mm]
-\dfrac{Z}{r}, & \text{if } r > R.
\end{cases}
\qquad \text{(Plum-pudding model)}
\tag{7.6}
$$
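As a sanity check of Eq. (7.6) (not part of the project statement), the two branches join continuously at the surface, where both give −Z/R; the Z and R values below are illustrative:

```python
import numpy as np

# Sanity check of the plum-pudding potential (7.6); Z, R are illustrative.
def V_pp(r, Z=4.0, R=1.0):
    r = np.asarray(r, dtype=float)
    return np.where(r <= R, Z/(2*R)*(r**2/R**2 - 3.0), -Z/r)

assert np.isclose(V_pp(1.0), -4.0)           # surface value -Z/R from inside
assert np.isclose(V_pp(1.0 + 1e-12), -4.0)   # ... and from outside
assert np.isclose(V_pp(0.5), -5.5)           # harmonic branch inside
assert np.isclose(V_pp(2.0), -2.0)           # Coulombic branch outside
```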


Also see Project A:P4.4 for an equivalent expression of the gravitational potential between a point mass and a uniform sphere.

(b) Write a program to simulate the equilibrium process of the Thomson atom. Include both the electron-nucleus and electron-electron forces. Combine elements from Program A:6.8 and the program written for Project P7.2 (if applicable).

Set R = 1, N = Z (neutral atom), and run the simulation for Z = 1−4. Verify that the equilibrium configurations are as expected. Compare with the results of Project P7.2 if available. Note that in such comparisons, only the electron-electron potential should be considered. Furthermore, this potential must be scaled by the average radius of the electrons for direct comparison to charges on the unit sphere.

(c) Repeat the simulation for Z = 5. Are the electrons located on a spherical surface? Why?

(d) The ionization potential is the energy required to remove one electron from the atom. Calculate the single ionization potential of the Thomson atom for Z = 1−20. Plot the results as a function of Z. Is there a "shell" structure? Are there magic numbers where the atom is especially stable (high ionization potential)?

(e) Study Z = 11−13 in some detail. Predict the equilibrium positions. Discuss and compare the actual results with your predictions. Characterize the final geometry and correlate it with the ionization potentials above.


Chapter 8

Time-dependent quantum mechanics

To treat scattering problems, we discuss the split evolution operator method, which is second order in time. For quantum transitions and coherent state evolution, we introduce an efficient method using basis expansions, the coupled-channel method. We apply it to study transitions caused by interactions with laser fields.

We also discuss the framework of Gaussian integration (Section 8.A) and the less numerical topic of program profiling (Section 8.B).

Atomic units are used unless otherwise noted.

8.1 Scattering and split evolution operator

We have seen from Section A:8.3.1 that the split operator method produces a stable solution and preserves the normalization to a high degree of accuracy, provided the time step is very small and the potential is "weak", since it is a first-order approximation. This requires long integration times in order to maintain accuracy and keep the error in check. Can we modify the method to achieve a higher order in time? It turns out, indeed, we can, with just a little more effort.


8.1.1 Split evolution operator method

If we were to split the Hamiltonian as in Eq. (A:8.25) and achieve higher-order accuracy in ∆t, we would have to expand each term to higher powers, such as H² in both the numerator and the denominator. However, this would mean that higher-order derivatives would appear, such as ∂³/∂x³ and higher. This would surely be a recipe for numerical disaster. The trick is to split not the full Hamiltonian H, but the kinetic and potential energy operators.

Let $\hat T$ and $\hat V$ be the kinetic and potential energy operators, respectively. From Eq. (A:8.20), we have

$$
\hat H = \hat T + \hat V, \qquad
\hat T = -\frac{1}{2}\frac{\partial^2}{\partial x^2}, \qquad
\hat V = V(x).
\tag{8.1}
$$
Note that $\hat T$ is a differential operator, while $\hat V$ is an ordinary function operator. We will have to make some approximation to the evolution operator $e^{-i(\hat T+\hat V)\Delta t}$.

Suppose we want to keep the approximate evolution operator in some exponential form in order to preserve unitarity, i.e., normalization (we will worry about how to evaluate it shortly). Because of the differential nature of $\hat T$, it is preferable to separate $\hat T$ from $\hat V$ in the exponents for technical reasons.

At first, it might seem reasonable that we could try something like

$$
e^{-i(\hat T+\hat V)\Delta t} = e^{-i\hat T\Delta t}\, e^{-i\hat V\Delta t}.
\tag{8.2}
$$

But this equality is not exactly correct. It is only an approximation, because the operators $\hat T$ and $\hat V$ do not commute in general. In other words, the order of the operators is important. For instance,

$$
\hat T\hat V\psi(x) = -\tfrac{1}{2}(V\psi)'' \;\neq\; -\tfrac{1}{2}V\psi'' = \hat V\hat T\psi(x).
$$

In quantum mechanics, we express this inequality as a commutator

$$
[\hat T, \hat V] = \hat T\hat V - \hat V\hat T.
\tag{8.3}
$$

Two operators commute if their commutator is zero. But $[\hat T, \hat V] \neq 0$ for non-constant potentials, so they do not commute in general. Even so, our first intuition (8.2) is not without merit, because it can be shown that

$$
e^{-i(\hat T+\hat V)\Delta t} = e^{-i\hat T\Delta t}\, e^{-i\hat V\Delta t} + \frac{1}{2}[\hat T, \hat V](\Delta t)^2 + O[(\Delta t)^3],
\tag{8.4}
$$


and it is therefore accurate to first order in ∆t, as is e−iV∆t e−iT∆t. Equation (8.4) is a manifestation of the Baker-Campbell-Hausdorff formula for non-commuting operators.

If we split the potential operator into two halves and sandwich the kinetic energy term in between as¹

e−i(T+V)∆t ≈ e−iV∆t/2 e−iT∆t e−iV∆t/2, (8.5)

we can show that Eq. (8.5) is accurate to second order in ∆t, and the leading error correction is of third order (Exercise E8.1),

−(i/12) ( [[T, V], T] + (1/2) [[T, V], V] ) (∆t)³. (8.6)

Equation (8.5) is the approximate evolution operator we will use.
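The orders of accuracy claimed in Eqs. (8.4)–(8.6) can be checked numerically on a toy model. The sketch below (not from the text; the 2×2 Hermitian matrices and the helper expm_h are illustrative stand-ins for T and V) verifies that halving ∆t shrinks the symmetric splitting error by about 2³:

```python
import numpy as np

def expm_h(A, t=1.0):
    # exp(-i*t*A) for a Hermitian matrix A via eigendecomposition
    w, U = np.linalg.eigh(A)
    return (U * np.exp(-1j*t*w)) @ U.conj().T

# toy non-commuting stand-ins for T and V (2x2 Hermitian matrices)
T = np.array([[0.0, 1.0], [1.0, 0.0]])
V = np.array([[1.0, 0.0], [0.0, -1.0]])

def split_error(dt):
    exact = expm_h(T + V, dt)                                  # e^{-i(T+V)dt}
    split = expm_h(V, dt/2) @ expm_h(T, dt) @ expm_h(V, dt/2)  # Eq. (8.5)
    return np.linalg.norm(exact - split)

r = split_error(0.1)/split_error(0.05)
print(r)   # ~8 = 2^3: the local error of the symmetric split is third order
```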

Over a discrete grid, the split operator (8.5) is most efficiently evaluated by Fourier transform. The reason is that the exponential factor eikx (a plane wave) is an eigenfunction of the kinetic energy,

T eikx = (1/2)k² eikx,  Tⁿ eikx = ((1/2)k²)ⁿ eikx. (8.7)

Accordingly, by the definition of an exponential operator (A:8.22), we have

e−iT∆t eikx = [ 1 + (−iT∆t) + (−iT∆t)²/2! + ... ] eikx

           = [ 1 + (−i(1/2)k²∆t) + (−i(1/2)k²∆t)²/2! + ... ] eikx

           = e−ik²∆t/2 eikx. (8.8)

It says that the result of an exponential differential operator acting on a plane wave function is a constant times the plane wave itself, i.e., eikx is an eigenfunction of the operator e−iT∆t with the eigenvalue e−ik²∆t/2. This fact will aid us immensely.

¹Here is the art of scientific computing showing itself again.


As before, let ψ0 = ψ(x, t0), ψ1 = ψ(x, t0 + ∆t). Furthermore, we define the following Fourier pair

φV(k) = (1/√2π) ∫_{−∞}^{∞} e−iV(x)∆t/2 ψ0(x) e−ikx dx, (8.9)

e−iV(x)∆t/2 ψ0(x) = (1/√2π) ∫_{−∞}^{∞} φV(k) eikx dk. (8.10)

Given the evolution operator (8.5) and the above definitions, we obtain ψ1 from ψ0 as

ψ1 = e−iV∆t/2 e−iT∆t e−iV∆t/2 ψ0

   = e−iV∆t/2 e−iT∆t (1/√2π) ∫_{−∞}^{∞} φV(k) eikx dk

   = e−iV(x)∆t/2 (1/√2π) ∫_{−∞}^{∞} e−ik²∆t/2 φV(k) eikx dk. (8.11)

By the position-momentum relations (A:8.33a)–(A:8.33b), the last integral is equivalent to two nested Fourier transforms,²

ψ(x, t0 + ∆t) = e−iV(x)∆t/2 F⁻¹[ e−ik²∆t/2 F[ e−iV(x)∆t/2 ψ(x, t0) ] ]. (8.12)

This is the final result of the split evolution operator method. For convenience, we will refer to it as the SEO method. It is a variation of a class of methods known as pseudospectral methods [10].

The SEO algorithm is elegantly expressed as a pair of Fourier and inverse Fourier transforms that can be efficiently implemented using FFT. The code snippet is as follows.

 1  dk = 2*np.pi/(M*h)                            # ∆k
 2  k = np.arange(-M//2, M//2)*dk                 # k vector
 3  eVdt = np.exp(-1j*0.5*V*dt)                   # e−iV∆t/2
 4  ko = np.concatenate((k[M//2:], k[:M//2]))     # k ordered properly
 5  eTdt = np.exp(-1j*0.5*ko*ko*dt)               # e−ik²∆t/2
 6  while (t <= 4):
 7      vpsi = eVdt*psi                           # ϕ = e−iV∆t/2 ψ0
 8      phi = eTdt*np.array(fft.fft_rec(vpsi, L)) # φ = e−ik²∆t/2 F[ϕ]
 9      psi = eVdt*fft.ifft_rec(phi, L)           # ψ1 = e−iV∆t/2 F⁻¹[φ]
10      ......

²This relation may be more formally obtained by inserting a complete basis set |k⟩, using Dirac notation: e−iT∆t e−iV∆t/2 |ψ0⟩ = ∫dk e−iT∆t |k⟩⟨k| e−iV∆t/2 |ψ0⟩ = ∫dk e−ik²∆t/2 |k⟩⟨k| e−iV∆t/2 |ψ0⟩. The last step is identical to the integral in Eq. (8.11).

The above code segment may be directly integrated into Program A:8.3, assuming L (integer), M = 2^L, and N = M − 1 are properly set (for larger numbers of grid points, ≥ 128, use the NumPy FFT codes for speed; see the end of Section 8.1.2). It defines the momentum grid size ∆k and initializes the momentum grid according to Eq. (A:8.34), interpreting the momentum range as containing both positive and negative values as required physically. The exponential potential energy operator is readily evaluated using element-wise array operations, given the potential array V.

We should take care, however, to calculate the kinetic energy operator correctly. Because the FFT code returns the negative momentum values in the second half of the array, we reorder the momentum grid accordingly (line 4). When the exponential kinetic energy operator is evaluated next, we know it will be in the correct order for multiplication with the returned FFT array (line 8).
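As a self-contained illustration of Eq. (8.12), the sketch below propagates a Gaussian wavepacket with NumPy's built-in FFT (the grid parameters and the V = 0 free-particle test are illustrative choices, not the book's Program A:8.3; np.fft.fftfreq returns the momenta already in FFT order, which replaces the manual reordering of line 4):

```python
import numpy as np

# illustrative free-particle test (V = 0), where the SEO factors are exact
M, xmax, dt = 1024, 30.0, 0.01
x = np.linspace(-xmax, xmax, M, endpoint=False)
h = x[1] - x[0]

k = 2*np.pi*np.fft.fftfreq(M, d=h)       # momentum grid, already in FFT order
V = np.zeros(M)                          # potential array
eVdt = np.exp(-1j*0.5*V*dt)              # e^{-iV dt/2}
eTdt = np.exp(-1j*0.5*k*k*dt)            # e^{-i k^2 dt/2}

k0, sigma = np.sqrt(10.0), 2.0           # net momentum and width, as in the text
psi = np.exp(-(x + 10.0)**2/(4*sigma**2) + 1j*k0*x)
psi /= np.sqrt(h*np.sum(np.abs(psi)**2)) # normalize on the grid

for _ in range(200):                     # evolve to t = 2 by Eq. (8.12)
    psi = eVdt*np.fft.ifft(eTdt*np.fft.fft(eVdt*psi))

norm = h*np.sum(np.abs(psi)**2)
print(norm)   # unitary evolution: the norm stays 1 to machine precision
```

The packet drifts to the right with velocity k0 while broadening, as described for Figure 8.1.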

8.1.2 Scattering from a potential barrier

Let us apply the split evolution operator method to quantum scattering from a potential barrier,

V =  V0,  0 ≤ x ≤ a,
     0,   x < 0 or x > a.     (8.13)

The potential is zero everywhere except in 0 ≤ x ≤ a, where it equals the constant V0. Classically, a particle with energy E below the top of the barrier (E ≤ V0) will be reflected back, and will go over the barrier if E > V0.

Quantum mechanically, we represent the particle as a Gaussian wavepacket away from the barrier with a net momentum k0 = √(2E), and evolve the wave function using the SEO method. The results are displayed in Figure 8.1 and Figure 8.2.

Initially, a Gaussian wavepacket (width σ = 2) is placed to the left of the barrier with a net momentum k0 = √10 (E = V0 = 5, a = 1). As usual, the wavepacket broadens as it moves toward the barrier. When the leading edge reaches the barrier, some waves enter the barrier region, and some are reflected. As more waves encounter the barrier, strong interference occurs at


Figure 8.1: Scattering from a potential barrier (E = V0 = 5).

the edge of the barrier, leading to sharp, enhanced peaks (Figure 8.1). This happens within a relatively short time interval. Afterward, the wavepacket is separated into two wavepackets, one transmitted to the right and another reflected to the left. Their center velocities are equal in magnitude but opposite in direction (elastic scattering). Until they reach the boundaries, the wavepackets will move and broaden nearly like independent wavepackets. To minimize boundary effects (wrap-around), we chose the space range from x = −30 to +30, M = 2¹⁰ = 1024 points, and ∆t = 0.01.


Figure 8.2: Scattering waves from a potential barrier (|ψ|² over x and t).

The spatial and temporal distribution of scattered waves is shown in Figure 8.2. We can see at a glance the transmitted and reflected waves as two ridges running along x ∼ c ± k0t lines (c = some constant). These lines define the classical trajectories in such a scattering event. If we think of the wavepacket as consisting of particles of energies above and below the barrier, we can visualize some particles going over or reflecting back along these trajectories. Being matter waves in quantum mechanics, the waves peak near the classical trajectories. In addition, we see strongly deformed waves near x = 0 where incoming and outgoing waves mix and interfere. We see little such effect at the back edge of the barrier.

Transmission and reflection coefficients

We can quantify the transmission and reflection probabilities observed qualitatively above in collisions between the wavepacket and the barrier. To obtain these probabilities, we wait a sufficiently long time after the collision is over, and numerically integrate the probability density in


the positive (or negative) half space as

Tx = ∫_0^∞ |ψ(x, t→∞)|² dx,  Rx = 1 − Tx. (8.14)
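On the grid, Eq. (8.14) amounts to a one-line sum over the positive half space. The sketch below (the function name and the Gaussian test case are illustrative) checks it on a normalized wavepacket that has fully crossed to x > 0:

```python
import numpy as np

def coefficients(x, psi):
    # Eq. (8.14): integrate |psi|^2 over the positive half space (barrier near x = 0)
    h = x[1] - x[0]
    Tx = h*np.sum(np.abs(psi[x > 0.0])**2)
    return Tx, 1.0 - Tx

# quick check: a normalized Gaussian centered at x = +8 gives Tx ~ 1
x = np.linspace(-30.0, 30.0, 2048)
sigma = 2.0
psi = (2*np.pi*sigma**2)**(-0.25)*np.exp(-(x - 8.0)**2/(4*sigma**2))
Tx, Rx = coefficients(x, psi)
print(Tx, Rx)
```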

Figure 8.3: Transmission () and reflection (•) coefficients from a potential barrier (a = 1, V0 = 1), plotted against E/V0 with the classically forbidden and allowed zones marked. The solid curves are analytic results (8.15).

Figure 8.3 shows sample transmission (Tx) and reflection (Rx) coefficients in collisions of a wavepacket with a potential barrier of width a = 1 a.u. (0.53 Å) and height V0 = 1 a.u. (27.2 eV). The width of the wavepacket is σ = 6, and the number of grid points is 2048.

The transmission coefficient Tx starts from zero at very low energies, E/V0 ∼ 0, and increases with increasing E. In contrast, the classical transmission should be zero for all E/V0 < 1, since the particle does not have sufficient energy to go over the barrier. This zone, 0 < E/V0 < 1, is labeled classically forbidden in Figure 8.3. Even if we take into account the energy spread in the wavepacket, it is not enough to yield any appreciable transmission probability classically.

Quantum mechanically, however, the particle can tunnel through the barrier when E < V0. As a result, the tunneling probability increases with


energy. The cross-over where Tx = Rx = 1/2 occurs well within the classically forbidden zone.

Toward higher energies in the classically allowed zone, E/V0 > 1, the transmission coefficient approaches the asymptotic limit of one. Again, this is unlike the classical behavior, where the transmission coefficient is expected to jump to one as soon as we enter the classically allowed zone.

Scattering from a simple potential barrier is also solvable analytically. The transmission coefficient, or more precisely its inverse, is given by

Tx⁻¹ = 1 + V0²/(4E|V0 − E|) ×  sinh²(a√(2(V0 − E))),  E ≤ V0,
                               sin²(a√(2(E − V0))),   E > V0.    (8.15)
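Equation (8.15) is straightforward to code for plotting the solid curves of Figure 8.3. A sketch (the function name is illustrative; the removable singular point E = V0 is simply avoided here):

```python
import numpy as np

def tx_analytic(E, V0=1.0, a=1.0):
    # transmission coefficient of Eq. (8.15), atomic units (m = hbar = 1)
    E = np.asarray(E, dtype=float)
    d = V0 - E
    s2 = np.where(d >= 0.0,
                  np.sinh(a*np.sqrt(2.0*np.abs(d)))**2,   # tunneling, E <= V0
                  np.sin(a*np.sqrt(2.0*np.abs(d)))**2)    # over the barrier, E > V0
    return 1.0/(1.0 + V0**2*s2/(4.0*E*np.abs(d)))

# above-barrier resonance: Tx = 1 when a*sqrt(2(E - V0)) = n*pi
E_res = 1.0 + np.pi**2/2.0      # n = 1, with a = V0 = 1
print(float(tx_analytic(E_res)))
```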

Comparing with the analytic result (solid curves in Figure 8.3), we see overall excellent agreement between the numerical data and theoretical results. On closer examination, we notice that the numerical Tx data are slightly above the analytic curve in the classically forbidden zone, and slightly below the curve in the classically allowed zone.

To understand this, we recall that our particle is actually a wavepacket that has a momentum spread. It does not have a well-defined energy. Rather, we can think of it as a distribution of energies below and above the center energy E. Consequently, the high-energy tail in the wavepacket can tunnel, or even go over the barrier directly, with much higher probabilities than the center energy. This leads to an enhanced probability, and therefore a higher Tx, compared with the analytic results for a given (well-defined) E. On the other hand, the opposite happens in the classically allowed zone. Here, the low-energy tail of the wavepacket has reduced probabilities of transmission, thus a lower Tx than the analytic curve.

Resonant scattering

If the potential barrier is replaced by the potential well (Figure 8.4), there is no classically forbidden zone, and tunneling will play no role in the scattering. What will the scattering picture look like?

The calculated transmission and reflection coefficients are shown in Figure 8.5 for a potential well with a = 3.8 a.u. (2 Å) and V0 = 1 a.u. At low energies, the behavior is qualitatively similar to scattering from a potential barrier, but the rise is much more rapid. Even though there is no classically


Figure 8.4: The potential well (depth −V0, width a, incident energy E).

Figure 8.5: Transmission () and reflection (•) coefficients from a potential well (a = 3.8, V0 = 1). The solid curves are analytic results (8.15).

forbidden zone, transmission starts from zero at E ∼ 0, not from one as intuition might lead us to believe.

More interesting still is the fact that Tx reaches one at E ∼ 0.4 first, and again at ∼ 2, etc. These maxima may be traced to the resonance condition of scattering at certain energies. For a potential well, the analytic result (8.15) is equally valid if we replace V0 → −V0. We see that Tx = 1 whenever sin(a√(2(E + V0))) = 0, i.e.,

a√(2(E + V0)) = nπ,  E = n²π²/(2a²) − V0,  n = 1, 2, 3, ... (8.16)


This condition corresponds to the de Broglie wavelength

a = nλ/2. (8.17)

When the above condition is satisfied, standing waves are formed inside the potential well (see Eq. (A:6.71), Section A:6.6, Figure A:6.17). When the incident wave is reflected back and forth in the well, the reflections end up in phase with each other, and any buildup inside the well is “swept” over by the incident wave. The potential well is transparent to wave propagation at this energy. The two peaks in Figure 8.5 are for n = 2 and 3, respectively (the n = 1 resonance is forbidden in this case because E would be negative according to (8.16)). At higher energies, resonance becomes less noticeable since the transmission coefficient is practically unity. Note that quantum resonance (8.16) is basically a statement of wavelength matching, analogous to frequency matching in the classical resonance of a driven oscillator (Section A:6.6).
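As a quick check of Eq. (8.16) against Figure 8.5 (a sketch):

```python
import numpy as np

a, V0 = 3.8, 1.0                         # well parameters of Figure 8.5 (a.u.)
n = np.arange(1, 4)
E = n**2*np.pi**2/(2.0*a**2) - V0        # resonance energies, Eq. (8.16)
print(E)   # n = 1 is forbidden (E < 0); n = 2, 3 fall near E/V0 ~ 0.4 and 2
```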

The simulation reproduces the resonance structures in the transmission coefficient. But the numerical values only get close to one near the resonances, without fully reaching it. This is because the wavepacket is not monochromatic, as stated earlier. It also explains the discrepancy between numerical and analytic results for E/V0 values in [0.2, 2]. If the width of the wavepacket is increased, the momentum distribution will become sharper, and the discrepancy will become negligible.

Computational speed and profiling

There are several factors we need to consider for accurate extraction of transmission coefficients. As mentioned before, in order to compare with theoretical predictions for monochromatic beams, we should use broad wavepackets in position space. We have used σ = 6 for the calculations discussed above. This requires a large range for the propagation of the wave function from beginning to end to avoid spurious wrap-around effects at the boundaries. This is especially true for low-energy collisions, which occur over a longer period of time, and whose inherent high-energy tails can travel quite large distances.

As the momentum distribution becomes narrower, we also need to increase the space range d so that the momentum grid size, ∆k = 2π/d from Eq. (A:8.34), is small enough to adequately sample the wave function in momentum space. A larger space range requires more grid points.


Even though FFT is fast, we need still faster FFT codes due to repeated calls in the SEO method. We can guess that the Fourier transforms take up the most computing time, but it is generally a good idea to profile code execution before taking optimization measures (see Section 8.B). Profiling shows that our standard Python FFT functions fft_rec() and ifft_rec() would be too slow for scattering calculations at many energies. Fortunately, faster versions of compiled FFT codes are available from the NumPy libraries. We have used these codes in our calculations. The only required change is the following two lines in the snippet given earlier.

phi = eTdt*np.fft.fft(vpsi)     # φ = e−ik²∆t/2 F[ϕ]
psi = eVdt*np.fft.ifft(phi)     # ψ1 = e−iV∆t/2 F⁻¹[φ]

With NumPy FFT functions, we can solve scattering problems with large grid sizes (e.g., 4096 or more) with interactive animation in real time, even on a personal computer. In fact, we can even extend the SEO method to 2D with 2D FFT.

8.2 Quantum transitions and coupled channels

So far (Sections A:8.3 and 8.1) we have discussed simulation methods that solve the TDSE on a discretized space grid. Another class of methods approaches the problem differently, and in some ways complementarily. Rather than finding the wave function on a grid, we solve for it within a chosen basis set. The space variable x remains continuous. The basis functions are called channels, and we shall refer to this approach as the coupled channel (CC) method. We briefly described a limited two-channel method in Section A:8.4. That restriction is removed here, allowing an arbitrary number N of channels to be included. This method is particularly suited to studying quantum transitions between states of quantum systems interacting with external perturbations (Ch. XII of Ref. [8]).

8.2.1 The coupled channel method

In developing the CC method, it is useful for us to picture a concrete system, such as the particle in a box (infinite potential well). The particle is prepared in a well-defined initial state. An external interaction, such as


a laser, is turned on at t = 0 and turned off at t = τ. During the time the laser is on, the electron-laser interaction will cause the electron’s wave function and density to change. Our objective is to follow that change in time.

Let us assume that the full Hamiltonian has two parts

H = H0 + V, (8.18)

where H0 is the unperturbed Hamiltonian of the isolated system without any external interactions, and V is the perturbation. The decomposition should be such that the solutions of the isolated system are already known, exactly or numerically. For instance, for the particle in a box, the unperturbed Hamiltonian H0 consists of the kinetic energy and the infinite potential well, and V is any other external interaction. Even though V is called the perturbation, it is not necessarily small.

The solutions of the unperturbed Hamiltonian consist of a set of eigenstates un(x) with eigenenergies En, namely,

H0 un = En un, (8.19)

with n being the quantum number.³ As stated above, we assume that Eq. (8.19) has been solved, so un(x) and En are known. They will be our basis functions.

The spatial wave functions un are assumed to be normalized and orthogonal, i.e.,

∫ u∗m(x) un(x) dx = δmn. (8.20)

Including the exponential time factor, the time-dependent eigenfunction φn(x, t) of unperturbed stationary state n is

φn(x, t) = un(x) exp(−iEnt/ℏ). (8.21)

Since un(x) are eigenfunctions of H0 (8.19), we have

iℏ ∂φn/∂t = H0 φn. (8.22)

³Generally there is an infinite number of eigenstates, which make up the Hilbert space.


Now imagine the electron is in some initial state φ1 (the ground state, for instance) at t = 0. Without any external interaction, nothing much happens. To probe the system and make something happen, we switch on the laser. The electron will be driven (perturbed) by the laser, and will exchange energy with the laser field. Therefore, its initial state will have to change. Where does it go? It must go into the states allowed and available, i.e., the open eigenstates of H0. In other words, the system will undergo transitions to other states.

Let ψ(x, t) denote the wave function of the full Hamiltonian of the perturbed system. According to our discussion above, we expect ψ(x, t) to be a mixture of all the possible eigenstates of H0,

ψ(x, t) = ∑n an(t) φn(x, t) = ∑n an(t) un(x) exp(−iEnt/ℏ). (8.23)

Equation (8.23) is a superposition of eigenstates, and is known as a coherent state. The amplitudes, an(t), tell us how much state n contributes to the coherent state.

Formally, we can extract an(t) by projection in the Hilbert space. This is done by multiplying both sides of Eq. (8.23) by u∗m(x), and integrating over x as

∫ u∗m(x) ψ(x, t) dx = ∑n an(t) exp(−iEnt/ℏ) ∫ u∗m(x) un(x) dx

                    = ∑n an(t) exp(−iEnt/ℏ) δmn, (8.24)

where we have used the orthogonality relation (8.20). The Kronecker delta δmn kills the summation so only one term (n = m) survives,

an(t) = exp(iEnt/ℏ) ∫ u∗n(x) ψ(x, t) dx. (8.25)

The physical interpretation of the an is that they give the probability of finding the electron in state n,

Pn(t) = |an(t)|². (8.26)

This is the occupation (or transition) probability of state n. At any given time, the total probability of the system summed over all states must be unity,

∑n Pn(t) = ∑n |an(t)|² = 1. (8.27)


Since we do not know the wave function ψ(x, t), our goal now is to find the coefficients an(t), which uniquely determine ψ. Once found, we can extract any physical observable from the wave function. Clearly, they depend on the perturbation (the laser field) V(x, t) (we will give an explicit form soon).

The wave function ψ(x, t) in Eq. (8.23) evolves according to the TDSE (A:8.20), with the full Hamiltonian (8.18). To solve for the amplitudes an(t), we first substitute ψ(x, t) into the LHS of Eq. (A:8.20) to obtain

iℏ ∂ψ/∂t = ∑n [ iℏ dan(t)/dt + En an(t) ] un exp(−iEnt/ℏ). (8.28)

Next, we take the full Hamiltonian (8.18) to operate on ψ on the RHS of Eq. (A:8.20), obtaining

Hψ = (H0 + V) ∑n an(t) un exp(−iEnt/ℏ)

   = ∑n (En + V) an(t) un exp(−iEnt/ℏ), (8.29)

where we have made use of the eigenequation (8.19).

Setting Eqs. (8.28) and (8.29) equal to each other, and canceling the terms involving En, we have

iℏ ∑n [dan(t)/dt] un(x) exp(−iEnt/ℏ) = ∑n an(t) V(x, t) un(x) exp(−iEnt/ℏ). (8.30)

Equation (8.30) still involves summations over the eigenstates un. We perform the same projection as in Eq. (8.24) onto state u∗m such that

iℏ ∑n [dan(t)/dt] exp(−iEnt/ℏ) δmn = ∑n an(t) Vmn(t) exp(−iEnt/ℏ). (8.31)

Here, Vmn is given by

Vmn(t) = ∫ u∗m V(x, t) un dx, (8.32)

and is known as the transition matrix element. Note that Vmn(t) does not depend on x; it is a function of time only, when the perturbation V(x, t) depends explicitly on t.


Eliminating the sum on the LHS of Eq. (8.31) and moving the factors to the RHS, we obtain

dam(t)/dt = −(i/ℏ) ∑n Vmn(t) exp(iωmnt) an(t), (8.33)

with ωmn = (Em − En)/ℏ, m, n = 1, 2, ..., N.

The value ℏωmn gives the transition energy between states m and n, and N specifies the number of states (or channels) included.

Equation (8.33) is the final result of the CC method. We have managed to transform the TDSE (A:8.20) into a system of coupled ODEs. They can be solved efficiently with our ODE solvers for a given interaction V(x, t) and initial values an(0). The interaction dynamics is entirely determined by the matrix elements Vmn. We can extract transition rates from the amplitudes after the interaction is switched off. The operations in (8.33) involve complex numbers. Conversion to real arithmetic is discussed in Section 8.C.

Once the amplitudes are available, we can obtain any physical quantities. For instance, the expectation value of energy is

⟨E⟩ = ⟨H0⟩ + ⟨V⟩ = ∑n En|an|² + ∑m,n a∗m an exp(iωmnt) Vmn, (8.34)

and that of position from Eq. (A:8.16) is reduced to

⟨x⟩ = ∑m,n a∗m an exp(iωmnt) ∫ u∗m x un dx = ∑m,n a∗m an exp(iωmnt) xmn. (8.35)

The values of Vmn and xmn are evaluated in the unperturbed basis.

We took the physically direct approach in the above formulation by explicitly including the exponential time factor exp(−iEnt/ℏ). If there is no interaction, V = 0, the amplitudes will be constant from Eq. (8.33), i.e., an(t) = an(0).

It is also possible to absorb the exponential factor exp(−iEnt/ℏ) into an(t) to arrive at a slightly different form. Let

bn(t) = an(t) exp(−iEnt/ℏ),  ψ(x, t) = ∑n bn(t) un(x). (8.36)

We can show that Eq. (8.33) reduces to (Exercise E8.2)

dbm(t)/dt = −(i/ℏ) [ Em bm(t) + ∑n Vmn(t) bn(t) ]. (8.37)


We can rewrite Eq. (8.37) in matrix form as (in a.u.)

db/dt = −i [E + V] b, (8.38)

where b is a column matrix, E a diagonal matrix with elements Emδmn, and V the matrix with elements given in Eq. (8.32). This alternative form is more compact for numerical computation, and may be more efficient for certain problems. We also see that if the interaction is zero, bn(t) = bn(0) exp(−iEnt/ℏ).

The CC method is a powerful method for studying quantum dynamics, and can be readily extended to 2D or 3D systems. However, like every method, it has limitations. The sum in Eq. (8.33) runs, in principle, over all possible eigenstates of the unperturbed system, n = 1 to ∞. In practice, we can only include a finite number of states, say 1 to N. The sum is truncated, forming a finite system (reduced Hilbert space [3]). Usually, the larger N is, the better the approximation. In some cases, not all states are accessible due to energy or symmetry considerations. Then, only the subset of relevant states needs to be included. Another difficulty arises if we need to include continuum states, i.e., unbound states, in the calculation. For instance, ionization studies involve the continuum states. In such cases, some additional approximation of the continuum states would be necessary, such as discretization of the continuum.

8.2.2 Particle in a box driven by a laser

Let us explore quantum transitions in the interaction of a particle in a box with a laser field. This example will illustrate the general implementation of the CC method. The same approach may be applied to other systems such as the SHO [37] (see Project P8.6).

Laser field and matrix elements

For a particle in a rigid box (infinite potential well) of width a, the unperturbed wave functions and eigenenergies are

un(x) = √(2/a) sin(nπx/a),  En = n²π²ℏ²/(2mea²). (8.39)

We use me for the mass to avoid confusion with the index number m. The eigenstates are the same as the classical standing waves on a string (A:6.72) (Figure A:6.17).


The laser-electron interaction potential is assumed to be

V(x, t) = eF(t)(x − a/2), (8.40)

where e is the electron charge and F(t) is the electric field of the laser pulse. We have chosen the zero of the potential at x = a/2 for convenience.

We choose the laser pulse F(t) to have a proper envelope so that it is turned on and off gradually,

F(t) = F0 sin²(πt/τ) cos(ωLt), 0 ≤ t ≤ τ. (8.41)

Here F0 is the field amplitude, τ the laser duration, and ωL the laser center frequency. The envelope function sin²(πt/τ) ensures that the laser is ramped up and down from zero (Figure 8.6, left).

Figure 8.6: The laser field (left), Eq. (8.41), and its power spectrum (right).

Because the duration τ of the laser pulse is finite, the power spectrum (Figure 8.6, right, computed from F(t) by FFT, Exercise E8.3) has a distribution in (virtual) photon energies. The two peaks show the predominant energies at the center frequencies ω = ±ωL over a continuous background which decreases rapidly with increasing |ω|. If the duration is large, the background becomes negligible, and the laser approximates a monochromatic beam of well-defined color.
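The pulse and the positive-frequency half of its spectrum can be generated in a few lines (a sketch with illustrative sampling parameters; Exercise E8.3 concerns the full spectrum of Figure 8.6):

```python
import numpy as np

F0, tau, omegal = 1.0, 30.0, 0.924       # illustrative pulse parameters (a.u.)
Nt = 4096
t = np.linspace(0.0, tau, Nt, endpoint=False)
F = F0*np.sin(np.pi*t/tau)**2*np.cos(omegal*t)    # Eq. (8.41)

power = np.abs(np.fft.rfft(F))**2                 # one-sided power spectrum
omega = 2*np.pi*np.fft.rfftfreq(Nt, d=t[1] - t[0])
peak = omega[np.argmax(power)]
print(peak)   # near omegal, within the ~2*pi/tau bandwidth of the finite pulse
```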

Accordingly, the matrix element (8.32) is

Vmn(t) = eF(t) ∫ u∗m (x − a/2) un dx = eF(t)( xmn − (a/2)δmn ). (8.42)


With the unperturbed eigenstates (8.39), we can find analytic expressions for xmn (Exercise E8.3),

xmn =  a/2,                      if m = n,
       0,                        if m ≠ n and m + n is even,
       −(8a/π²) mn/(m² − n²)²,   if m ≠ n and m + n is odd.    (8.43)
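Equation (8.43) is easy to spot-check by direct numerical quadrature (a sketch; the grid resolution is an illustrative choice):

```python
import numpy as np

a = 4.0                                  # box width used later in the text (a.u.)
x = np.linspace(0.0, a, 20001)
h = x[1] - x[0]

def u(n):                                # box eigenfunctions of Eq. (8.39)
    return np.sqrt(2.0/a)*np.sin(n*np.pi*x/a)

def x_mn(m, n):                          # <m|x|n> by direct quadrature
    return h*np.sum(u(m)*x*u(n))

analytic = -8.0*a/np.pi**2*(1*2)/(1**2 - 2**2)**2   # Eq. (8.43) for m = 1, n = 2
print(x_mn(1, 2), analytic)              # both ~ -16a/(9*pi^2) ~ -0.7205
```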

Combining Eqs. (8.42) and (8.43), we have the interaction matrix elements

Vmn(t) = −eF(t) ×  (8a/π²) mn/(m² − n²)²,  if m ≠ n and m + n is odd,
                   0,                      otherwise.    (8.44)

The value Vmn is zero unless m + n is odd. The states of even and odd quantum number n have odd and even parities, respectively, about the center of the potential well, i.e., un(a/2 − x) = (−1)ⁿ⁺¹ un(a/2 + x). Equation (8.44) shows that the interaction only couples states of opposite parities. Symmetry-dependent coupling is common in many actual interacting systems, giving rise to transition selection rules. We can regard the interaction with the laser (8.40) as a dipole operator. The dipole selection rule for the particle in a box is that m + n must be odd.

If we assume that the particle is in the ground state (n = 1) at t = 0, the initial condition is

an(0) =  1, if n = 1,
         0, if n ≠ 1.    (8.45)

With Eqs. (8.41), (8.44), and (8.45), we are ready to solve Eq. (8.33) or (8.37) for the amplitudes, either an(t) or bn(t) (note |an(t)| = |bn(t)|). The full program is given in Program 8.1. Its core part consists of functions to evaluate the transition matrix and the equations of motion, and is listed below.

def flaser(t):                           # laser field
    return F0*(np.sin(np.pi*t/tau))**2 * np.cos(omegal*t)

def vmat(a, N):                          # generate <m|V|n>
    vmn = np.zeros((N, N))
    c = -8.0*a/np.pi**2
    for m in range(N):
        for n in range(m+1, N, 2):       # every other state
            vmn[m, n] = c*(m+1)*(n+1)/((m-n)*(m+n+2))**2
            vmn[n, m] = vmn[m, n]        # symmetry
    return vmn

def cc_box(bn, t):                       # CC for particle in box
    return -1j*(En*bn + flaser(t)*np.dot(vmn, bn))

The function vmat() computes the time-independent part, vmn, of the transition matrix Vmn(t) = vmnF(t) from Eq. (8.44). It first initializes the N × N matrix to zero. The nested loops iterate through the rows and columns, calculating the upper diagonal matrix elements only and filling in the lower diagonals by symmetry. The inner loop starts one position off the diagonal and skips every other element with stride 2, because the diagonal and the skipped elements are zero: the sums of their quantum numbers m + n are even.

We choose to solve the alternative form (8.38) since we can take advantage of NumPy arrays to evaluate the RHS efficiently and compactly. The function cc_box() computes the equations of motion of the CC method in a single line. The first term uses element-wise multiplication, assuming En is an array holding the eigenenergies, and the second term invokes matrix multiplication via np.dot().

The main program integrates cc_box() with RK4, and the results for the probability density as a function of time are displayed in Figure 8.7. The box width is a = 4 a.u. (2.1 Å), so the energies of the ground and first excited states are E1 = 0.308 and E2 = 1.23 a.u., or 8.38 eV and 33.5 eV, respectively.
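The workflow can be sketched in a self-contained form. The RK4 stepper below is our own minimal stand-in for the ODE library used by Program 8.1; the parameters follow Figure 8.7 (a = 4, F0 = Fc, τ = 30, resonant ωL), so the final occupations should come out near the text's P1 ∼ 0.836 and P2 ∼ 0.164.

```python
import numpy as np

a, N = 4.0, 5
En = np.array([(n+1)**2*np.pi**2/(2*a**2) for n in range(N)])  # box levels (a.u.)
F0, tau = 0.077, 30.0               # field amplitude ~ Fc, pulse duration
omegal = En[1] - En[0]              # resonant with the 1 -> 2 transition

def flaser(t):
    return F0*np.sin(np.pi*t/tau)**2*np.cos(omegal*t)

vmn = np.zeros((N, N))              # transition matrix, as in vmat()
for m in range(N):
    for n in range(m+1, N, 2):
        vmn[m, n] = vmn[n, m] = -8.0*a/np.pi**2*(m+1)*(n+1)/((m-n)*(m+n+2))**2

def cc_box(bn, t):                  # CC equations of motion
    return -1j*(En*bn + flaser(t)*np.dot(vmn, bn))

def rk4(f, y, t, h):                # one classic RK4 step (our stand-in)
    k1 = f(y, t)
    k2 = f(y + 0.5*h*k1, t + 0.5*h)
    k3 = f(y + 0.5*h*k2, t + 0.5*h)
    k4 = f(y + h*k3, t + h)
    return y + h/6.0*(k1 + 2*k2 + 2*k3 + k4)

bn = np.zeros(N, complex)
bn[0] = 1.0                         # start in the ground state
t, h = 0.0, 0.01
while t < tau:
    bn = rk4(cc_box, bn, t, h)
    t += h
prob = np.abs(bn)**2
print(prob)                         # occupation probabilities at pulse end
```

The sum of the probabilities should stay very close to 1, a quick check that the integration step size is adequate.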

The field amplitude F0 indicates the interaction strength. The relative strength, however, also depends on the characteristic energy scale of the system, e.g., the ground state energy. Over the width of the potential well, the change in the potential energy due to the perturbation is eF0a. As such, we define the characteristic field amplitude as

Fc = E1/(ea) = π²ℏ²/(2 m_e e a³).    (8.46)

If F0 is small compared to the characteristic value, F0 ≪ Fc, the perturbation is weak. In this regime, we expect the transition amplitudes to be small, and they can be described by perturbation theory. The limit F0 ≫ Fc corresponds to strong fields.


Figure 8.7: Top: snapshots of the real (red) and imaginary (blue) parts of the wave function and probability density for a particle in a box in a laser field. Bottom: filled contour distribution of probability density in space and time.


The results shown in Figure 8.7 are for intermediate fields with thefollowing parameters in atomic units (and MKS): amplitude F0 = Fc =0.077 (4×1010 V/m), duration τ = 30 (7.3×10−16 s), and center frequencyωL = E2 − E1 = 0.924 (3.8 × 1016 rad/s). The value of ωL is chosen to beresonant with the transition between ground and first excited states.
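The atomic-unit to SI conversions quoted above can be checked quickly; the conversion factors below are standard atomic-unit values (quoted to four digits) supplied by us, not taken from the text.

```python
# Converting the Figure 8.7 pulse parameters from atomic units to SI.
au_field = 5.142e11     # V/m per atomic unit of electric field
au_time  = 2.419e-17    # s per atomic unit of time
au_omega = 4.134e16     # rad/s per atomic unit of angular frequency

F0, tau, wL = 0.077, 30.0, 0.924
print(F0*au_field)      # field amplitude in V/m
print(tau*au_time)      # pulse duration in s
print(wL*au_omega)      # center frequency in rad/s
```

The three printed values reproduce the text's 4 × 10¹⁰ V/m, 7.3 × 10⁻¹⁶ s, and 3.8 × 10¹⁶ rad/s.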

Up to t/τ ∼ 1/3, the ground state remains the dominant state, and there is little visible change in the probability density. The laser field is still ramping up, and there is no significant transition to other states yet. The real and imaginary parts (Figure 8.7, top) of the wave function ψ(x, t) oscillate as they must with the simple exponential time factor of Eq. (8.21), but the probability density |ψ|² is roughly stationary.

As time increases, we observe the peak of |ψ|² move away from the center of the well, its shape significantly distorted from sinusoidal, oscillating back and forth until the pulse ends. From the filled contour distribution (Figure 8.7, bottom, made with plt.contourf()), roughly between t/τ = 0.4 and 1, we identify about 2.5 full oscillations. This means that the oscillation period is approximately 7, very close to that of the laser, TL = 2π/ωL = 6.8.

Classically, we would expect the motion of the particle to match the oscillation period of the laser field (Section A:6.1). As discussed earlier, the quantum equivalent is given by the Ehrenfest theorem (A:8.19), which describes the change of the expectation value 〈x〉 (see Figure A:8.4). Here, the external laser field plays the role of the restoring force of the SHO potential.

The state occupation probabilities, Pn = |bn(t)|², are shown in Figure 8.8. Five states (channels) are included in the calculation. Throughout the pulse duration, only the first excited state (n = 2) has appreciable probability other than the ground state. Because the laser center frequency is resonant with the 1 → 2 transition, we expect strong coupling to the first excited state. The other states, n = 3, 4, 5, are also populated, though their probabilities are negligible. Because of the envelope function of the laser field, the laser contains photon frequencies other than the center frequency (Figure 8.6). Transitions to other states are possible but with reduced probability.

Observe that the third excited state (P4) has a higher probability thanthe second excited state (P3) up to t/τ ∼ 0.2, but the latter overtakesthe former afterward. This interesting interplay has to do with the cou-pling between the states and higher order effects. States n = 1 and 4 aredirectly coupled, i.e., the transition matrix element V14 is nonzero. There


Figure 8.8: Occupation probabilities of states 1–5 for a particle in a box (a = 4) in a laser field. The laser center frequency is ωL = E2 − E1, and the amplitude F0 = Fc.

are photons energetic enough to make direct, first-order transitions possiblebetween these two states, albeit with much reduced intensity. But statesn = 1 and 3 are not directly coupled, V13 = 0, so the selection rule forbidstransitions to first order.

However, as the n = 2 state is being populated, a second-order process becomes more efficient: 1 → 2 → 3.⁴ The intermediate state n = 2 is directly coupled to the other two states. Two different photons are absorbed (Figure 8.9, left). In this way, the probability P3 grows rapidly and becomes higher than P4 once P2 is appreciable. Final transition probabilities can be obtained after the interaction is over. At the end of the pulse, the first excited state has an occupation probability P2 ∼ 0.164 and the ground state has P1 ∼ 0.836, so practically these two states make up the whole wave function, effectively a two-state system (Section A:8.4).

In addition to the overall trend, we also see small oscillations in the occupation probabilities at twice the laser center frequency. They are a result of the modulation of the field intensity, which varies as cos²(ωLt).

⁴In scattering theory, the scattering operator is written as V + V GV + · · ·, showing the first-, second-, and higher-order expansions, with G the propagator. See Chapter 12, Section 12.2.2, and Eq. (12.15).

Strong fields and multiphoton excitation

Up to intermediate fields, we expect direct, first-order transitions to be the most efficient pathway to dipole-allowed states. A single photon is absorbed in first-order processes. Transitions to dipole-forbidden states, such as the 1 → 3 transition seen above, must proceed by a nonresonant, ladder-up pathway via the intermediate state (n = 2), which is directly coupled to the states below (1) and above (3). Two photons of different energies are absorbed sequentially in this second-order process.

Is it possible to have resonant transitions to dipole-forbidden states? Ofcourse, for such transitions to occur, they have to be second or higher orderprocesses. As it turns out, the answer is yes, provided the laser field isstrong enough.

Figure 8.9: Sequential, nonresonant two-photon (left) and resonant multiphoton (right) excitations from state i to f. The former occurs via the intermediate state m, and the latter via a virtual state (dashed line).

Figure 8.9 (right) illustrates a resonant two-photon transition. The transition is second order, and two identical photons are absorbed, each having an energy (Ef − Ei)/2. The intermediate state is a virtual state [19]. Resonant two-photon excitation is qualitatively different from nonresonant sequential excitation, and is called multiphoton excitation.

If the fields are strong, the intensity and photon flux are sufficiently high that multiphoton processes can be realized. Figure 8.10 shows transitions in a strong field, F0 = 5Fc. The center frequency is ωL = (E3 − E1)/2, so the resonant multiphoton process involves two photons between states 1 and 3 (see Project P8.8).

In the beginning the field is relatively weak while ramping up. During this phase, direct transitions to the dipole-allowed states (2 and 4) produce higher occupation values than the dipole-forbidden states (3 and 5), though their absolute magnitudes are small. As the field intensity increases, the photon flux increases quickly, and the resonant multiphoton process takes over. Starting from the middle of the pulse, the occupation of state 3 is second only to the initial state. It stabilizes toward the end of the pulse as it ramps down, yielding a final transition probability P3 ∼ 0.352, larger than P2 ∼ 0.0462 by a factor of ∼ 7. The initial state is depleted to P1 ∼ 0.601. The transition probabilities to the other two states (4 and 5) are several orders of magnitude smaller.

Figure 8.10: Occupation probabilities of states in a strong field, F0 = 5Fc. The laser center frequency is ωL = (E3 − E1)/2.

It is also possible to have multiphoton resonant transitions involving more than two photons. If the center frequency is such that MωL = Ef − Ei (M being an integer), we expect M-photon excitations to occur between states i and f. Since this would be an even higher order process, it would require stronger fields to be effective. For ultraintense fields, the stationary states themselves could even be modified (light-dressed states) [27].

Chapter summary

We have discussed a more accurate and versatile method, the SEO method. Its framework is less direct than the first-order split operator method, and requires a basic understanding of quantum mechanics in coordinate and momentum spaces and some operator algebra. However, the SEO algorithm is concise and elegant, and can be programmed efficiently using FFT. Furthermore, it may be readily extended to 2D or higher dimensions. Our case studies show that it is a powerful method for scattering problems.

For quantum transitions in external fields, the coupled channel method using basis expansions is most applicable. It requires a suitable basis that can adequately represent the possible excited states (discussed in Chapter A:9). The method works equally well in 1D, 2D, or 3D systems. We can also use this method to study coherent states and quantum revival (Section A:8.5).

8.3 Exercises and Projects

Exercises

E8.1 Let A, B be two arbitrary operators, and λ a small parameter. To leading order in λ, show that:

(a) e^{λ(A+B)} − e^{λA} e^{λB} = −(λ²/2) [A, B] + O(λ³);

(b) e^{λ(A+B)} − e^{λB/2} e^{λA} e^{λB/2} = (λ³/12) ([[A, B], A] + (1/2) [[A, B], B]) + O(λ⁴).

E8.2 Using Eqs. (8.36) and (8.33), fill in the steps leading to the alternativeexpression for the coupled channel method in terms of bn, Eq. (8.37).

E8.3 (a) Generate the power spectra of the laser pulse (8.41) by the FFTmethod. Choose ωL = 1, and compare τ = 1 and 5. (b) Verify thematrix elements xmn, Eq. (8.43).

Projects

P8.1 Explore the accuracy of the split-operator method in terms of spatialrange, grid size, and initial wavepacket shape.

(a) Modify Program A:8.3 to enforce the periodic boundary condition(A:8.14). First, add two statements just above line 34 to computethe correction to the first and last points in C (see Program A:8.2).Second, set the first and last points of the wave function psi to zero


right after line 34. This is equivalent to setting hard wall boundaries.Test that the modified program works correctly.

(b) Compute the errors in normalization from the modified and the original codes for free fall and plot them like in Figure A:8.5, preferably the absolute error |1 − ∫|ψ|²dx| on a semilog scale. This involves computing the normalization from the probability density by numerical integration, e.g., Simpson's method,

import integral as itg
...
cnorm = itg.simpson(pd, h)

Produce plots for the following cases: double the range to [−20, 20]only; then double the grid points N (N should be a power of 2 for bestaccuracy); finally double the width of the initial Gaussian σ. Discussand compare the results with each other.

Optionally, calculate and plot the average position 〈x〉 as a functionof time for different N , say 512, 1024, 2048, etc. Compared to theerror in the normalization in each case, are the average positions assensitive to N? Briefly explain.
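If the text's integral module is not at hand, a composite Simpson's rule is only a few lines; the Gaussian test density below is ours, chosen because its normalization is known exactly.

```python
import numpy as np

def simpson(y, h):
    """Composite Simpson's rule; len(y) must be odd (even number of intervals)."""
    return h/3.0*(y[0] + y[-1] + 4*np.sum(y[1:-1:2]) + 2*np.sum(y[2:-1:2]))

x = np.linspace(-10.0, 10.0, 2001)
h = x[1] - x[0]
sigma = 1.0
pd = np.exp(-x**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))  # |psi|^2, normalized

print(abs(1.0 - simpson(pd, h)))      # error should be tiny for this density
```

The same call pattern, simpson(pd, h), then drops into the project with pd computed from the evolved wave function.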

P8.2 Integrate the SEO algorithm, Eq. (8.12), into the split-operator codeProgram A:8.3. If you decide to make a standalone SEO program,replace the lines A:28–A:34 by the code segment after Eq. (8.12) andkeep everything else the same. Otherwise, use a conditional switch,or define a separate function for each method, so both methods cancoexist in the same program. You need to define L, M , and N perdiscussion following the code segment. Make sure to substitute ourFFT functions with NumPy versions if the number of grid points ismoderately large, say L ≥ 7. Name your program, e.g., seo.py.

Test the code against a known case, e.g., the free fall. Reproducethe results such as Figure A:8.8. Calculate 〈x〉 and compare with thetrend-line in the plot.

P8.3 Investigate the performance and relative accuracy of the SDLF, splitoperator, and the SEO methods. Reuse programs developed in previ-ous projects as much as possible.


(a) Select the wedged linear potential (Figure A:8.16) as a test case.Turn off animation, plot the probability density contours with eachmethod. Ensure that the results look the same. As usual, choose aGaussian wavepacket with reasonable parameters.

(b) Profile each method (see Appendix 8.B), and compare the time ittakes to complete a given job. Comment on your observation.

(c) Check the accuracy of each method by monitoring the normalization. Compute the normalization ∫|ψ|²dx using Simpson's rule. Plot the error 1 − ∫|ψ|²dx as a function of time (see Figure A:8.5). Compare and discuss your results.

If the grid size h is halved, how do you expect the error to change?Repeat the calculation by doubling the number of grid points. Brieflyexplain the results.

P8.4 Explore scattering from a potential barrier (8.13). If you have not yetwritten a SEO program (seo.py) as described in Project P8.2, do sobefore proceeding.

(a) Once your seo.py is working properly, use it to simulate scat-tering from a potential barrier. Choose the same parameters as inFigure 8.1. Keep animation on first, watch the wavepacket approach-ing the barrier, colliding with and eventually receding from it.

(b) Calculate the transmission (Tx) and reflection (Rx) coefficients at one energy, say E = k0²/2 ∼ 1, assuming a = 1 and V0 = 1 for the barrier. Evaluate the integral (8.14) after the waves have well separated from the barrier but before finite boundary effects cause the waves to rebound. Check your number against Eq. (8.15); the two should be approximately equal.

(c)∗ Calculate Tx and Rx as a function of E, and plot the results asshown in Figure 8.3. Turn off animation, and automate the process.Choose the width of the wavepacket σ ≥ 3 so as to simulate a particleof somewhat well-defined energy. Pay attention to the spatial range,as a wider wavepacket requires a larger range in order to avoid finiteboundary effects. This is especially important at lower energies. Ofcourse, always center the wavepacket away from the barrier before thecollision.


Comment on your results and any observations such as the relation-ships between E, σ, spatial range, or issues you have encountered.

P8.5 Investigate transmission and resonance scattering from a potentialwell, Figure 8.4. Either use the SEO program developed from ProjectP8.2, or create a new one as described there.

If we calculate the transmission coefficient directly from Eq. (8.14), we need to wait for the waves to pass the origin. Alternatively, we can calculate the transmission coefficient from the probability current J (see Exercise A:5) as

Tx = ∫_0^∞ J(0, t) dt,

where J(0, t) is the current at the origin. The above equation just measures the particle flux passing through the origin. The advantage of this method is that we need to check J only at the origin, and once it becomes sufficiently small, we can evaluate the time integral.
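A sketch of the current evaluation in atomic units (ℏ = m = 1), with the derivative by the three-point central difference; the grid and the plane-wave test are ours. For ψ = e^{ik0 x} the current should equal k0 everywhere in the interior:

```python
import numpy as np

def current(psi, h):
    """Probability current J = Im(psi* dpsi/dx) with hbar = m = 1."""
    J = np.zeros(len(psi))
    J[1:-1] = np.imag(np.conj(psi[1:-1])*(psi[2:] - psi[:-2])/(2*h))
    return J

x = np.linspace(-10.0, 10.0, 2001)
h = x[1] - x[0]
k0 = 1.5
psi = np.exp(1j*k0*x)          # plane wave of momentum k0
J = current(psi, h)
print(J[1000])                 # interior values should be close to k0
```

In the project, J would be evaluated at the grid point nearest the origin at each time step and accumulated with Simpson's rule.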

(a) Start the simulation with the parameters used to produce Fig-ure 8.5. Pick an energy, compute J(0, t) at each time step, and eval-uate the derivatives using the three-point formula (A:6.31a) (also seeProject A:P8.2). Stop the simulation once J(0, t) is decreasing andbecoming small enough. Plot J(0, t), and calculate Tx using Simp-son’s rule.

Once it works, automate the process to compute Tx at different ener-gies. Plot Tx vs E, and discuss your results.

(b) Decrease the width of the well, say a ∼ 2, such that the first-order resonance becomes effective (n = 1 in Eq. (8.16)). Repeat thecalculation. How are the resonance structures different from part (a)above?

(c) Consider scattering from the step-down potential, Figure 8.11. Fora particle with E > 0 moving from left to right, what would happenclassically? Quantum mechanically? Assume E = V0 = 1, run theprogram to calculate the transmission and reflection coefficients.

Now, suppose you keep energy fixed (say E = 1), but lower the po-tential step, i.e., increase V0. Make a prediction how the transmission


Figure 8.11: The step potential.

coefficient will change as V0 → ∞. Calculate the transmission coefficient as a function of V0 from 1 to 10 in steps of 0.2. Plot your results. Are they as you predicted? Did the particle fall off the cliff? Explain.

P8.6 Consider laser-driven transitions in the simple harmonic oscillator.For convenience, set the zero-point of the interaction potential at ori-gin, i.e., V (x, t) = eF (t)x. The required matrix elements xmn are

xmn = √(ℏ/(2 m_e ω)) [√m δm,n+1 + √(m+1) δm,n−1],    (8.47)

where ω is the angular frequency of the SHO (see Exercise A:E9.10).Let ω = ωL for resonant transitions. Note that unlike the infinitepotential well, the quantum numbers m,n start from zero for SHO.
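Equation (8.47) yields a tridiagonal matrix. A minimal constructor (ours, in atomic units with ℏ = m_e = 1, and 0-based state labels as the project notes):

```python
import numpy as np

def xmat_sho(N, omega=1.0):
    """SHO dipole matrix from Eq. (8.47); states n = 0, ..., N-1."""
    x = np.zeros((N, N))
    for n in range(N - 1):
        # only the m = n +/- 1 elements survive
        x[n, n+1] = x[n+1, n] = np.sqrt((n + 1)/(2.0*omega))
    return x

print(xmat_sho(4))
```

This matrix replaces vmat() in the coupled-channel code; note that, unlike the box, here only neighboring states are directly coupled, which shapes how the higher states get populated.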

Calculate and plot the transition probabilities. Recommended param-eters are: laser amplitude, F0 = Fc; duration τ = 30; center frequencyωL = 1. Include at least five states in the simulation. Compare theSHO results with the particle in a box (Figure 8.8). What are themain differences between the two cases, especially for the higher ex-cited states? Explain.

P8.7 Even though the coupled channel method is the natural choice forstudying quantum transitions, other hybrid methods can be used aswell. In this project, use the SDLF method for the dynamic evolutionof the system and extract the transition probabilities at the end.

Apply the SDLF method (Program A:8.2) to the particle in the boxinteracting with the laser field. Instead of a Gaussian, initialize ψ(x, 0)to the ground state, u1(x), Eq. (8.39). Change the static potential Vin the function sch eqn() to the interaction potential (8.40). Use the


same parameters as in Figure 8.8. Evolve the system until the laserpulse is over.

Next, extract the occupation probability to state n by projection fromEq. (8.25). Use Simpson’s rule for numerical integration. Plot theresults as a function of time. Compare with the coupled channelresults, Figure 8.8.

What boundary condition did you use in your calculation? Is it ap-propriate for this problem?

P8.8 Study resonant multiphoton transitions of a particle in a box. Firsttest two-photon transitions to the n = 3 state. Use the coupled chan-nel code, Program 8.1, with the same parameters as in Figure 8.10.Reproduce the figure.

(a) Modify Program 8.1 to calculate the transition probabilities as a function of the laser field amplitude F0. Vary F0/Fc from ∼ 0.1 to 10, and automate the process. You may want to turn off animation. Keep the laser duration at τ = 30 and the center frequency set for two-photon 1 → 3 transitions. Plot the final transition probabilities as a function of F0/Fc on a semilog-y scale. What is the threshold F0 at which P3 = 0.01? Discuss the trend of the P3 curve. Does it follow simple scaling laws in a given region?

(b) The 1 → 2 transition is dipole allowed, but are multiphoton excitations still possible if the condition is favorable? Test this scenario by choosing the laser center frequency as ωL = (E2 − E1)/M, M = 2, 3, etc., for M-photon processes. First calculate P2 as a function of F0 for M = 2, then M = 3. Compare the results, and explain the difference.

P8.9 Let us study laser-atom interactions with the coupled channel method. The hydrogenic atomic states are given by ψnlm(~r) = Rnl(r)Ylm(θ, ϕ), where n, l, m are the principal, angular momentum, and magnetic quantum numbers, respectively, Rnl is the radial wave function (product of a Laguerre polynomial and an exponential), and Ylm is the angular wave function (spherical harmonics). The eigenenergy depends on n only, En = −Z²/2n², with Z the nuclear charge.

Choose the laser field along z direction (linear polarization), so the in-teraction is V (~r, t) = eF (t)z. Furthermore, we set the laser center fre-quency to be resonant for 1→ 2 transitions, ωL = E2−E1, so we only


need to consider coupling between n = 1 (1s) and n = 2 (2s, 2p0, 2p±)states. The relevant radial and angular wave functions are

R10 = 2√(Z³) e^(−Zr),                   Y00 = 1/√(4π),

R20 = √(Z³/2) (1 − Zr/2) e^(−Zr/2),     Y10 = √(3/(4π)) cos θ,

R21 = √(Z³/24) Zr e^(−Zr/2),            Y1,±1 = ∓√(3/(8π)) sin θ e^(±iϕ).

More radial wave functions can be generated from SymPy: sympy.physics.hydrogen.R_nl.

(a) By considering only the angular part of the integral, show that the only nonzero matrix element is z1s,2p0 = ∫ ψ*100 z ψ210 d~r. This is the dipole selection rule.⁵ Therefore, we have a realistic two-state system. Let 1s (ψ100) be state 1 and 2p0 (ψ210) be state 2, and evaluate z12.

(b) Simulate Rabi flopping in the hydrogen atom (Z = 1) with thefollowing laser parameters: F0 = 0.1, τ = 600. Plot the occupationprobabilities as a function of time. Read off the Rabi frequency fromthe graph. How does it compare with the theoretical value for amonochromatic laser, Eq. (A:8.47)? What step size did you use?What is appropriate?

(c)∗ Make a time sequence of contour (or surface) plots of the prob-ability density at several instants during a full Rabi cycle. Since theprobability density is three dimensional, the contour plots are slicesthrough 3D space. Due to azimuthal symmetry, we can slice in anyplane containing the z axis. The easiest would be to slice in the x-zplane, i.e., y = 0. You can generate a square grid in the x-z plane.But, you may produce better results if you first generate the grid inr-θ space and then map onto the x-z plane. See this grid generationtechnique in Program A:7.6.

P8.10 Build a SEO program in 2D. In Eq. (8.12), replace ψ(x, t0)→ ψ(x, y, t0),V (x) → V (x, y), and k2 → k2x + k2y . Program the algorithm using2D FFT. Test your program with the case study presented in Fig-ure A:8.12.

5The general dipole selection rule for unpolarized light is ∆l = ±1, ∆m = 0,±1.


8.A Theory of Gaussian integration

We can use the Gaussian-Legendre integration routine (Program A:8.5 or ahigher order n) as is without understanding the beautiful theory behind it.However, the power of this methodology is its ability to change the basispolynomials appropriate to the particular type of integrals so they can bemore efficiently evaluated, e.g., Gauss-Hermite or Gauss-Chebyshev types.Read on if you encounter such a need, or if you are interested in how thiselegant approach works.

We will only discuss the essential steps here. Gaussian integration is still interpolatory in nature, but we interpolate using not just any polynomials; we use orthogonal polynomials. Let us introduce a completely factorized polynomial of degree n defined as

Pn(x) = (x − x1)(x − x2) · · · (x − xn),    (8.48)

where the xk are n real roots, which need not be equidistant, so Pn(xk) = 0, k = 1, 2, ..., n. For now Pn(x) is arbitrary, but later we restrict it to the Legendre polynomials.

We expand the integrand f(x) in terms of Pn(x) at the roots xk as

f(x) ≈ Σ_{k=1}^{n} [Pn(x) / ((x − xk) P′n(xk))] fk,  with P′n = dPn/dx.    (8.49)

We can verify that Eq. (8.49) passes through all data points fk = f(xk), since

lim_{x→xk} Pn(x) / ((x − xk) P′n(xk)) = 1,

a result following from the application of L'Hôpital's rule. Assuming that Eq. (8.49) is a good approximation to f(x), we can carry out the following integration:

∫_a^b f(x) dx ≈ Σ_{k=1}^{n} [fk / P′n(xk)] ∫_a^b Pn(x)/(x − xk) dx.    (8.50)

Define the weights as

wk = [1/P′n(xk)] ∫_a^b Pn(x)/(x − xk) dx,    (8.51)


which is independent of f(x). This reduces Eq. (8.50) to

∫_a^b f(x) dx ≈ Σ_{k=1}^{n} wk f(xk).    (8.52)

It now remains for us to decide on a polynomial Pn(x) so as to determine the abscissas xk (the zeros of Pn) and the weights wk. The most commonly used are the Legendre polynomials in the range [−1, 1]. Given a degree n, its n zeros are symmetric about 0 and well known, and the weights (8.51) can be calculated by setting a = −1, b = 1 to give

wk = [1/P′n(xk)] ∫_{−1}^{1} Pn(x)/(x − xk) dx = 2 / [(1 − xk²)(P′n(xk))²],    (8.53)

where we have used the property of Pn [2]

Pn(x) = [(x − xk) / ((1 − xk²) P′n(xk))] Σ_{m=0}^{n−1} (2m + 1) Pm(xk) Pm(x).    (8.54)

Knowing xk and wk, we can use Eq. (8.52) to find the integral of f(x) over [−1, 1]. Of course, we wish to integrate f(x) over arbitrary limits [a, b]. This can be done via a shift:

Let p = (b + a)/2,  q = (b − a)/2,  x = p + qt,  −1 ≤ t ≤ 1,

∫_a^b f(x) dx = q ∫_{−1}^{1} f(p + qt) dt ≈ q Σ_k wk f(p + q xk).    (8.55)

This is the Gaussian-Legendre formula. The abscissas xk and weights wk are tabulated, so we just need to look them up, or if necessary, we can calculate them (see Program A:8.5). Since the xk are symmetric about zero and wk is the same for ±xk, usually only half, say the positive xk's, are listed.
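In practice the nodes and weights need not be typed in from a table; NumPy computes them. A minimal sketch of Eq. (8.55) (the function name gauss is ours):

```python
import numpy as np

def gauss(f, a, b, n=8):
    """Gauss-Legendre quadrature on [a, b] via the shift of Eq. (8.55)."""
    xk, wk = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    p, q = 0.5*(b + a), 0.5*(b - a)
    return q*np.sum(wk*f(p + q*xk))

print(gauss(np.sin, 0.0, np.pi))    # the exact integral is 2
```

With only n = 8 points the result is accurate to about ten digits, illustrating why Gaussian quadrature is so efficient for smooth integrands: an n-point rule is exact for polynomials up to degree 2n − 1.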

Equation (8.55) is open, meaning it does not evaluate the function at the end points a, b. This is useful for functions that have integrable singularities at the ends. If the singularity is in the middle at c, break up the integral into two, one over [a, c] and another over [c, b]. If the function is only piecewise continuous, we are better off with Simpson's rule (Section A:8.A.2).


8.B Profiling code execution

Sooner or later, we will come across simulations that run too slowly, and we have to optimize the program to gain speed (see Section A:1.3.2). Before we attempt any optimization techniques, we should have a good idea of what parts of the code take the most running time and where the bottleneck is. Sometimes it is easy to guess, but it is also easy to be wrong!

There are profiling tools that take out the guesswork and tell us exactly where we should focus our effort. A profiler tracks program execution, breaks down running times by module (function), and provides other statistics. Python has several profiler libraries, including cProfile and profile. They can be invoked from within a program or externally. We find it easier to profile a program externally by issuing the following from the command line:

$ python -m cProfile --sort=cumulative barrier.py > prof.dat

The above command tells the Python interpreter to run the library module cProfile and sort the results by the cumulative run times. Other useful sorting options are calls for the number of function calls, name for function names, and time for individual run times. The program to be profiled is barrier.py, and the output is written to prof.dat.
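Profiling can also be done from within a program using the same library; the toy workload below is ours, chosen only to produce some statistics to print:

```python
import cProfile
import io
import pstats

def work(n=200000):
    # a deliberately slow pure-Python loop to give the profiler something to see
    s = 0.0
    for i in range(1, n):
        s += 1.0/i
    return s

pr = cProfile.Profile()
pr.enable()
work()
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats('cumulative').print_stats(5)
print(buf.getvalue())
```

This in-program form is handy when only one section of a large simulation needs to be examined, since pr.enable()/pr.disable() can bracket just that section.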

Sample output from profiling our simulation code barrier.py for scat-tering from a potential barrier (Section 8.1) with the split evolution operatormethod is given in Table 8.1.

Table 8.1: Selective profiler output, sorted by cumulative run time.

ncalls   tottime  percall   cumtime  percall  filename:lineno(function)
     1     0.689    0.689   246.430  246.430  barrier.py:1(<module>)
  8682   239.840    0.000   242.635    0.028  fft.py:3(fft_rec)
  4341     1.482    0.000   122.808    0.028  fft.py:17(ifft_rec)

     1     0.183    0.183     1.488    1.488  barrier.py:1(<module>)
  8682     0.018    0.000     0.395    0.000  fftpack.py:44(_raw_fft)
  4341     0.104    0.000     0.322    0.000  fftpack.py:168(ifft)
  4341     0.005    0.000     0.206    0.000  fftpack.py:82(fft)


The first part shows the run times if FFT functions from our own library (Program 5.1) are used. The function fft_rec is called 8682 times, totaling 242.6 seconds of run time. Because the inverse FFT ifft_rec calls fft_rec to perform the core calculations, the total cumulative time for the latter is the net time spent in these two functions. Clearly, the overwhelming amount of time is spent on the FFT computations.

The second part shows the run times when the NumPy versions of the FFT functions ifft and fft are used instead. Both apparently rely on the internal function _raw_fft for their calculations. The net time is 0.53 seconds, so the speed gain of compiled NumPy FFT relative to Python FFT is ∼ 460. This is quite substantial. With NumPy FFT functions, we can tackle problems with large grid sizes.

In other situations, the gain may not be as spectacular, or as easy as switching to equivalent, compiled NumPy code. Sometimes minor tweaks may provide enough speedup (see Section A:1.3.2). If that is not enough, we may have to consider a whole new approach or algorithm, such as Numba (see Program A:5.7). If there are no readily available substitutes, we could write our own code in Fortran and wrap it as a Python-callable module with F2Py, or use C/C++ extensions with Cython or Weave (see Section A:1.3.2, Program 11.3 and Program 11.4).

In any case, we think it is better to start with an inefficient but working program than to have a fast but broken one. In other words, build a program that works correctly, and apply optimization techniques only when necessary, always armed with profiling information. Often, optimization increases the complexity of code logic and reduces clarity and readability. It can be a difficult decision whether to sacrifice clarity for speed, especially when the gain is at the fine-grain level, say 20%–50%. Often, it boils down to personal preference. We prefer clarity over speed at this level, particularly for noninteractive simulations. The code may be slower, but simplicity and clarity are important for maintenance and for others to follow.
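For reference, tables like the one at the start of this section come straight from the standard cProfile and pstats modules. This is a minimal sketch; the function work() is a stand-in for a simulation's main computation, not one of the book's programs:

```python
import cProfile, pstats, io

def work(n=200000):                 # stand-in for the main computation
    return sum(i*i for i in range(n))

pr = cProfile.Profile()
pr.enable()                         # start collecting timing data
work()
pr.disable()                        # stop collecting

out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats('cumulative').print_stats(5)
print(out.getvalue())               # ncalls, tottime, cumtime, ... per function
```

Running a whole script under the profiler from the command line, python -m cProfile barrier.py, produces the same kind of table.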

8.C Coupled channels in real arithmetic

In the coupled channel method (Section 8.2), even if the initial values a_n(0) are real, the coefficients at later times, a_n(t), will generally be complex. Most programming languages have built-in support for complex arithmetic.


In Python, we have no difficulty solving Eq. (8.33) since complex numbers are supported in the same way as real numbers. It is also possible to convert Eq. (8.33) to real arithmetic. Practical reasons for doing so may include increased efficiency, or lack of native support for complex numbers in another programming language.

We illustrate here how to do the calculation with real arithmetic. The idea is to separate the real (R_n) and imaginary (I_n) parts of a_n as in SDLF (Section A:8.2.1),

\[ a_n(t) = R_n(t) + i I_n(t). \tag{8.56} \]

Substituting Eq. (8.56) into (8.33), and setting the respective real and imaginary parts on both sides equal, we obtain two equations involving only real numbers,

\[
\frac{dR_m(t)}{dt} = \frac{1}{\hbar} \sum_n V_{mn}(t) \bigl[ \sin(\omega_{mn}t) R_n(t) + \cos(\omega_{mn}t) I_n(t) \bigr],
\]
\[
\frac{dI_m(t)}{dt} = \frac{1}{\hbar} \sum_n V_{mn}(t) \bigl[ \sin(\omega_{mn}t) I_n(t) - \cos(\omega_{mn}t) R_n(t) \bigr]. \tag{8.57}
\]

In Eq. (8.57) we have used the identity \(e^{i\omega_{mn}t} = \cos(\omega_{mn}t) + i\sin(\omega_{mn}t)\).

To be able to use a standard ODE solver, we need to store R_n and I_n in one contiguous array, say y[]. If 1 ≤ n ≤ N, the array will have 2N elements. We organize the storage as shown in Table 8.2. Accordingly, the mapping is

\[ R_n = y[n], \quad I_n = y[N+n], \quad a_n = y[n] + i\, y[N+n]. \tag{8.58} \]

Table 8.2: The storage scheme of the real and imaginary parts in the array. The first half of the array y[] stores the real part R_n, and the second half stores the imaginary part I_n.

Array y[]   1    2    ...  N    N+1  N+2  ...  2N
Mapping     R_1  R_2  ...  R_N  I_1  I_2  ...  I_N


Putting it all together, the CC method with real arithmetic and with m = 1, 2, ..., 2N, is

\[
\frac{dy[m]}{dt} =
\begin{cases}
\dfrac{1}{\hbar} \displaystyle\sum_n V_{mn}(t) \bigl( \sin(\omega_{mn}t)\, y[n] + \cos(\omega_{mn}t)\, y[N+n] \bigr), & \text{for } 1 \le m \le N, \\[2ex]
\dfrac{1}{\hbar} \displaystyle\sum_n V_{kn}(t) \bigl( \sin(\omega_{kn}t)\, y[N+n] - \cos(\omega_{kn}t)\, y[n] \bigr), & \text{for } N+1 \le m \le 2N,\ k = m-N.
\end{cases}
\tag{8.59}
\]

Note the variable k in Eq. (8.59) is to ensure that when m > N, ω_kn is given by ω_{m−N,n}, because there are in reality only N physical states.

Equation (8.59) is a set of 2N ODEs that can be solved with an ODE solver such as RK4. At any time during the laser period 0 < t ≤ τ, the coefficients a_n(t) can be obtained through Eq. (8.58).
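The right-hand side of Eq. (8.59) can be sketched as a derivative function for a real-valued ODE solver. This is an illustration, not one of the book's programs; the names cc_real, vmn, and omega are our own, and the storage is 0-based (y[:N] holds R_n, y[N:] holds I_n) rather than the 1-based indexing of Table 8.2:

```python
import numpy as np

def cc_real(y, t, vmn, omega, hbar=1.0):
    """Right-hand side of Eq. (8.59): y[:N] holds the real parts R_n,
    y[N:] the imaginary parts I_n (0-based storage)."""
    N = len(y)//2
    R, I = y[:N], y[N:]
    Vs = vmn*np.sin(omega*t)            # V_mn sin(omega_mn t), elementwise
    Vc = vmn*np.cos(omega*t)            # V_mn cos(omega_mn t)
    dR = (Vs.dot(R) + Vc.dot(I))/hbar   # matrix-vector product = sum over n
    dI = (Vs.dot(I) - Vc.dot(R))/hbar
    return np.concatenate((dR, dI))
```

A quick consistency check is to compare the output with the complex form, da_m/dt = −(i/ħ) Σ_n V_mn e^{iω_mn t} a_n, for random inputs.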

8.D Program listings and descriptions

Program listing 8.1: Quantum transition in a laser field (coupled.py)

import matplotlib.pyplot as plt
import numpy as np, visual as vp, ode, vpm

def psi(bn, a, x):                      # full wave function
    wf = np.zeros(len(x), dtype=complex)
    for n in range(1, len(bn)+1):
        wf += bn[n-1]*np.sin(n*np.pi*x/a)
    return wf*2.0/a

def flaser(t):                          # laser field
    return F0*(np.sin(np.pi*t/tau))**2 * np.cos(omegal*t)

def vmat(a, N):                         # generate <m|V|n>
    vmn = np.zeros((N, N))
    c = -8.0*a/np.pi**2
    for m in range(N):
        for n in range(m+1, N, 2):      # every other state is odd
            vmn[m, n] = c*(m+1)*(n+1)/((m-n)*(m+n+2))**2
            vmn[n, m] = vmn[m, n]       # symmetry
    return vmn

def cc_box(bn, t):                      # CC for particle in box
    return -1j*(En*bn + flaser(t)*np.dot(vmn, bn))

# initialize
a, N = 4.0, 5                           # box width, tot number of states
E0 = np.pi**2/(2*a*a)                   # ground state energy
En = E0*np.arange(1, N+1)**2            # eigenenergies
F0, tau = E0/a, 30.0                    # laser field amplitude, duration
omegal = En[1]-En[0]                    # center freq = 1 -> 2 transition
h = 0.1/max(En)                         # step size
nsteps, M = int(tau/h), 100             # time and space grids
x, z = np.linspace(0, a, M), [0]*M
t, ta, cycle, mag = 0, [], 20, 4
bn = np.zeros(N, dtype=complex)         # state amplitudes
bn[0], pnt = 1.0, []                    # init ground state, pn(t)
vmn = vmat(a, N)                        # get V matrix

# animation setup
scene = vp.display(background=(.2,.5,1), center=(a/2,0,0))
wfr = vpm.line(x, z, z, vp.color.red, .02)      # real part
wfi = vpm.line(x, z, z, vp.color.blue, .02)     # imag part
wf2 = vpm.line(x, z, z, vp.color.yellow, .02)   # |psi|^2 curve
info = vp.label(pos=(a/2, a/2.5, 0), box=False, height=20)

for i in range(nsteps+1):               # main simulation loop
    vpm.wait(scene), vp.rate(20*cycle)
    if (i % cycle == 0 or i == nsteps):
        ta.append(t/tau)
        pnt.append(np.abs(bn)**2)
        wf = psi(bn, a, x)              # animation
        wfr.move(x, wf.real, z)
        wfi.move(x, wf.imag, z)
        wf2.move(x, np.abs(wf)**2*mag, z)
        info.text = '%3.2f' %(t/tau)
    bn = ode.RK4(cc_box, bn, t, h)
    t = t + h

pnt = np.array(pnt)
style = ['-', '--', ':', '-.', ':.']
for n in range(N):                      # plot results
    plt.plot(ta, pnt[:, n], style[n%5], label=n+1)
plt.legend(loc=(.85, .55))
plt.ylim(1.e-10, 2.0), plt.semilogy()
plt.xlabel(r'$t/\tau$'), plt.ylabel(r'$P_n$')
plt.show()

The initialization block sets the parameter values, including the well width, eigenenergies, laser parameters, and grids. The step size h is set according to the smallest period (highest frequency) of the eigenstates. The list pnt will hold the occupation probabilities as a function of time for the N states. It is converted to an ndarray for easy manipulation after the loop. Following animation setup, the main loop iterates through the time steps for the duration of the laser pulse, integrating the coupled states via cc_box() (discussed in the text). The wave function is computed in psi() from Eq. (8.36) and animated periodically. After exiting the loop, the occupation probabilities are plotted on a semilog-y scale so that probabilities differing by orders of magnitude remain visible. Each curve is displayed with one of five different styles and labeled by the state number. The legend entries are automatically correlated with the specified styles.


Chapter 9

Time-independent quantum mechanics

First we discuss energy level statistics in 2D quantum systems. It will help us study quantum chaos, i.e., quantum mechanical signatures of classically chaotic systems such as the stadium billiard.

We use atomic units throughout.

9.1 Energy level statistics

In Section A:9.6 we discussed the low-lying states of quantum dots, which can be accurately portrayed if the basis is sufficiently large. We have seen that, within a finite basis set, higher states will eventually deviate from the actual states. In the absence of exact results, we can ask the question: how many of these states can be trusted?

When the number of states involved is large, some statistical measures are useful. One such measure is the quantity N(E), called the spectral staircase, which gives the number of states below a certain energy E,

\[ N(E) = \sum_n \theta(E - E_n), \tag{9.1} \]


where θ(x) is the step function defined as

\[
\theta(x) = \begin{cases} 1, & x > 0, \\ 0, & x < 0, \\ \tfrac{1}{2}, & x = 0. \end{cases} \tag{9.2}
\]

Figure 9.1: The energy level distribution of the hexagon quantum dot. The staircase curve is from actual data and the dotted line from the Weyl formula. The right panel is a zoomed-in portion of the figure on a linear scale.

A semiclassical approximation for N(E) is given by the Weyl formula (see Ch. 16 of Ref. [18]),

\[ N(E) = \frac{1}{4\pi\hbar^2}\left(2m_e A E - \hbar P\sqrt{2m_e E}\,\right) + N_0, \tag{9.3} \]

where A and P are the area and perimeter of the well, respectively, and N_0 a constant offset.

For large energies, the first term in Eq. (9.3) is dominant. We can obtain it from phase-space state counting used in many situations. We use Wigner's idea of quanta to divide the classical phase space into basic blocks of size equal to Planck's constant 2πħ, the volume occupied by a state. The infinitesimal volume of phase space in f dimensions is \(d^f\!p\, d^f\!r\), and the total number of states is to leading order

\[ N(E) \simeq \frac{1}{(2\pi\hbar)^f} \int d^f\!p\, d^f\!r. \tag{9.4} \]


In one dimension, Eq. (9.4) yields \(pL/2\pi\hbar\), and in two dimensions it is \(N(E) \sim \pi p^2 A/(2\pi\hbar)^2\). Upon replacing \(p^2 = 2m_e E\), we arrive at the first term in Eq. (9.3). The other two terms cannot be obtained this way. In the following, we will use Eq. (9.3) but drop the constant offset N_0.

We show the energy level distribution of the hexagon quantum dot (Section A:9.6.3) in Figure 9.1. The staircase curve is calculated from Eq. (9.1) using numerical data and plotted with

plt.step(E, range(len(E)))

where E contains the eigenenergies. The calculation is done using 9600 elements, yielding a total of 4680 states. The left panel in Figure 9.1 shows that the numerical data is well described by the Weyl formula up to E ∼ 500, or about 300 states. We are not concerned about the discrepancy at the very low-E end since the Weyl formula does not apply there. As long as E < 500, the agreement remains good (see the blowup figure), and we can trust these states.

For E > 500, the number of states increases linearly in the Weyl formula, while the numerical result plateaus off. As discussed elsewhere (Section A:9.6.2), in trying to mimic the infinite dimensions of the Hilbert space within a finite representation, energy must increase at a faster rate. We need to treat these higher states with care.
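The Weyl estimate is simple to evaluate numerically. This is a sketch in atomic units (m_e = ħ = 1) with the offset N_0 dropped; the helper name weyl_N is our own:

```python
import numpy as np

def weyl_N(E, A, P, me=1.0, hbar=1.0):
    """Weyl estimate, Eq. (9.3) without N0, of the number of states
    below energy E for a 2D billiard of area A and perimeter P."""
    E = np.asarray(E, dtype=float)
    return (2*me*A*E - hbar*P*np.sqrt(2*me*E))/(4*np.pi*hbar**2)
```

For the unit circle billiard, A = π and P = 2π; plotting weyl_N(E, np.pi, 2*np.pi) on top of the staircase from plt.step(E, range(len(E))) gives a comparison in the spirit of Figure 9.1.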

9.2 Quantum chaos

When we speak of quantum chaos, it is perhaps not the most accurate term to use. After all, we defined chaos in Chapter A:5 in classical mechanics in terms of extreme sensitivity of trajectories to nearby initial conditions. Quantum mechanically, the notion of trajectories is replaced by wave functions. The latter can even exhibit periodic revival, as we have seen in Section A:8.5. Nonetheless, the behavior of quantum systems can be different depending on whether their classical behavior is regular or chaotic. By quantum chaos, we mean quantum behavior of classically chaotic systems.

9.2.1 The quantum stadium billiard

We encountered the classical stadium billiard in Section 5.1. It was shown that it is chaotic if the semicircles at the ends have a finite separation. We will discuss the quantum behavior of this system.

Figure 9.2: Sample meshes for the circle and stadium billiards.

As shown in Figure 5.3, the stadium billiard consists of a rectangle sandwiched between two semicircles at the ends. To treat it as a quantum dot with FEM, we first set up the meshes shown in Figure 9.2. The interior is made of right triangles, while the perimeter area is made of irregular triangles. This has to do with the curved boundaries and the way we generated the mesh.

We started with an octagon whose incircle is a unit circle, and stretched the two halves apart a distance d to form an elongated octagon stadium. Right triangular meshes are generated to fill the stadium. Finally, the nodes outside the unit circle are scaled so they fall on the circle. As a result of scaling, the final mesh does not preserve the full rotational symmetry in the case of the circular stadium (Figure 9.2, left). For instance, it is symmetric under rotations of 90° but not 45°.

We list in Table 9.1 the first several eigenenergies for the circle (exact and numerical) and stadium billiards. The hexagon results are also included for comparison. Because the energies are roughly inversely proportional to the area of a given system, the eigenenergies of the stadium and hexagon billiards have been scaled by the ratio of their area to the area of the unit circle, A/π. As we can see from Table 9.1, this has the effect that the energies for a given state are comparable to each other regardless of the system.

Table 9.1: The lowest 12 eigenenergies of select quantum dots. The energies of the stadium (d = 1) and hexagon have been multiplied by the ratio of their area to the area of the circle, A/π.

State  Circle (Exact)  Circle     Stadium    Hexagon
1      2.89159         2.89427    2.95922    2.95953
2      7.34099         7.36328    6.71831    7.50267
3      7.34099         7.36328    8.31376    7.50267
4      13.18731        13.25257   12.44621   13.43571
5      13.18731        13.27413   13.16746   13.43571
6      15.23563        15.33934   17.05416   15.52522
7      20.35323        20.53756   19.68938   19.73113
8      20.35323        20.53756   20.06371   21.81038
9      24.60923        24.88183   23.65691   24.91156
10     24.60923        24.88183   28.32695   24.91156
11     28.79147        29.15916   28.35693   29.08222
12     28.79147        29.16991   29.03512   29.08222

In the calculation we used 2576 and 3896 elements for the circle and stadium billiards, respectively, resulting in 1209 and 1845 total states for the systems. The mesh sizes are similar to each other, and the extra elements for the stadium are due to its larger area. Compared with the exact results (Exercise E9.5), the error of the numerical results is on the order of 10^-3 to 10^-2 for the states shown. The circle billiard has degenerate states (as does the hexagon), but there is no degeneracy in the stadium billiard due to the lack of symmetry.

For the circle billiard, the numerical values for the first and several other pairs of degenerate states (2 and 3, 7 and 8, etc.) are identical to each other, but the other pairs (4 and 5, 11 and 12) are no longer so, in contrast to the hexagon, where all degenerate pairs are perfectly equal. As discussed earlier, our mesh for the circle billiard does not preserve the full symmetry of rotation, except for 90° rotations. The first pairs of degenerate states are rotated 90° relative to each other, so their eigenenergies are exactly equal. We have not graphed the wave functions of the circle billiard, but as you may have guessed, they are very similar to those of the hexagon (Figure A:9.19). The second pair of degenerate states are rotated 45° off each other, and their energies are not identical because this rotational symmetry is absent in our mesh. We can make a mesh that preserves this symmetry; see Project A:P9.9.

The wave functions of the stadium billiard are shown in Figure 9.3. We see the general pattern of intertwined peaks and valleys as the state number increases. Comparing them with the wave functions of the hexagon system (Figure A:9.19), we recognize the first three states are almost mirror images of each other, having the same number of extrema. The only difference is that the stadium boundary deforms the shapes of the peaks and valleys.

Figure 9.3: The wave functions of the stadium billiard (r = d = 1).

The big difference occurs at state 4, where the stadium has three extrema, a pattern altogether missing in the hexagon. It cannot happen in the hexagon because its symmetry does not allow such a pattern to exist in two degenerate states. As a compromise, this pattern is skipped in favor of four extrema, making two degenerate states possible. We can also observe a couple of interesting features in Figure 9.3, including one where a horizontally placed pattern has a lower energy than a vertically placed pattern with the same number of extrema, e.g., states 2 and 3, or 4 and 8. However, if this is true, what about states 5 and 6? We leave the answer and further exploration of other features to the interested reader.

9.2.2 Signature of quantum chaos

Energy level distribution

We can examine the signature of quantum chaos in the energy level distribution and in the wave function. Let us begin with one of the most direct signatures, the nearest neighbor spacing (NNS).

When there are many energy levels to consider, we rely on energy level statistics as an important analysis tool. For instance, the Weyl formula (9.3) (Figure 9.1) is useful for judging what portion of the spectrum is trustworthy. It turns out that it is also useful in NNS analysis, as we will see below.

The NNS analysis tells us how the gaps between adjacent energy levels are distributed. Suppose there is an energy level at E. Let P(s) ds be the probability of finding the next energy level from E + s to E + s + ds, where 0 ≤ s < ∞ is the NNS. Then, P(s) ds will depend on two factors: the probability that there is yet no energy level in [0, s], and the conditional probability that there is one energy level in ds but none in [0, s]. Specifically, we can express this statement as

\[ P(s)\, ds = \left(1 - \int_0^s P(t)\, dt\right) Q(s)\, ds. \tag{9.5} \]

The term in the parentheses is equal to \(\int_s^\infty P(t)\, dt\), i.e., the probability that an energy level exists in the range [s, ∞], assuming P(s) is normalized. The other term, Q(s) ds, is the conditional probability that a level has not been found in [0, s] but will be in [s, s + ds].

The solution to Eq. (9.5) is

\[ P(s) = C\, Q(s) \exp\left(-\int_0^s Q(t)\, dt\right), \tag{9.6} \]


where C is a normalization constant (see Exercise E9.6).

The specific form of the NNS distribution P(s) will depend on the choice of Q(s). If we assume the simplest constant function, Q(s) = λ, we obtain the Poisson distribution,

\[ P(s) = \frac{1}{\bar{s}}\, e^{-s/\bar{s}}. \quad \text{(Poisson)} \tag{9.7} \]

If we assume a linear function, Q(s) = λs, we have the Wigner distribution,

\[ P(s) = \frac{\pi s}{2\bar{s}^2}\, e^{-\pi s^2/4\bar{s}^2}. \quad \text{(Wigner)} \tag{9.8} \]

Both distributions are normalized, and \(\bar{s} = \int s P(s)\, ds\) is the average spacing. The biggest difference between them occurs at s = 0, where the Poisson distribution has its maximum of \(1/\bar{s}\) and the Wigner distribution is zero. So the Poisson distribution describes energy level attraction and the Wigner distribution describes energy level repulsion. However, an actual distribution is not necessarily pure, and there are other possible distributions [5].¹
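Both distributions are easy to code, and their normalization and mean spacing can be checked by quadrature. A small sketch with our own helper names (a plain trapezoid rule is used so only NumPy is required):

```python
import numpy as np

def poisson(s, sb=1.0):
    """Poisson NNS distribution, Eq. (9.7); sb is the mean spacing."""
    return np.exp(-s/sb)/sb

def wigner(s, sb=1.0):
    """Wigner NNS distribution, Eq. (9.8)."""
    return np.pi*s/(2*sb*sb)*np.exp(-np.pi*s*s/(4*sb*sb))

def quad(f, s):
    """Simple trapezoid rule for samples f on a grid s."""
    return np.sum((f[1:] + f[:-1])*np.diff(s))/2

s = np.linspace(0.0, 20.0, 200001)      # the tail beyond s = 20 is negligible
for P in (poisson, wigner):
    print(P.__name__, quad(P(s), s), quad(s*P(s), s))   # both near 1
```

Both printed integrals come out equal to 1 to within quadrature error, verifying normalization and unit mean spacing (this is Exercise E9.6(b) done numerically).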

Spectrum unfolding

We have calculated the energy levels of quantum dots, so we can study the NNS distributions. But first, we need to “unfold” the spectrum in order to compare with the distributions given by Eqs. (9.7) and (9.8). The reason for unfolding is that, according to random matrix theory [5], the energy level distribution has small fluctuations. In order to see these fluctuations, we need to calculate NNS distributions on a scale comparable to the mean spacing s̄. Without unfolding, the raw spectrum has wildly different gaps, and is not suitable for NNS analysis.

We illustrate graphically the process of spectrum unfolding in Figure 9.4. The full staircase curves for the circle and stadium billiards are shown in the left panel. Also shown are the results of the Weyl formula (9.3) as smooth dotted curves. Judging from the agreement between the two curves, we infer that we can trust up to 150–200 energy levels with confidence. We will use 150 in the following.

¹Some general properties can be studied without specific knowledge of the Hamiltonian, assuming only certain symmetries and forms of the Hamiltonian matrix. In the random matrix theory, for example, one assumes random matrix elements.


Figure 9.4: Spectrum unfolding for the circle and stadium billiards (r = 1).

The right panel in Figure 9.4 shows the unfolding process on a magnified scale. First we locate the actual energy E on the Weyl curve; these are the open symbols. Next, we find the unfolded energy by projecting to the vertical axis, i.e., the unfolded energy is N(E); these are the filled symbols. In the examples shown, there are about ∼ 35 unfolded “energies” each, having a mean NNS value s̄ ∼ 1, despite the unequal actual energy ranges. By unfolding, we effectively transform energy levels at different scales to a comparable scale with unity average spacing.
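Once a smooth level count is available, the unfolding step itself is a one-liner. A sketch with illustrative data (not the book's billiard spectra): the raw levels grow as E_n = n²/2, so the matching smooth count is N(E) = √(2E):

```python
import numpy as np

def unfold(E, smooth_N):
    """Map raw energy levels E through a smooth level count smooth_N(E),
    so the unfolded spectrum has approximately unit mean spacing."""
    return smooth_N(np.asarray(E, dtype=float))

levels = 0.5*np.arange(1, 151)**2                   # illustrative raw spectrum
unfolded = unfold(levels, lambda E: np.sqrt(2*E))   # smooth N(E) for E = n^2/2
print(np.diff(unfolded).mean())                     # mean spacing ~ 1
```

In the billiard calculations, smooth_N would be the Weyl formula (9.3) with the appropriate area and perimeter.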


Figure 9.5: Nearest neighbor spacing of the stadium billiard (r = 1). The panels show the circle and the stadium with d = 0.5, 1, and 2; the Poisson and Wigner distributions are plotted for comparison.

Nearest neighbor spacing distributions

Once we have unfolded the spectra, we can plot the NNS distributions as histograms. Figure 9.5 displays the results obtained with Program 9.1 for the stadium billiards. For comparison, the Poisson and Wigner distributions for s̄ = 1 are also graphed.

The total number of states ranged from ∼ 1200 to 2500. We use the lowest 150 states in each case for the normalized histograms. The histogram for the circle billiard peaks at s = 0, closely matching the Poisson distribution. As the value of d is increased, the maximum for the stadium billiard starts to move to higher s, though it is still at s = 0 for d = 0.5. For larger d = 1 and 2, however, the maximum has moved to s > 0, and the distributions more closely resemble the Wigner distribution. They are not pure Wigner distributions for sure, but they are certainly not Poisson. If we assume each distribution is a linear superposition of both distributions, it is clear that as d increases, the weight of the Poisson distribution decreases, and the weight of the Wigner distribution increases.
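The histograms themselves take only a few lines once the unfolded levels are in hand. A sketch (our own helper, not Program 9.1):

```python
import numpy as np

def nns_hist(unfolded, bins=20, smax=4.0):
    """Normalized nearest-neighbor-spacing histogram of an unfolded
    spectrum; returns bin centers and probability densities."""
    s = np.diff(np.sort(unfolded))                 # spacings between neighbors
    dens, edges = np.histogram(s, bins=bins, range=(0.0, smax), density=True)
    return (edges[:-1] + edges[1:])/2, dens
```

Plotting dens against the bin centers with plt.bar, together with the Poisson and Wigner curves, gives plots in the spirit of Figure 9.5.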

From our discussion in Section 5.1.2, we know that the classical motion is regular in the circle billiard, but chaotic in the stadium billiard (d ≠ 0). Furthermore, the chaoticity increases as d increases (to a limit). Figure 9.5 shows that regular motion (circle billiard) corresponds to the Poisson NNS distribution. The increased Wigner component in the histograms in Figure 9.5 confirms the correlation between classical chaoticity and the Wigner NNS distribution. Though the stadium billiard histograms do not have a zero value at s = 0, other quantum dots with more irregular shapes can achieve P(0) ∼ 0, giving a purer Wigner distribution [22] (also see Project P9.5 and Project P9.6).

Scar in the wave function

We can also find signatures of quantum chaos in the wave function. For higher states, we can sometimes identify classical trajectories from pockets of highly localized density, or scars, in the wave function.

Figure 9.6 shows the wave functions of several higher states of the stadium billiard (d = 1). The left panel shows a few complex but interesting patterns we have come to expect from quantum dots. They do not have clearly identifiable classical trajectories. However, the right panel shows patterns where classical trajectories can be easily identified. They are shown as white lines. The first two trajectories (states 58 and 145) describe a particle bouncing up and down, and left and right. The third trajectory is like a bow tie. The scarring in state 58 is less localized, but more localized vertical scars, including multiple bounces, can also be found.

These classical trajectories are unstable periodic orbits [20]. Nevertheless, it is remarkable that the wave function “knows” about them, and forms scars around them. The NNS distribution lets us characterize quantum chaos statistically, while the scarring of the wave function enables us to examine it individually.


Figure 9.6: The higher wave functions of the stadium billiard (d = 1).

9.3 Exercises and Projects

Exercises

E9.1 The analytic wave function for hydrogen is

\[ u_{nl} = C\, r^{l+1} L_{n-l-1}^{2l+1}(2r/na_0)\, e^{-r/na_0}, \]


where \(L_{n-l-1}^{2l+1}(x)\) is the Laguerre polynomial, C a normalization constant, and a_0 the Bohr radius.

Use the special value \(L_0^k(x) = 1\) to show that for circular states the probability density peaks at \(r = n^2 a_0\).

E9.2 Because the atomic orbital size scales like n², it becomes increasingly inefficient to integrate the radial Schrödinger equation on a linear grid. In addition, the wave function changes most rapidly near small r (Figure A:9.12). The linear scale places many grid points at large r, which is slow and wasteful.

A simple but effective solution is to use the logarithmic scale. Such a scale stretches the small-r region and compresses the large-r region. Let \(x = \ln(r/a_0)\). Rewrite Eq. (A:9.39) in terms of the new variable x.

Note that the new equation contains the first and the second derivatives. The first derivative prevents the straightforward application of Numerov's method.² However, another substitution gets rid of the first derivative. Let \(w(x) = u(x)/\sqrt{r}\). Show that in terms of w and x, the radial equation satisfies the standard Numerov form \(w'' + f(x)w = 0\), with

\[ f(x) = \frac{2m_e r^2}{\hbar^2}\left(E - V(r) - \frac{(l+\frac{1}{2})^2\hbar^2}{2m_e r^2}\right). \]

Note that in the centrifugal potential, the term l(l + 1) has been replaced by \((l+\frac{1}{2})^2\).

E9.3 Obtain the matrix representation in the 1D FEM basis, \(O_{ij} = \int \phi_i O \phi_j\, dx\), where O is one of the following operators: position x, position squared x², and momentum p.

E9.4 Evaluate the FEM overlap matrix in Eq. (A:9.48) by analytically integrating \(B^e_{ij} = \int_{S_e} \phi^e_i \phi^e_j\, dx\, dy\).

(a) Show that for an isosceles right triangle of base length h,

\[ B^e_{ii} = \frac{h^2}{12}, \quad \text{and} \quad B^e_{ij} = \frac{h^2}{24} \ (i \neq j); \]

²There are techniques to generalize the standard Numerov method so as to allow the presence of first derivatives. We will not pursue them here.


and for an equilateral triangle of side length h,

\[ B^e_{ii} = \frac{\sqrt{3}}{24} h^2, \quad \text{and} \quad B^e_{ij} = \frac{\sqrt{3}}{48} h^2 \ (i \neq j). \]

Explain why the diagonal and off-diagonal values for the isosceles right triangle are the same regardless of nodes.

(b)∗ Derive the FEM overlap matrix, Eq. (A:9.49). Verify that it gives correct results for isosceles triangles.

E9.5 Let \(z_{mn}\) be the nth zero of the Bessel function \(J_m(z)\). The eigenenergies of a circular well of radius a are given by \(E_{mn} = \hbar^2 z_{mn}^2/(2m_e a^2)\), with m as the angular momentum quantum number.

(a) The SciPy function jn_zeros(m, nt) returns the first nt zeros of the Bessel function \(J_m(z)\). Use this function to find the first 12 eigenenergies of a circular well with radius \(a = \frac{\sqrt{3}}{2}\) and 1, respectively, for m = 0. Compare with the eigenenergies of the hexagon well.

(b) Prove the analytic result for \(E_{mn}\).
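As a pointer for part (a), the SciPy call pattern looks like this (a sketch in atomic units, ħ = m_e = 1, for the unit circle; completing the comparison is left to the exercise):

```python
from scipy.special import jn_zeros   # zeros of the Bessel function J_m

a = 1.0                              # well radius
z = jn_zeros(0, 3)                   # first three zeros of J_0
E = z**2/(2*a*a)                     # E_mn = z_mn^2/(2 m_e a^2)
print(E)                             # lowest m = 0 levels; E[0] ~ 2.8916
```

The lowest value reproduces the "Circle (Exact)" entry for state 1 in Table 9.1.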

E9.6 (a) Derive the NNS solution (9.6) from (9.5). Let P(s) = Q(s)F(s), and solve for F(s) by differentiating both sides with respect to s.

(b) Verify that the Poisson and Wigner distributions (9.7) and (9.8) are normalized, and that the average spacing is s̄ in each case.

E9.7 Plot the NNS distribution of the hexagon billiard. Is the system classically regular or chaotic?

E9.8 Use the supplied mesh data to compute the energy levels of the stadium billiard for d = 3, 4, and 5. Compare them with the Weyl formula, and determine the range of trustable states.

Predict the NNS distributions and sketch them. Plot the actual NNS distributions. Discuss your results against your predictions and Figure 9.5. Were they as expected? Explain.

Projects

P9.1 We will investigate the properties of the δ molecule below.


(a) Reproduce the energy-level diagram in Figure A:9.7 from Eq. (A:9.25) for the symmetric Dirac δ molecule. Use the Lambert W function given in Program A:3.4; make sure to use only branch 0, and check that the returned value is not None, which is caused by invalid input.

(b) Study the δ molecule using FEM. Modify the program from Project A:P9.6 or directly from Program A:9.3 such that the potential matrix has two nonzero elements at the nodes ±a/2. Choose α = 1, and vary a between 0 and 2. Verify the existence of the critical distance, a_c, and that your energies are correct.

(c) Plot the wave functions at a = 0.8a_c (one state) and 1.2a_c (two states). Based on their shape, make a prediction about the shape of the momentum wave functions, φ(k). Sketch them. Modify the program to calculate the momentum wave functions with FFT. Plot the momentum distributions |φ(k)|². Discuss and compare your results and predictions.

(d) Calculate the expectation values of the kinetic and potential energies at a = 1.2ac. Compare and comment on your results for the two states. These results are exact within FEM.

(e)∗ Approximate the above results using the LCAO method. First obtain the single δ-atom wave function; then form the LCAO states u± from Eq. (A:9.7), making sure to cut them off at the boundaries and to properly normalize them; and finally compute the expectation values from Eq. (A:9.19) using u±. Discuss the accuracy of this approach. Under what condition will LCAO be more accurate? Test your hypothesis.

(f) Study the asymmetric potential, e.g., choose α = 1, β = 2. Draw an energy-level diagram. Calculate energies from FEM and from Eq. (A:9.24). Compare the results. Plot the wave functions at 1.2ac. How are they different from the symmetric case?

(g)∗ Study multi-δ molecules. Change your program so multiple δ centers can be accounted for in your code. Compute the energy-level diagrams for 4-δ atoms (α = 1) separated by a = 1/2, 1, and 2. Compare with the diagram in Figure A:9.4, and with analytic solutions for the Dirac comb.

P9.2 Apply the BEM method to study systems in half-open spaces.


130 Chapter 9. Time-independent quantum mechanics

(a) Consider the hard wall plus the linear potential shown in Figure A:9.9. First, use Program A:9.4 as is, but delete from the kinetic and potential matrices the rows and columns corresponding to the even states before diagonalization. Code wise, this can be accomplished via the np.delete() function (see Program A:7.3) as

even = range(0, N, 2)            # even indices
Tm = np.delete(Tm, even, 0)      # remove rows
Tm = np.delete(Tm, even, 1)      # remove columns

Do the same for the potential matrix. Put these statements right after the double loop, before calling eigsh(). Analytically, the kinetic energy matrix can be obtained from Eq. (A:9.36) by removing the rows and columns due to even states: 1, 3, 5, etc. After removal, the new matrix is tridiagonal,

           ⎡  3    −√6                ⎤
T = (ħω/4) ⎢ −√6     7    −√20        ⎥    (SHO basis, half space)   (9.9)
           ⎢       −√20    11    ⋱    ⎥
           ⎣               ⋱      ⋱   ⎦

Print the first few rows and columns of the kinetic energy matrix to make sure that they agree with Eq. (9.9).

You should obtain energies corresponding to n = 2, 4, 6 shown in Table A:9.2. Confirm that your results agree for more eigenenergies, say 10, with the analytic values determined by the zeros of the Airy function, Eq. (A:9.63).

Why did the code work? What happened to the normalization constant?

(b) The above approach is wasteful. To make the code more efficient, modify the program to generate the matrices directly, skipping the even states in the double loop. Verify that it works correctly. As a further check, apply the program to calculate the energies for the SHO in half space, V(0) = ∞, V(x) = x²/2 for x > 0.

Increase the number of basis states N to 40, 80, etc. For higher N values, you may encounter difficulties with the SciPy function eval_hermite(n, y),



which at present works to about n ∼ 30. For n > 30, use the recurrence formula (A:9.34) to write your own Hermite function.
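A minimal sketch of such a replacement, using the standard physicists' recurrence H₀ = 1, H₁ = 2y, Hₙ₊₁(y) = 2y Hₙ(y) − 2n Hₙ₋₁(y); the function name hermite below is our own, not SciPy's:

```python
def hermite(n, y):
    """Physicists' Hermite polynomial H_n(y) by upward recurrence,
    adequate for the moderate n used in the basis expansion."""
    h0, h1 = 1.0, 2.0*y
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0*y*h1 - 2.0*k*h0   # H_{k+1} = 2y H_k - 2k H_{k-1}
    return h1

print(hermite(3, 1.0))   # H_3(y) = 8y^3 - 12y, so -4.0 at y = 1
```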

At higher n, we need to be careful about numerical integration because the basis functions become more oscillatory and extend outward. Experiment with the integration limit in the evaluation of the potential matrix, e.g., increasing or decreasing it, breaking it into two parts, etc. You should observe that the results can be wrong unless these integrals have converged.

P9.3 Study the energy levels of bound states in the Morse potential, Eq. (A:9.37). (a) If Project P9.2 has been assigned already, skip this part and use the BEM program developed there. Otherwise, follow the instructions given there to write and verify a BEM program using the SHO basis for half-open spaces.

(b) Apply the code to calculate the eigenenergies in the Morse potential. Use the following parameters valid for the H2 molecule: V0 = 0.174, r0 = 1.4, α = 1.44.

Two other parameters for the basis set must be adjusted as well. The first is the mass me in Eq. (A:9.33), which should be changed to the reduced mass of the nuclei. For H2, the nuclei are two protons with mass Mp = 1836 me. Replace the electron mass by the reduced mass, me → Mp/2 = 918 me.

The second parameter is ω, which should be appropriate to the vibrational level spacing of molecules. Argue, based on the energy-level diagram in Figure A:9.10, that ħω should be on the order of a fraction of an eV. Convert it to atomic units.

Set N = 40, and vary ω from 0.001 to 0.005. For each ω, calculate the bound-state energies. Compare with the analytic results [25]

En = −V0 (1 − αħ(n + 1/2)/(√(2µV0) r0))²,

where µ is the reduced mass, and n = 0, 1, 2, ..., nmax, until the value inside the parentheses turns negative (nmax = 17 in this case). Compute the relative error, and note which ω values give the best accuracy for the lower states and for the higher states. They are not necessarily the same.
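The analytic formula is easy to tabulate for comparison. A sketch in atomic units (ħ = 1) with the H2 parameters quoted above; the helper name morse_levels is our own:

```python
import math

# H2 parameters in atomic units (from the text); mu is the reduced mass
V0, r0, alpha, mu, hbar = 0.174, 1.4, 1.44, 918.0, 1.0

def morse_levels(V0, r0, alpha, mu):
    """Analytic Morse eigenenergies; stop once the factor in
    parentheses turns negative (no further bound states)."""
    levels, n = [], 0
    while True:
        f = 1.0 - alpha*hbar*(n + 0.5)/(math.sqrt(2.0*mu*V0)*r0)
        if f < 0.0:
            break
        levels.append(-V0*f*f)
        n += 1
    return levels

E = morse_levels(V0, r0, alpha, mu)
print(len(E), round(E[0], 5))   # number of bound levels and E_0 (a.u.)
```

With these numbers the parenthesis first turns negative at n = 17, giving 17 bound levels (n = 0 to 16).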



(c) Increase N, say to 80, 100, and try to obtain all the bound-state energies. Make sure you pay attention to the convergence of numerical integration per the discussion in Project P9.2. Generally, use a higher ω for the full spectrum. Plot your results as presented in Figure A:9.10, converting to eV and Å scales as shown.

(d) Find from the literature the appropriate Morse parameters for your favorite molecule, say N2 or HCl. Calculate its vibrational energy levels, and compare with available data if possible.

(e)∗ Another empirical internuclear potential popular among chemists is the Lennard-Jones potential, Eq. (A:11.36). The parameters V0 and r0 play the same roles as in the Morse potential. Calculate the energy levels using the same parameters for H2. Compare and discuss the results between the two potentials. In what part of the spectrum is the difference most prominent? Why?

P9.4 Consider the 1D hydrogen atom with the Coulomb potential

V(x) = −Z/|x|.

The Schrödinger equation is the same as the radial equation of the 3D hydrogen atom for l = 0, Eq. (A:9.39). Thus, the eigenenergies and wave functions of the ns states, shown in Figure A:9.11 and Figure A:9.12, respectively, are also solutions to the 1D hydrogen atom, in a given half space. However, the potential spans the whole space. The 1s wave function is zero at the origin. Taking it to be odd over the whole space, we have a ground state with one node, not zero as we normally expect. Can this be true?

(a) Investigate the energy levels of the 1D hydrogen atom with the shooting method, Program A:9.2. You immediately notice that there is a problem: the potential is singular at the origin, in the middle of the domain. To overcome the problem, use a modified form

V(x) = −Z/(|x| + ε),

so the potential remains finite at x = 0. As ε → 0, it should become a good approximation to the actual Coulomb potential.



Calculate the energies and plot the wave functions for a series of ε, e.g., 0.1, 0.01, 0.001, etc. Set Z = 1. Depending on ε, you may need to start the search from a lower energy than the default in Program A:9.2. In each case, you should find a ground state with zero nodes. Adjust the range and increase the number of grid points as necessary to make the wave function smooth near the origin. What is the trend of the eigenenergies, particularly the ground-state energy, as ε decreases? What about the wave functions?

What is your conclusion with regard to the ground state as ε → 0?

(b) Re-investigate the problem using the FEM Program A:9.3. Is the above conclusion upheld?

(c) If the mixed boundary condition FEM has been formulated per Project P9.7, apply it to solve the problem in the half-open space 0 ≤ x ≤ R. Use the boundary conditions u′(0) = 0 and u(R) = 0 to study even states, and u′(0) = 1 and u(R) = 0 to study odd states. Compare your findings in all three parts.

Note that we can even set ε = 0 without causing apparent numerical problems. Try it. Have we resolved the singularity?

P9.5 Investigate the equilateral triangle quantum dot. Assume unit side length. Generate a mesh made up of equilateral triangles to preserve the symmetry. You can do so by modifying Program A:9.9, scaling the one shown in Figure A:9.15, or by a method you devise. The mesh size should be such that 20 or more nodes are on each side.

(a) Calculate the eigenenergies and compare with the values of the hexagon, Table 9.1. Make sure to scale them by the area. Pick out the states with identical energies after accounting for the area.

(b) Predict what the first several wave functions look like. Plot the first 12 wave functions. Search higher states for possible scarring in the wave functions.

(c) Analyze the NNS distribution, and compare with both the Poisson and Wigner distributions. Is the system classically chaotic?

P9.6 Figure 9.7 shows several quantum dots including the quarter stadium, an arbitrary triangle, a corner dot, and a hollow billiard.



Figure 9.7: Quantum dots of various shapes, (a)-(d).

Pick one system, or two if in a team project, and generate an appropriate mesh for the system. Plot the mesh, keeping the number of elements low for clarity. Does it preserve the symmetry, if any?

(a) Calculate the energy levels. The number of elements should be large enough that at least 1000 states are included. Predict whether the system is chaotic. Determine the number of trustable states, and plot the NNS distributions. Discuss your results.

(b)∗ For the system picked, simulate the classical ballistic motion as described in Section 5.1 (use the code from Project P5.4 if available). Plot the trajectories and Poincaré maps similar to Figure 5.4. Are the quantal (NNS) and classical (Poincaré map) results consistent?

P9.7 We have used FEM to solve the Schrödinger equation with Dirichlet boundary conditions. But with a slight change, it can work with Neumann boundary conditions, as well as mixed boundary conditions. This can be useful in certain situations, e.g., at hard-wall boundaries where the wave function vanishes, but not necessarily its derivatives.

(a) Formulate the 1D FEM, assuming Neumann boundary conditions, u′(a) and u′(b). Starting from Eq. (A:9.13), derive the eigenvalue equation analogous to (A:9.17), taking into account the values of qk at the boundaries, which act as constraints. Test your formulation for a particle in a box with u′(a) = u′(b) = 0. What subset of states is represented by this boundary condition?

(b) Modify your formulation to allow mixed boundary conditions, e.g., u′(a) and u(b). Test your results with u′(a) = 1, u(b) = 0.

Apply your method to the 1D hydrogen atom, Project P9.4.

(c)∗ Starting from Eq. (A:9.42), derive the 2D FEM formulation that allows mixed Dirichlet and Neumann boundary conditions, u(xk, yk) and ∇u(x′k, y′k), where (xk, yk) and (x′k, y′k) are on the boundary. Validate your formulation on the 2D infinite potential well (width a, height b) with suitable boundary conditions. For example, u(0, y) = u(a, y) = 0, and ∇u(x, 0) = ∇u(x, b) = 0. Discuss the results against analytical solutions.

9.A Program listings and descriptions

Program listing 9.1: Nearest neighbor spacing (nns.py)

  1 import numpy as np, pickle
    import matplotlib.pyplot as plt
  3
    def weyl(E):
  5     return (2*area*E - peri*np.sqrt(2.*E))/(4*np.pi)

  7 pi, r, d = np.pi, 1.0, 1.0                 # billiard param.
    area = pi*r*r + 2*r*d                      # area, perimeter
  9 peri = 2*pi*r + 2*d

 11 file = open('eigendata.dat', 'rb')         # eigendata file
    E, u = pickle.load(file)                   # read pickled eigendata
 13 file.close()

 15 NE = weyl(E)                               # unfold E
    NNS = NE[1:] - NE[:-1]                     # nearest neighbor spacing
 17
    M, N = 150, 15                             # M = cutoff state, N = bins
 19 s = np.linspace(0., 4., 40)
    poisson = np.exp(-s)
 21 wigner = (pi*s/2)*np.exp(-pi*s*s/4.)

 23 plt.figure()
    plt.hist(NNS[:M], N, color=(.8,.8,.8), density=True)   # histogram
 25 plt.plot(s, poisson, '-', label='Poisson')
    plt.plot(s, wigner, '--', label='Wigner')
 27 lg = plt.legend()
    lg.draw_frame(False), plt.xlim(0, 4.)
 29 plt.xlabel('s', size=20), plt.ylabel('P(s)', size=20)
    plt.show()

The program generates nearest-neighbor spacing (NNS) histograms from energy levels previously calculated and stored in a file by Program A:9.7. The variables area and peri should be modified appropriately for quantum dots other than the stadium billiard.

The key part of the code is the two lines (15 and 16) unfolding the spectrum and calculating the NNS. Once they are obtained, they are plotted as a normalized histogram (line 24). Note that the legend frame is turned off.


Chapter 10

Simple random problems

We describe a few additional simple random systems such as the game of life and a hybrid traffic flow model. As an illustration of how complexity arises from simplicity, we discuss ants' raiding patterns, which exhibit complex phenomena while operating under simple, basic rules.

10.1 Game of life

We discuss an interesting system called the game of life. It is a representative example of cellular automata, where each cell has a finite number of states and interacts with other cells via simple rules. They can be a useful model in physics and related fields to study how complexity arises from simplicity.

The game of life is a mathematical game simulating the life cycle of cells obeying a simple set of rules [12]. The cells, conceptually laid over a board with square grids (Figure 10.1), are in one of two states, dead or alive. Given an initial configuration, the fate of each cell in the next generation (iteration) is determined by its interaction with the surrounding neighbors.

Except on the boundary, each cell in the game of life has eight surrounding neighbors, including the diagonal ones. The cells advance to the next generation simultaneously.

Figure 10.1: The game of life after 3 iterations from three initial configurations (left-most columns).

The state of a cell in the next generation is determined according to the following basic rules.

• A living cell stays alive if it is surrounded by two or three living cells (ideal).

• A living cell dies if the number of living neighbors is one or less (isolation), or four or more (overcrowding).

• A dead cell becomes alive if it is surrounded by exactly three living cells (revival).
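These rules translate almost directly into code. A minimal sketch of one generation (pure Python, cells beyond the boundary treated as dead; the function name advance is ours, not Program 10.1's):

```python
def advance(grid):
    """One generation of the game of life; cells are 0 (dead) or 1 (alive)."""
    M, N = len(grid), len(grid[0])
    new = [[0]*N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            # count living neighbors (up to 8, fewer on the boundary)
            live = sum(grid[x][y]
                       for x in range(max(0, i-1), min(M, i+2))
                       for y in range(max(0, j-1), min(N, j+2))) - grid[i][j]
            # survive with 2 or 3 neighbors; revive with exactly 3
            new[i][j] = 1 if live == 3 or (grid[i][j] and live == 2) else 0
    return new

# a vertical 3-cell bar oscillates into a horizontal bar (period two)
g = [[0]*5 for _ in range(5)]
g[1][2] = g[2][2] = g[3][2] = 1
print(advance(g)[2])   # middle row becomes [0, 1, 1, 1, 0]
```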

Figure 10.1 illustrates three select cases from Program 10.1. Each is on a 5 × 5 grid propagated over four generations. Filled and empty cells represent living and dead cells, respectively. The first case (top) dies out by the fourth generation. This is a common occurrence. The second case (middle) survives as a stable 4-cell block (period one). The third case (bottom) also survives, but oscillates between a horizontal and a vertical bar (period two).



Figure 10.2: The game of life after 11 iterations from a single initial configuration (upper left). The group survives as a period-two pattern.

There are multiple patterns for a given period. The period-one square pattern (Figure 10.1, middle row), though simplest and most frequent, is not unique. Neither is period two, as illustrated in Figure 10.2, which shows another configuration that survives as a stable, period-two pattern.

For larger grids, more complex patterns and higher periods can be observed. Many of the surviving patterns consist of basic stable units such as those shown in Figure 10.1. Some configurations take a long time to settle into stable patterns, if at all. In fact, the destinies of certain configurations are not known even after a great number of iterations. Clusters of living cells form, grow, or dissociate between iterations.

10.2 Traffic flow

Traffic flow is another example where individual actions can cause collective or critical effects and even chaotic dynamics [23]. Traffic may be studied microscopically using the behavior of individual vehicles, macroscopically focusing on average or statistical parameters, or with a hybrid model combining both. We briefly discuss the latter with a normal speed distribution.



We assume a number of vehicles moving on a stretch of highway which has a speed limit vmax. Let φ be the traffic flow (flux), defined as the number of vehicles passing a given point per unit time. The flow depends on the average speed v and the density λ measuring the number of vehicles per unit length,

φ = vλ. (10.1)

The ideal situation is to increase the traffic flow as much as possible within given constraints such as safety.

The general relationship between v and λ varies. But empirical observation tells us that the average speed decreases with increasing density. To first order, we can approximate the dependence as linear,

v = vmax (1 − λ/λmax),   (10.2)

where λmax is the maximum density. This parameter depends on the average vehicle length and stopping distance, etc.

Combining Eqs. (10.1) and (10.2), we obtain

φ = λmax v (1 − v/vmax).   (10.3)

This is a parabola (line in Figure 10.3, left), and is known as the Greenshields fundamental traffic diagram. The diagram will change if we assume a different relationship than Eq. (10.2).
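As a quick check of Eq. (10.3), the flow is maximal at v = vmax/2, where φ = λmax vmax/4. A small sketch in normalized units:

```python
# Greenshields flow, Eq. (10.3), in normalized units (vmax = lam_max = 1)
def flow(v, vmax=1.0, lam_max=1.0):
    return lam_max*v*(1.0 - v/vmax)

vs = [i/100 for i in range(101)]          # scan average speeds 0..1
phis = [flow(v) for v in vs]
best = max(range(101), key=lambda i: phis[i])
print(vs[best], phis[best])               # peak at v = 0.5, flow = 0.25
```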

Even given Eq. (10.2), the actual speed will have a distribution. We assume the speed probability to be a Gaussian (normal) distribution,

P(v) = [1/(σ√(2π))] exp[−(v − v̄)²/(2σ²)].   (10.4)

The σ parameter is the standard deviation, a measure of how narrow the distribution is.

The Gaussian distribution is a standard probability function, and is given as gauss(mu, s) in the Python random library. However, as discussed earlier in Section A:10.4, we have frequent need for non-uniform random distributions. Sometimes the ones we need are not standard at all, and we must program them ourselves. For simple ones, we can use analytic inversion to generate the desired distribution from a uniform distribution. For others, the rejection method will always work (see Section A:10.B).
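A sketch of the rejection method applied to the Gaussian (10.4), truncated to the allowed speed range [0, vmax]; the function name and parameter values are our own:

```python
import math, random

def gauss_reject(vbar, sigma, lo, hi, rng=random):
    """Sample the Gaussian (10.4) truncated to [lo, hi] by rejection:
    draw v uniformly, accept with probability P(v)/P(vbar)."""
    while True:
        v = rng.uniform(lo, hi)
        if rng.random() < math.exp(-(v - vbar)**2/(2.0*sigma**2)):
            return v

random.seed(7)
sample = [gauss_reject(0.5, 0.1, 0.0, 1.0) for _ in range(4000)]
mean = sum(sample)/len(sample)
print(round(mean, 2))   # close to vbar = 0.5
```

The acceptance ratio uses P(v)/P(v̄), which is at most one, so no separate bounding constant is needed for this distribution.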



Figure 10.3: Traffic flow as a function of average speed (left, with free-flow and congested-flow branches) and the speed-density relationship (right), all normalized to one.

We have enough information to simulate simple traffic flows. To simplify matters, we set vmax = 1 and work with dimensionless parameters. Distribute n vehicles over a unit length randomly, and assign a speed to each vehicle according to the Gaussian distribution (10.4). Let the system run for N time steps, and check how many vehicles have passed a checkpoint to calculate the average flow (see Project P10.2).
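This recipe can be sketched compactly as follows (collisions are ignored for brevity, all names and parameter values are our own, and the measured flow should come out near λv̄):

```python
import random

random.seed(1)
n, lam_max, vmax, sigma = 40, 100.0, 1.0, 0.1
vbar = vmax*(1.0 - n/lam_max)      # Eq. (10.2): average speed at this density
pos = [random.random() for _ in range(n)]   # vehicles on a unit ring

h, steps, passed = 0.01, 200, 0
for _ in range(steps):
    for i in range(n):
        v = min(vmax, max(0.0, random.gauss(vbar, sigma)))  # Eq. (10.4)
        pos[i] += v*h
        if pos[i] >= 1.0:          # crossed the checkpoint at x = 1
            passed += 1
            pos[i] -= 1.0          # wrap around (periodic road)

flow = passed/(steps*h)            # vehicles per unit time
print(round(flow, 1))              # roughly lam*vbar = 40*0.6 = 24
```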

The results are shown in Figure 10.3. The standard deviation is assumed to be σ = vmax/10. We see that the normalized flow follows the theoretical curve as expected. There are fluctuations, which will grow if σ is increased. To the right side of the maximum flow peak, the traffic flows freely. Increasing the density (toward smaller speed, Figure 10.3, right) will increase the flow. To the left side of the peak, the traffic flow is congested because the density is high. On this side the flow is said to be unstable. The action of a single vehicle suddenly slowing down, for example, can cause a domino effect leading to traffic jams.

The speed-density relation (Figure 10.3, right) is mostly linear except at higher densities. This is a result of vehicles adjusting speeds to avoid collisions, which are more frequent at higher densities. This also causes the flow diagram to be slightly asymmetric at the opposite ends.



10.3 Ants raiding patterns

We saw in the previous examples and in Section A:10.4 that we could break down a complex integral into many simple operations, each just a function evaluation. Many complex phenomena arise from simple fundamental interactions in nature. A representative example of complexity arising from simplicity is an ant colony. Ants may seem like simple animals, but they exhibit surprisingly complex social behavior. They may run around randomly, sometimes travel along trails moving food or larvae like a well-organized army, and occasionally form large swarms, congregating for reasons not obvious to us.

Ants interact with each other via simple chemical signals known as pheromones [9]. We are interested in how these simple interactions at the fundamental level lead to complex collective behavior [34]. For example, how do ants searching for food form trails? How efficient are they?

Figure 10.4: The three possible moves of a foraging ant, from (x, y) to (x+1, y+1), (x+1, y), or (x+1, y−1).

To help us understand the essential features, we construct a very crude model to simulate the raid patterns of foraging ants. We assume a two-dimensional grid which contains a nest and a food source. Ants move away from the nest searching for food. Once they find food, they return to the nest, leaving a trail of pheromone along the way. We assume their movement occurs over discrete iterations. An ant's movement in the next iteration is determined by the following model.



Figure 10.5: Raid patterns of foraging ants over time (t = 0, 100, and 300). The searching and returning (with food) ants are denoted by right- and left-pointing markers, respectively.

Let the current position of an ant be (x, y). For a searching ant (searcher, Figure 10.4), its next move, denoted by (x′, y′), will be one step toward the food source on the right, but it can move up, down, or stay at the same y. To build a model, we propose the move probabilities as

P(x′, y′) = [P0(y′)/PT](1 + qφ(x′, y′)),   x′ = x + 1,   y′ = y, y ± 1,   (10.5)

PT = Σy′ P0(y′)(1 + qφ(x′, y′)),

where P0(y′) is a base probability, and q is a multiplier to the pheromone level φ(i, j) at grid point (i, j). The factor q measures the effectiveness of the pheromone. For returning ants carrying food (carriers), x′ = x + 1 is replaced by x′ = x − 1 in Eq. (10.5).
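Choosing a move from Eq. (10.5) amounts to a weighted random choice, with the normalization by PT implicit in the weights. A sketch (the function name and the dictionary-based pheromone store are our own):

```python
import random

def next_move(x, y, phero, P0=1.0, q=10.0, rng=random):
    """Pick a searcher's next step per Eq. (10.5): one step right,
    weighted by the pheromone level at each candidate grid point."""
    moves = [(x+1, y+1), (x+1, y), (x+1, y-1)]
    weights = [P0*(1.0 + q*phero.get(m, 0.0)) for m in moves]
    return rng.choices(moves, weights=weights)[0]   # division by PT is implicit

random.seed(3)
phero = {(6, 4): 20.0}      # a strong trail toward (6, 4)
picks = [next_move(5, 5, phero) for _ in range(1000)]
print(picks.count((6, 4))/1000)   # fraction near 0.99
```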

Once a move is made, we examine the outcome. If the position (x′, y′) is out of bounds, the ant is removed from the grid. If a carrier reaches the nest, it is also removed. If a searcher has found the food site, its state is changed to a carrier. A carrier deposits a unit of the pheromone at the new position. The pheromone level at a given grid point accumulates from carrier visits, but it is limited to a maximum value φmax. It also evaporates in each iteration at some rate.

Let the grid size be M × N. Create a 2D array φ[i, j] for storing the pheromone level at each site. The ants are tracked in an n × 3 list, a, specifying the state (searcher or carrier) and positions (x, y) of the n ants. The search algorithm can be outlined as follows.

• Initialize M and N, place the nest and food sites on the grid, and initialize φ[i, j] = 0 over the grid. Set the parameters P0, q, φmax, and nmax, the maximum number of ants on the grid in each iteration. Distribute nmax searcher ants randomly on the grid and store them in the list a.

• Determine the next position of each ant according to the probability distribution (10.5). Change the state of searchers to carriers if they reach the food site. Remove from a the ants that moved off the grid or returned home.

• Adjust the pheromone level. Add a unit of pheromone for each carrier ant at its new position to the existing value φ[i, j], up to the maximum allowed value.

Decrease the pheromone value for all sites by a given amount in each iteration. Keep the rate small initially to speed up the formation of trails.

Page 153: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566


• Replenish the population for the lost ants, up to nmax total ants. Place them at the nest site, all being searchers. Repeat from the second step.

The simulation results are shown in Figure 10.5 (see Project P10.3). The grid size is 60 × 30, and the nest and food sites are at (1, 5) and (58, 25), respectively. There were 200 ants on the grid at the start of each iteration. They were placed randomly initially. Upon finding food, carriers start depositing pheromones on the return trip. Searchers follow the trails, find the food, and reinforce the trails. Patterns quickly form, and after 100 iterations (t = 100), we see two main trails. The lower trail wins out, and the pattern stabilizes by t = 300. We see occasional stragglers, but the overall path is remarkably narrow and well-defined.

We can think of this behavior as guided random walks. Simple, individual decisions give rise to a collective behavior benefiting the colony. Though our simple model is rather crude, it yields an efficient foraging pattern because it contains the essential communication mechanism. It is interesting that the simulation settled on the lower route instead of the upper one at t = 100, both of which seem equally efficient in length. This may be due to the particular initial configuration, or we may not have waited long enough. In general, we need to run Monte Carlo simulations from multiple configurations to build a statistically reliable profile (see Chapter A:11).

10.4 Exercises and Projects

Exercises

No additional exercises.

Projects

P10.1 (a) Modify Program 10.1 to animate the cell grid with VPython. See Program A:7.2 for an example of grid representation.

(b) Optimize Program 10.1, in particular the chkcell() function. It is wasteful to check each cell up to 8 times in each iteration. A more efficient approach is to set up another 2D array containing the number of living neighboring cells at each grid point. This array can be incrementally updated during each iteration when necessary. When a cell changes state, update the 8 neighbor counters by subtracting 1 if a living cell dies, or adding 1 if a dead cell revives.

Test and profile your code (see Section 8.B).
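The bookkeeping described in part (b) might look like the following sketch (function names ours): the count array is built once, then updated incrementally as cells change state.

```python
def build_counts(grid):
    """Initial array of living-neighbor counts for each cell."""
    M, N = len(grid), len(grid[0])
    counts = [[0]*N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if grid[i][j]:
                bump(counts, i, j, +1)
    return counts

def bump(counts, i, j, d):
    """Add d to the counters of the (up to 8) neighbors of (i, j);
    call with d = +1 on revival, d = -1 on death."""
    M, N = len(counts), len(counts[0])
    for x in range(max(0, i-1), min(M, i+2)):
        for y in range(max(0, j-1), min(N, j+2)):
            if (x, y) != (i, j):
                counts[x][y] += d

g = [[0]*5 for _ in range(5)]
g[1][2] = g[2][2] = g[3][2] = 1       # vertical 3-cell bar
c = build_counts(g)
print(c[2][2], c[2][1])               # 2 and 3 living neighbors
```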

Figure 10.6: The game of life with a different rule.

(c) Simulate the game of life with a slightly different rule. For instance, allow a dead cell to become alive when surrounded by four living cells instead of three. Observe and list the differences in the patterns relative to the standard rule. Figure 10.6 shows a period-two pattern that would be period one under the standard rule.

P10.2 Study traffic flow using the following hybrid model. Assume a unit length of roadway and a unit speed limit vmax. Let L be the average length of the vehicles including stopping distance, so the maximum density is λmax = nmax = 1/L.

(a) Write a program to simulate the flow for different densities. Place n vehicles over the road randomly, making sure no vehicles overlap, i.e., the distance between two adjacent vehicles must be at least L. This may be done by dividing the road into grids of width L, and placing the vehicles randomly on the grid (the shuffle function in the random library works well). Set λ = n, and initialize the speed of each vehicle according to the Gaussian distribution with a standard deviation σ = vmax/10, and the mean v̄ from Eq. (10.2).

Choose a step size h ∼ 10⁻² to 10⁻³, and N ∼ 10² for the number of iterations; the optimal values depend on h and n, so experiment a little. During each iteration, move the vehicles by vih, where vi is the speed of vehicle i. Check for possible collisions between adjacent vehicles. A collision occurs if either the vehicle behind has overtaken the vehicle ahead, or the separation between them is less than L. If this does occur, restore the vehicle's position, freeze it, and set its speed to zero in this round.

After each iteration, calculate and record the average speed, count the number of vehicles, M, that have passed the end of the road, and place these vehicles at the back of the queue (wrap around), as if we were using periodic boundary conditions. Reset the speeds to the Gaussian distribution, and repeat.

After N iterations, calculate the mean speed 〈v〉n and mean flow 〈φ〉n. The latter can be done in two ways: by λ〈v〉, and by M/T, where M is the total number of vehicles past the end of the road, and T the total time.

Calculate 〈v〉n and 〈φ〉n for n = 1 to nmax. Plot the results similar to Figure 10.3. Are there differences between the two ways of calculating the flow?

(b) After the simulation runs correctly, add vehicle animation using VPython. Experiment with different speed distributions. For instance, at the beginning of each iteration, set the speed of individual vehicles proportional to the distance to the vehicle ahead, up to the speed limit. Also try different parameters and methods, such as the collision handling, etc.

P10.3 Investigate the formation of ants raiding patterns.

(a) Implement the model outlined in Section 10.3. Initialize the grid size to M = 60 and N = 30. Place the nest at (1, 5) and the food source at (59, 25). Allow nmax = 200 active ants on the grid at the start of each iteration. Create an nmax×3 list a that stores the states and coordinates of the ants, such that a[i, 0] contains the state (0 for searchers and 1 for carriers), and a[i, 1] and a[i, 2] hold the position x and y, respectively. Initially, set the ants to be searchers and place them randomly on the grid. This reduces the time required to form the raiding patterns.

Create an M × N array φ (consider using a Python nested list for speed) to hold the pheromone level on the grid, initialized to zero at the start. Set the base probability (10.5) to P0 = 1 independent of direction, the maximum pheromone level φmax = 20, and the pheromone multiplier q = 10. Maintain the maximum level φmax at the nest and food sites so ants in their neighborhoods have a high probability of entering them.

Write a modular updater function that carries out several subtasks, preferably in separate modules, in each iteration. It should obtain the probabilities (10.5) of moving to the three grids ahead (Figure 10.4). For searchers, use the same base probability P0 for all three grids. Due to the crudeness of our model, it can take a long time to establish stable trails unless we “nudge” the carrier ants to the nest (homing ability). To do so, increase the base probability of a carrier in the y direction toward the nest by some factor (e.g., 3). If the carrier is below the nest, the probability to the upper-left grid will be enhanced by that factor. If it is above the nest, the probability to the lower-left grid should be multiplied by the same factor.

Once the probabilities are calculated, pick a move based on these values, move the ant, and determine the status of the ant including its state (searcher to carrier) and whether it should be removed in the next iteration (out of bounds or into the nest). If so, append it to the removal list. A food carrier also deposits one unit of pheromone unless it is on the removal list or the grid point is already at φmax. Make sure to modify a copy of the current pheromone array φ (use deepcopy if it is a nested list) so changes will not affect the probabilities in the current iteration.
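Picking a move from the three (unnormalized) probabilities can be done with a cumulative-sum draw, for example (a sketch with our own function name):

```python
import random as rnd

def pick_move(probs):                  # weighted choice among forward grids
    r = rnd.random()*sum(probs)        # probs need not be normalized
    cum = 0.
    for k, p in enumerate(probs):
        cum += p
        if r < cum:
            return k
    return len(probs) - 1              # guard against roundoff
```

Over many draws, grid k is chosen with frequency probs[k]/sum(probs).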

At the end of an iteration, delete the ants in the removal list from the master list a; e.g., the following will delete element n from a list,

del a[n]
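One caution: when several ants are removed in the same iteration, delete from the largest index down, so that earlier deletions do not shift the positions of the remaining entries. With a toy list (not the simulation’s data):

```python
a = [[0, 1, 5], [1, 10, 12], [0, 30, 20], [1, 59, 25]]   # toy master list
removal = [0, 2]                       # indices flagged for removal
for n in sorted(removal, reverse=True):
    del a[n]                           # largest index first
```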

Dispatch new ants out of the nest equal to the number of the removed ants. Finally, reduce the pheromone level on the grid due to evaporation, but only after some initial iterations (say 50) and at a moderate rate (say 1/5 unit per iteration).

Animate the ants using VPython, with which we are very familiar, or using Matplotlib animation (see Program A:10.2 or A:11.4). We made Figure 10.5 with the latter.

(b) Calculate the efficiency factor, e, defined as the ratio of the average number of carriers entering the nest to the number of active ants on the grid. Compute this ratio as a running average over a fixed number of iterations (say the last 10).

Optionally, for a given initial condition, record the number of iterations, Ns, required to reach a stable pattern. We consider a pattern stable if the efficiency factor e stabilizes to within some range, e.g., 1% to 5%. Repeat the simulation for different initial conditions to obtain an average Ns.

Explore the effects of the parameters on e and Ns. For instance, vary the multiplier q in the range 4 to 10, and plot the average e and Ns. What is the critical qc below which no stable patterns are observed?

(c) Modify aspects of the model according to your own hypothesis. For example, what if both searchers and carriers deposit pheromone, but with different amounts and maximum levels? Discuss the results.

10.A Program listings and descriptions

Program listing 10.1: Game of life (life.py)

import random as rnd
import matplotlib.pyplot as plt

def chkcell(cell, i, j):                # count living neighbor cells
    alive = 0
    if (i > 0):   alive += cell[i-1][j]         # left
    if (i < N-1): alive += cell[i+1][j]         # right
    if (j > 0):
        alive += cell[i][j-1]                   # below
        if (i > 0):   alive += cell[i-1][j-1]   # below left
        if (i < N-1): alive += cell[i+1][j-1]   # below right
    if (j < N-1):
        alive += cell[i][j+1]                   # above
        if (i > 0):   alive += cell[i-1][j+1]   # above left
        if (i < N-1): alive += cell[i+1][j+1]   # above right
    return alive

def update(cell):                       # advance to next generation
    newcell = [[0]*N for i in range(N)]
    for i in range(N):
        for j in range(N):
            alive = chkcell(cell, i, j)
            if (cell[i][j] == 1):
                if (alive >= 2 and alive <= 3):     # stay alive
                    newcell[i][j] = 1
            elif (alive == 3):                      # become alive
                newcell[i][j] = 1
    return newcell

N = 5                                   # N = grid size
cell = [[0]*N for i in range(N)]        # initialize
for i in range(N):
    for j in range(N):
        x = rnd.random()                # random number
        if (x < 0.5): cell[i][j] = 1

fig = plt.figure()
for i in range(12):
    ax = fig.add_subplot(3, 4, i+1)
    ax.grid(True, linestyle='-', color='r', linewidth=2)
    plt.imshow(cell, cmap=plt.cm.binary, origin='lower',
               interpolation='none', extent=(0, N, 0, N))
    plt.xticks(range(N+1), ['']*(N+1))
    plt.yticks(range(N+1), ['']*(N+1))
    cell = update(cell)
plt.show()

The program simulates the game of life. The module chkcell() checks the neighbor cells of cell[i][j], and returns the number of living cells surrounding it. The conditional statements ensure proper boundary conditions. The update() function calls chkcell() and applies the rules of the game (Section 10.1), returning a new grid for the next generation of cells.

The main program initializes the cell grid randomly. If the random number x < 1/2, the cell is set to be alive, so on average half of the cells are alive. The next loop plots the cells on a grid using imshow(), with the grid turned on and axis labels off. The parameter extent fills the grid up with pixels rather than centering them on the grid points. The cell grid is updated in each iteration.


The program takes a direct approach, making no attempt at optimization. For larger grids, it may be worthwhile to optimize chkcell() (see Project P10.1).
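One possible optimization (a sketch, not the book’s code) is to vectorize the neighbor count with NumPy by summing eight shifted copies of the grid. Note that np.roll wraps around, so this version imposes periodic rather than fixed boundaries:

```python
import numpy as np

def update_np(cell):                   # one generation, vectorized
    c = np.asarray(cell)
    alive = np.zeros_like(c)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:               # skip the cell itself
                alive += np.roll(np.roll(c, di, axis=0), dj, axis=1)
    born = (c == 0) & (alive == 3)
    stay = (c == 1) & ((alive == 2) | (alive == 3))
    return np.where(born | stay, 1, 0)
```

For cells away from the edges the result is identical to update().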


Chapter 11

Thermal systems

We study non-thermal systems such as the free-hanging chain and the traveling salesman problem via simulated annealing. We also discuss Monte Carlo simulations of particle transport in 1D and 2D using realistic scattering cross sections to investigate range distribution and energy deposition. We explore the behavior of an ideal boson gas and Bose-Einstein condensation. The mean-field approximation of the 2D Ising model is discussed in Section 11.A. Finally, we use the F2Py language extension to optimize molecular dynamics simulations.

11.1 Thermal relaxation of a suspended chain

We first discussed the problem of a free hanging chain in Section 6.1. There, we applied the time-independent (self-consistent FDM/FEM) and time-dependent (numerical integration of the equations of motion) methods to the problem. Below, we discuss temperature-dependent solutions by simulated annealing.

Consider the schematic representation of the chain in Figure 11.1. If we model the chain as having a fixed length (unstretchable), the problem becomes finding the optimal shape with the lowest energy. We imagine the chain consisting of N segments (links). We can deform the chain in such a way that the N segment vectors add up to a constant net vector in the relaxation process from the initial state to the final state.

Figure 11.1: Schematic relaxation of a free hanging chain (initial and final states sharing the same net vector).

To apply the Metropolis method, we propose to move a randomly selected link by a certain displacement and calculate the associated change in potential energy. If the move is accepted, and because the links are assumed to be unstretchable, we would have to move other links to satisfy the constant net vector condition. This would cause the energy to change, which would force a re-sampling, etc., in an endless loop.

So we give up the idea of an unstretchable chain. We wish to keep the change simple and local as in the Ising model. Therefore, we use the same model as in the time-dependent approach, in which we treat the chain as an array of N particles (mass m) linked by springs (Figure A:6.22). Particles interact with their nearest neighbors as in the Ising model, but through Hooke’s law.

Let rᵢ be the position vectors of the particles, i = 1, 2, ..., N, and let dᵢ = |rᵢ − rᵢ₊₁| be the distance between neighboring particles i and i + 1. A microstate of the system is the set of position vectors rᵢ, having a well-defined energy including both elastic and gravitational potential energies.

Consider the thermal displacement of a randomly-selected particle i, Δr = (Δx, Δy). The new position vector is r = rᵢ + Δr. The change of energy for this trial move is the sum of the changes of the elastic and gravitational potential energies,

ΔE = (k/2)[(d₋ − l)² + (d₊ − l)² − (dᵢ₋₁ − l)² − (dᵢ − l)²] + mgΔy,   (11.1)

where d∓ = |r − r(i∓1)| are the new distances to the left and right neighbors, respectively, k is the spring constant (not the Boltzmann constant), and l the relaxed spring length.

We only need to specify a method for choosing the random displacement Δr to apply the Metropolis algorithm. The displacements should be variable at a given temperature, and ideally the average magnitude should decrease with lower temperatures. The Gaussian distribution (10.4) fits our need well. It describes the random walk distribution (Figure A:10.3) in the limit of many steps, and the average magnitude can be controlled by the width.

The following code snippet shows the actual algorithm.

def update(N, x, y, d, kT, E):
    i, move = rnd.randint(1, N-2), 0
    xt, yt = x[i] + rnd.gauss(0., sd), y[i] + rnd.gauss(0., sd)
    dl = np.sqrt((xt - x[i-1])**2 + (yt - y[i-1])**2)   # left/right dist
    dr = np.sqrt((xt - x[i+1])**2 + (yt - y[i+1])**2)
    d2 = (dl-L)**2 + (dr-L)**2 - (d[i-1]-L)**2 - (d[i]-L)**2
    dE = 0.5*spring_k*d2 + mass*g*(yt - y[i])           # pot energies
    if (dE < 0.0): move = 1                             # Metropolis algorithm
    elif (rnd.random() < np.exp(-dE/kT)): move = 1      # thermal acceptance
    if (move):                    # accept: update position, distances, energy
        x[i], y[i] = xt, yt       # (these lines reconstruct the elided part,
        d[i-1], d[i] = dl, dr     #  following Program A:11.6)
        E = E + dE
    return E

We randomly select a particle i for a trial move, excluding the two particles fixed at the ends of the chain. The x and y displacements are obtained from the Gaussian distribution, the Python function gauss(0, sd), with zero mean and standard deviation sd. They are added to the current position to give the trial position (xt, yt). Next, the distances to the left and right neighbors are calculated, which are used to obtain the change of energy. What follows is exactly the same as in the code for the Ising model (Program A:11.6), and the proposed move is accepted or rejected based on the Metropolis algorithm. If accepted, the position, particle distances, and energy are updated.

The full program using the above algorithm is given in Program 11.1. Figure 11.2 shows the test results for a 41-particle chain. The parameters are: particle mass m = 0.1 kg, spring constant k = 20 N/m, and relaxed length l = 1 m. The energy unit is joules, but to be consistent and general, we will call it ε as before (ε = 1 for convenience). We used a temperature-independent standard deviation for the Gaussian distribution, σ = l/4. The final solution should be independent of σ, but the rate of convergence is not.

The behavior of the energy as the system equilibrates (Figure 11.2, left) at T = 2 shows the typical behavior expected of a real thermal system.


Figure 11.2: The energy of a free hanging chain toward equilibrium at kT/ε = 2 (left), and in equilibrium as a function of temperature (right), showing the total, elastic, and gravitational energies.

The fluctuations are relatively large owing to the small size of the system, N = 41, at above-intermediate temperatures. The rate of convergence to equilibrium depends on the average magnitude of the displacement, σ. If σ is too large, fewer trial moves are accepted. If it is too small, many more moves are required to effect change. We find that a σ value at a fraction of l works well.

Once the system is in equilibrium, we can take averages such as energy before lowering the temperature and rethermalizing. In our case, the simulation starts at T = 20, and the temperature is halved between equilibria. The equilibrium energies decrease quickly initially (Figure 11.2, right) with decreasing temperature. The elastic and gravitational potential energies are comparable in magnitude in the beginning, when higher temperatures cause the interparticle distances to be compressed or stretched (Figure 11.3). But the gravitational potential energy decreases faster as the chain falls, because gravity is the driving force toward the final equilibrium. The changes are not monotonic, as overrelaxation can also occur. For instance, at T = 1.25, the gravitational energy becomes temporarily lower than the global minimum, accompanied by an increase in elastic energy (to a local maximum) and a nearly flat total energy. This corresponds to an overshoot (Figure 11.3, curve 4). For this reason we should thoroughly thermalize the system before lowering the temperature. It is important in more complex problems to reduce the chance of getting stuck in a local minimum. Below T ≤ 0.1, the energy changes very slowly.
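The annealing schedule described above can be sketched generically (a minimal illustration; the function names and the toy problem are ours, not the book’s program): thermalize at each temperature with the Metropolis rule, then halve T.

```python
import math
import random as rnd

def anneal(energy, propose, state, T=20., Tmin=5e-3, nsweep=5000):
    E = energy(state)
    while T > Tmin:
        for _ in range(nsweep):        # thermalize at this temperature
            new, dE = propose(state)
            if dE < 0. or rnd.random() < math.exp(-dE/T):
                state, E = new, E + dE
        T *= 0.5                       # halve T between equilibria
    return state, E

def toy_energy(x): return x*x          # toy problem: minimize x^2
def toy_propose(x):
    xt = x + rnd.gauss(0., 0.5)        # Gaussian trial displacement
    return xt, toy_energy(xt) - toy_energy(x)

xmin, Emin = anneal(toy_energy, toy_propose, 5.0)
```

As the temperature is lowered, the state settles near the minimum of the energy function.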


Figure 11.3: The 3D and head-on views of the relaxation of a free hanging chain at different temperatures. The temperature of the curve with label l is kT/ε = 20/2^l.

Figure 11.3 displays the different stages of the relaxation process. In the initial state, the particles are randomly positioned in such a way that the distances between neighbors are equal at l (relaxed springs), and the end particles are at the y = 0 vertical position. After thermalization begins, the chain falls downward and is stretched. Large changes can be observed at the first few stages (T = 10, 5, and 2.5 for curves 1−3). This is consistent with the large change in the equilibrium energies in Figure 11.2. Interestingly, portions of the chain overshoot their final equilibrium positions at T = 1.25 (curve 4). We had seen the same behavior (Figure 6.2) when modeling the system as an actual mechanical system. At the lowest temperature T ∼ 5 × 10⁻³, the chain is roughly at the lowest energy state and in the shape of a catenary (Figure 6.1). The final average distance between neighboring particles is about 1.6l. We note that our approach here is essentially the same as the original Metropolis problem [26], which used hard spheres instead of springs.

The traveling salesman problem

As the above example showed, simulated annealing is useful for problems that may not be thermodynamic in nature but are made as if they were. The traveling salesman problem is another example of minimization unrelated to thermodynamics. The problem is as follows. A salesman is going to visit N cities. The rule is simple: each city is visited only once, starting and ending at the same city. But the budget is tight, so the goal is to find the route with the minimum distance of travel. For large N, this problem is a complex one, with many local minima, and finding the global minimum is elusive because of the huge number of permutations even for a moderate N [30].¹

Figure 11.4: A traveling salesman problem by simulated annealing. The itinerary includes 20 cities, starting from the ⋆-city. (a): the initial random route; (b)–(d): routes with decreasing temperatures.

¹The computational cost of the traveling salesman problem is uncertain as to whether it scales like a polynomial N^n for some constant n. It is in the category known as NP problems (nondeterministic polynomial time).

Within simulated annealing, we can treat a route (order of visits) as a microstate, and the distance for that route as the energy,

E = Σᵢ [(xᵢ − xᵢ₊₁)² + (yᵢ − yᵢ₊₁)²]^1/2,   (11.2)

where (xᵢ, yᵢ) are the positions of the cities.

To sample the system with the Metropolis algorithm, we propose to swap the order of two randomly-selected cities, i and j, as

· · · , i−1, i, i+1, · · · , j−1, j, j+1, · · ·   (11.3a)

· · · , i−1, j, i+1, · · · , j−1, i, j+1, · · ·   (11.3b)

The change of distance (energy) for the trial swap is

ΔE = dᵢ₋₁,ⱼ + dⱼ,ᵢ₊₁ − dᵢ₋₁,ᵢ − dᵢ,ᵢ₊₁
   + dⱼ₋₁,ᵢ + dᵢ,ⱼ₊₁ − dⱼ₋₁,ⱼ − dⱼ,ⱼ₊₁,   (11.4)

where dᵢ,ⱼ denotes the distance between cities i and j.

The first line in Eq. (11.4) computes the difference in distance between [i−1, j, i+1] and [i−1, i, i+1] when city i is replaced by city j. The second line does the same between [j−1, i, j+1] and [j−1, j, j+1] when city j is replaced by city i. Half of the terms disappear from Eq. (11.4) if i and j are adjacent to each other.
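The bookkeeping in Eq. (11.4) can be sketched directly in code (a minimal illustration with our own function names; modular indexing supplies the wrap-around of the closed route, and the expression as written assumes i and j are not adjacent):

```python
import math

def dist(p, q):                        # distance between two cities
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dE_swap(r, i, j):                  # change of route length, Eq. (11.4)
    n = len(r)
    im, ip = (i - 1) % n, (i + 1) % n
    jm, jp = (j - 1) % n, (j + 1) % n
    old = (dist(r[im], r[i]) + dist(r[i], r[ip])
           + dist(r[jm], r[j]) + dist(r[j], r[jp]))
    new = (dist(r[im], r[j]) + dist(r[j], r[ip])
           + dist(r[jm], r[i]) + dist(r[i], r[jp]))
    return new - old
```

A direct comparison with the full route length, Eq. (11.2), before and after the swap confirms the bookkeeping.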

Figure 11.4 shows the simulation results of the traveling salesman problem for N = 20 (see Project P11.4 for implementation). The cities are randomly distributed over the unit square. The initial random route has a distance of 9.667. We set the initial temperature at T = 0.2 and let the system equilibrate. The distance decreases to 6.352 for one equilibrium route at this temperature (Figure 11.4, (b)). The thermal process is repeated while the temperature is gradually lowered. By T = 0.025 (Figure 11.4, (c)), the distance has been reduced to 4.326. This is slightly higher than the global minimum, 4.235, at the lowest temperature T ∼ 10⁻³ (Figure 11.4, (d)). No further changes occur afterwards. We observe a clear trend that the simulation tries to avoid routes that cross their own path, as that usually increases the path length.

The difference between the last two routes (Figure 11.4, (c) and (d)) is due to the order of close-packed cities in two regions, a twin-city at the lower left side and a tri-city near the upper right side. If N is large, it can take a long time, and very low temperatures, to navigate through the small changes to find the global minimum. Sometimes the system settles into a local minimum and is stuck. The temperature has to be raised to untrap the system.

11.2 Particle transport

Particle transport refers to the process of energetic particles or radiation moving through media such as solids or gases. The radiation beam may consist of charged or neutral particles (electrons, protons, neutrons) or photons (x-rays or gamma rays). Unlike molecular dynamics, where the particles interact symmetrically in a common equilibrium, the interactions between the energetic particles and the media are asymmetric, and the transport particles never reach equilibrium with the media unless they survive to the very end. Since many interactions are involved, Monte Carlo simulations are generally used in particle transport.

Figure 11.5: Particle transport in a medium. On average, a particle encounters an atomic collision once every mean free path. The scattering angle is relative to the previous direction.

As a particle traverses the medium, it interacts with many target atoms (Figure 11.5). In each interaction, the particle can be scattered elastically or inelastically, or absorbed. We will limit our discussion to charged particles going through a uniform medium, assuming no absorption and following the particles until they exit the medium or reach thermal energies within the medium. In particle transport problems, we are interested in knowing things such as the depth of penetration, energy deposition, etc. They are important to practical applications, including radiation physics, dosimetry and protection, and fusion energy research. Due to many-body interactions, the problem of particle transport is a complicated one.

In a conceptually simplifying picture, we can view the transport process as a series of random binary interactions. That is to say, the particle encounters atoms one at a time. There is no memory effect between two successive collisions. The particle travels in a straight-line path between two successive collisions (Figure 11.5). We can follow the path of one particle at a time, and build up statistically meaningful results by Monte Carlo simulations of many particles. We briefly consider three aspects of transport: frequency of collisions, energy loss, and scattering angles.

11.2.1 Mean free path and stopping power

The frequency of collisions depends on several factors such as the particle’s cross section, energy, and target density. A useful quantity is the mean free path λ, defined as the average distance between two successive collisions. It is given by

λ = 1/(nσ),   (11.5)

where n is the target atom density and σ the cross section. The cross section is proportional to the probability of collision and has the dimension of an effective area. It is different from the geometric cross section of the particle, and depends on the kind of interaction with the target (see scattering in Chapter 12).

In terms of the mean free path, we can model the distance of travel, s, between collisions as an exponential distribution (A:10.46),

Pₛ(s) = λ⁻¹ e^(−s/λ).   (11.6)

Equation (11.6) means that, after a collision at s = 0, the probability of free motion until the next collision decreases exponentially with the distance of travel.²
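Equation (11.6) is easy to sample by the inverse transform method: if u is uniform in (0, 1), then s = −λ ln u follows the exponential distribution. A minimal sketch (the variable names are ours):

```python
import math
import random as rnd

lam = 1.0                              # mean free path; sets the length scale
# inverse transform: u uniform in (0, 1) gives s = -lam*ln(u), Eq. (11.6)
s = [-lam*math.log(1. - rnd.random()) for _ in range(100000)]
mean = sum(s)/len(s)                   # sample mean approaches lambda
```

Using 1 − random() keeps the argument of the logarithm in (0, 1], avoiding log(0).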

Charged particles traversing the medium lose energy through a variety of events. At all but the lowest energies, electronic excitation and ionization of target atoms are the dominant mechanisms for energy loss.

²See similar explanations leading to Eq. (9.6) and the Poisson distribution (9.7).

Page 170: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

162 Chapter 11. Thermal systems

At high energies, the excitation and ionization cross sections (probabilities) are small because the interaction time is short. The cross sections increase with decreasing energies, reaching a maximum at intermediate energies ∼ 25−100 keV/amu (see Figure 12.5 and Figure 12.6 for actual excitation and ionization cross sections). As the particles slow down past the optimal energy for excitation and ionization, the cross sections start to decrease, for the particles cannot simultaneously supply the energy and momentum transfers required for excitation or ionization. At low energies, atomic reactions cease to be important, and inelastic nuclear collisions become the dominant mechanism in energy dissipation down to thermal energies.

It would be quite impractical to calculate atomic or nuclear reaction cross sections within the Monte Carlo simulations of particle transport. Standalone theoretical calculations can be a challenging task involving many-electron interactions, and in many cases the cross sections need to be determined from experimental measurements (see Section 12.4). It is more convenient, and useful, to introduce the concept of stopping power to characterize energy loss [31]. It is defined as the change of energy per unit distance of travel,

S(E) = −dE/dx.   (11.7)

The negative sign indicates that the particle’s energy E decreases as it travels through the medium. Effectively, the stopping power S acts like a dissipative force slowing down the particle.

The qualitative behavior of the stopping power is shown in Figure 11.6 (in arbitrary units). We use the semilog energy scale to show more clearly the dependence at low energies. The stopping power is proportional to reaction cross sections. From intermediate to high energies, electronic excitation and ionization cross sections dominate, as stated earlier (see Section 12.4 for a discussion of atomic reactions). The stopping power peaks at intermediate energies. On the higher energy side, atomic collision theory predicts that S scales asymptotically as (see Eq. (12.54))

S(E ≫ 1) → (1/E) ln E.   (11.8)

Toward the lower energies, electronic contributions decrease and energy loss in nuclear collisions becomes important. The latter levels out at the lowest energies (around thermal energies). The dependence of S on energy from low to intermediate regimes is not exactly known, in part because it involves both atomic and nuclear contributions. It is very difficult to accurately calculate atomic cross sections at low energies.

Figure 11.6: Qualitative dependence of stopping power on energy, showing the nuclear, electronic, and total contributions.

For our purpose, we will use a simple empirical formula in our simulations,

S(E) = f [1 + a ln(1 + bE)]/(1 + bE).   (11.9)

The parameter f is a scale factor that depends on properties such as the interaction strength and the target density. The other two parameters, a and b, control the height and position, respectively, of the peak in Figure 11.6. The value of b therefore defines the scale of energy. Equation (11.9) correctly reproduces the asymptotic limits at E ≪ 1 and E ≫ 1.

Accompanying the energy loss is the emission of secondary particles, electrons and photons, produced in excitation and ionization reactions. The secondary emissions could cause a cascading effect of post-secondary emissions, etc., and the domino effects can become difficult to manage. We will ignore secondary emissions, and assume the energy is deposited locally at the point of collision. This approximation does not affect the basic behaviors of particle transport. In full-scale simulations, it is often necessary to track the more energetic secondary cascading emissions.


11.2.2 Particle transport in 1D

For high energy particle transport, the trajectories are mostly straight lines until the very last moments. Therefore, the transport is one-dimensional to a good degree of approximation. The Monte Carlo algorithm for simulating one-dimensional transport is now straightforward. For each particle, we sample the distance of travel, s, between collisions according to Eq. (11.6), and calculate the energy loss as −ΔE = S(E)s. We record the position of the particle, adjust its energy E → E + ΔE, and repeat the process until the energy becomes zero. The position of the particle is the range. The actual code is given below (from Program 11.2).

def depth(E, x=0.):                    # depth at E, default x=0
    while E > 0.:
        dx = -np.log(rnd.random())
        E = E - sp(E)*dx
        if (E > 0.): x += dx
    return x

The above module computes the range of a single particle. It draws the distance of travel from an exponential distribution with a mean free path λ = 1, which defines the length scale. The energy is changed according to the stopping power (11.9) returned by sp(E) and the displacement. The position is updated if the energy remains positive. When the particle has lost all its energy, the transport process is terminated, and its last position is the range. In the Monte Carlo simulation (Program 11.2), depth() is called repeatedly for N particles, and a profile of the range distribution can be established.
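As a quick self-contained check of depth(), the sketch below re-implements sp() with the parameters quoted below (f = 1/2, a = 10, b = 2) and runs a modest Monte Carlo; the sample mean should fall near the peak of the range distribution:

```python
import math
import random as rnd

def sp(E, f=0.5, a=10., b=2.):         # empirical stopping power, Eq. (11.9)
    return f*(1. + a*math.log(1. + b*E))/(1. + b*E)

def depth(E, x=0.):                    # range of one particle, mean free path 1
    while E > 0.:
        dx = -math.log(1. - rnd.random())   # free path from Eq. (11.6)
        E = E - sp(E)*dx
        if E > 0.: x += dx
    return x

ranges = [depth(10.) for _ in range(2000)]
mean = sum(ranges)/len(ranges)         # should be near depth ~ 9
```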

Figure 11.7 shows the range distribution of N = 3000 particles. The beam energy is E = 10. The parameters for the stopping power are chosen as a = 10, b = 2, and the dissipation factor is f = 1/2. The histogram shows that most particles fall in the range between 8 and 10, with a sharp concentration at depth ∼ 9 for this energy. The average depth can be found from

d = ∫₀^d dx = ∫₀^E dE′/S(E′).   (11.10)

Equation (11.10) yields a value d = 8.9 for the parameters used, in good agreement with the observation from Figure 11.7. We have a continuous distribution rather than a single point in Monte Carlo simulations because the particles are scattered randomly. In terms of energy loss, the particles tend to lose less energy in the beginning when they are fast, and more toward the end when they are slow. Therefore, energy deposition, though non-localized, is highest near the peak in the range distribution. This is an important property in radiation physics and dosimetry.

Figure 11.7: The range distribution in 1D particle transport (counts vs. depth, arb. units).
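The integral in Eq. (11.10) can be checked numerically. The sketch below evaluates it by composite Simpson’s rule for the stopping power (11.9) with the parameters above (f = 1/2, a = 10, b = 2); the integrand 1/S is finite at E′ = 0 since S(0) = f:

```python
import math

def sp(E, f=0.5, a=10., b=2.):         # empirical stopping power, Eq. (11.9)
    return f*(1. + a*math.log(1. + b*E))/(1. + b*E)

def mean_depth(E, n=10000):            # Eq. (11.10) by composite Simpson's rule
    h = E/n                            # n must be even
    s = 1./sp(0.) + 1./sp(E)
    for k in range(1, n):
        s += (4. if k % 2 else 2.)/sp(k*h)
    return s*h/3.
```

mean_depth(10.) gives ≈ 8.9, consistent with the value quoted above.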

We also observe a forward-backward asymmetry near the peak. The forward part of the asymmetry is exaggerated by our crude approximation due to the abrupt cut-off of the range once the energy is negative. A more reasonable approach is to calculate the last displacement proportional to the available energy. This would have the effect of moving the particles in the forward peak further back, reducing the asymmetry. Nonetheless, our simple 1D particle transport model predicts the qualitatively correct behavior of concentrated particle density and energy deposition.

11.2.3 Particle transport in 2D

To describe particle transport in higher dimensions, we need to take into account the scattering of particles in different directions. Even at high incident energies, the particles will eventually slow down and straggle from a straight-line path. In this section we will assume central field interactions between the incident particles and the target atoms. In this case, the


particle transport is axially symmetric, and the 3D transport problem is effectively reduced to a 2D problem.

To begin, we need a realistic description of the scattering cross sections. An appropriate model for the scattering potential is the screened Coulomb (Yukawa) potential,

V = Z₁Z₂e² e^(−αr)/r,   (11.11)

where Z₁ and Z₂ are the charges of the particle and the target atom, respectively, and α is the screening constant. For close encounters r → 0, the particle “sees” a bare nucleus and the potential is Coulombic, ∼ 1/r. At large distances, the electrons effectively screen out the target nucleus, and the potential decreases exponentially relative to the Coulomb tail. The effective range is on the order of α⁻¹, called the screening length. The value α = 0 represents an infinite screening length, i.e., no screening at all.

The scattering cross section³ for the potential (11.11) is (see Chapter 12, Eq. (12.13))

dσ/dΩ = (2mZ₁Z₂e²/ℏ²)² [α² + (8mE/ℏ²) sin²(θ/2)]⁻²,   (11.12)

where m and E are the mass and energy of the particle. If α = 0, Eq. (11.12) reduces to the well-known Rutherford scattering cross section Eq. (A:12.9). The sin²(θ/2) factor ensures that the cross section is heavily tilted toward small angles except at very low energies.

With the scattering cross section specified, we can simulate particle transport in 2D as follows. Let (x, y) and θ be the current position and direction of travel of the particle, respectively. For each particle:

1. Sample the distance of travel before the next collision from the exponential distribution (11.6); denote it by s.

2. Select a scattering angle, θ′, from the cross section (11.12) by importance sampling. This angle in the range [0, π] is relative to the current direction of travel (Figure 11.5).

3. Draw a random number to determine whether the particle is scattered to the left or right with equal probability. Update the angle θ → θ ± θ′, depending on the outcome of the previous step. Change the position and energy according to

x → x + s cos θ,   y → y + s sin θ,   E → E − S(E)s.

4. Repeat steps 1−3 until the energy is zero or negative. Record the position of the particle, terminate and return.

³Strictly speaking, this is an elastic scattering cross section, where energy loss is minimum. We are decoupling scattering from energy loss, an approximation.
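The four steps can be sketched as follows. This is a minimal, self-contained illustration: sp() is a hypothetical stand-in for the stopping power (11.9), and sample_theta() implements rejection sampling of the angular part of (11.12) (discussed further in Project P11.6).

```python
import math, random

def sp(E, a=10.0, b=5.0, f=0.2):
    # Hypothetical stand-in for the stopping power (11.9)
    return f*a*math.sqrt(E)/(b + E)

def sample_theta(E, m=100.0, alpha=1.0):
    # Rejection sampling of the angular cross section (11.12),
    # scaled so that its maximum (at theta = 0) equals 1
    while True:
        t = random.uniform(0.0, math.pi)
        w = alpha**4/(alpha**2 + 8.0*m*E*math.sin(t/2.0)**2)**2
        if random.random() < w:
            return t

def transport2d(E, lam=1.0):
    # One particle: steps 1-4 above; returns its final position (x, y)
    x = y = theta = 0.0
    while E > 0.0:
        s = -lam*math.log(random.random())                     # step 1
        dtheta = sample_theta(E)                               # step 2
        theta += dtheta if random.random() < 0.5 else -dtheta  # step 3
        x += s*math.cos(theta)                                 # update position
        y += s*math.sin(theta)
        E -= sp(E)*s                                           # and energy
    return x, y                                                # step 4
```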

Figure 11.8: Typical paths for light (top, mass = 1) and heavy (bottom, mass = 100) particles.

We carry out the above process for each of the N particles, from which we can obtain various results including the depth distribution and energy deposition. Below we discuss representative results, leaving the implementation to Project P11.6. In Figure 11.8 we show typical trajectories for light particles (m = 1) and heavy particles (m = 100). To compare them directly, they are graphed on the same scale. The light particles (e.g., electrons) are more susceptible to large-angle scattering, resulting in a larger spread. One particle suffers a hard collision and is scattered into the reverse direction of the initial travel. In contrast, the heavy particles are scattered mainly in the forward direction. As a result, they remain in nearly straight-line motion for most of the path, and the final distribution is much more localized.

Figure 11.9: The range distribution as a scatter plot (left) and histogram (right, normalized), in 2D particle transport.

In the results shown below, we assume heavy particles only. Figure 11.9 shows the range distribution as a scatter plot in space (left) and as a histogram (right, normalized to 1 at the maximum). The beam energy is E = 5, and the parameters for the stopping power are a = 10, b = 5, and f = 0.2. A total of N = 4000 particles are used to obtain the results.

Most particles are narrowly distributed within the ranges |x − 14| ≲ 1 and |y| ≲ 1. The half angle of spread is about 4°. The overall depth distribution (Figure 11.9, right) is qualitatively similar to the 1D results (Figure 11.7). But in the 2D case, the peak depth is at a larger value despite a smaller particle energy. This is caused by a smaller dissipation factor, f = 0.2, than the 1/2 used in 1D. It may correspond to a smaller target density, or a smaller particle charge, for instance.

Another difference is the asymmetry of the peak, which is much sharper on the front (forward) side. With a smaller f factor, the average energy of the particles near the final steps is smaller, such that the behavior of the stopping power at low energies (nuclear inelastic collisions) is more important in influencing the shape of the peak. Nevertheless, the details only change the quantitative results, and the qualitative features are the same as in 1D particle transport.

As discussed earlier, we expect that the concentrated particle distribution should lead to higher energy deposition in the vicinity of the peak. We show the energy deposition per unit length as a function of depth in Figure 11.10. It is calculated by tracking the energy loss at the point of collision within some interval [x, x + ∆x], accumulated over all particles, and plotted as a histogram normalized to 1 at the peak. The energy deposition increases with depth and reaches the peak value precisely where the range distribution is highest. The increase is caused by the rise of the stopping power with decreasing energy above the maximum (Figure 11.6). The rate of increase is also affected by the dissipation factor f. Generally, a small f corresponds to a slower rise.

Figure 11.10: The normalized energy deposition as a function of depth.

Near the peak, most particles have slowed down to around intermediate energies with maximum stopping power. After the peak, the energy deposition falls quickly due to two factors: smaller particle energies, and an ever-decreasing number of particles beyond the average depth. We see that the behavior of energy deposition is determined to a large extent by the stopping power. Controlling the depth and shape of the energy deposition is very important in applied and radiation physics. The typical shape of the curve in Figure 11.10 is called the Bragg curve. We should bear in mind that our simulation provides a basis for understanding the qualitative behavior of particle transport. To be quantitatively accurate, we would have to use realistic data for the stopping power, often tabulated or parameterized for specific systems.
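The binning described above amounts to a weighted histogram. A minimal sketch with numpy follows; the collision records are illustrative random stand-ins for the (x, ∆E) pairs accumulated during transport.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(14.0, 2.0, size=10000)     # collision depths (illustrative)
dE = rng.uniform(0.0, 0.5, size=10000)    # energy lost at each collision
bins = np.linspace(0.0, 20.0, 41)         # intervals [x, x + dx]
dep, edges = np.histogram(x, bins=bins, weights=dE)  # summed dE per bin
dep = dep/dep.max()                       # normalize to 1 at the peak
```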


11.3 Bose-Einstein condensation

At low temperatures, the kinetic energy and momentum p of particles in an ideal gas are small. The de Broglie wavelength, h/p (h = Planck’s constant), becomes large and comparable to the inter-particle spacing. As a result, the particles behave more like matter waves than classical particles. They can be properly described only by quantum mechanics. We will discuss the behavior of an ideal boson gas at low temperature, focusing on a critical phenomenon known as Bose-Einstein condensation.

Occupancy and density of states

In quantum statistics, it is convenient to count the occupancy of quantum states [32]. For bosons, the number of particles in a single-particle state of energy E obeys the Bose-Einstein distribution

n(E) = 1/(e^(β(E−µ)) − 1),   (µ ≤ 0).   (11.13)

The parameter µ has the dimension of energy and is called the chemical potential. It basically measures the ability of a system to exchange particles. It can be viewed in several different ways. For example, it may be regarded as the rate of change of energy when particles are added to the system while keeping the entropy constant. For bosons, the chemical potential is negative, µ ≤ 0, meaning that the energy of the system must decrease if the number of particles increases and entropy is to remain constant.

Let us consider an ideal gas consisting of N particles confined in a cube of volume V. From Eq. (11.13), the number of particles summed over all occupied states should be equal to N,

N = Σ_(all states) 1/(e^(β(E−µ)) − 1).   (11.14)

The energy of a single particle is E = p²/2m. For systems of macroscopic size, the momentum p can be considered continuous, and the sum (11.14) is converted to an integral. A useful trick for the conversion is to consider the number of states contained in the interval p⃗ to p⃗ + ∆p⃗. To do so, we divide the phase space into discrete states. According to the uncertainty principle, ∆x∆pₓ ∼ h, the number of states in phase space is equal to the phase space


area divided by h, ∫dx ∫dpₓ/h (see Eq. (9.4)). Generalizing it to 3D, we have

Σ_(all states) ⇒ (1/h³) ∫dx dy dz ∫dpₓ dp_y dp_z = (4πV/h³) ∫p² dp.   (11.15)

We can define the density of states in momentum, g(p), or in energy, g(E), as

g(p) = (4πV/h³) p²,   or   g(E) = (2πV/h³)(2m)^(3/2) E^(1/2),   (11.16)

such that Σ = ∫g(p) dp = ∫g(E) dE.

Substituting Eq. (11.16) into (11.14), we obtain

N = ∫₀^∞ g(E) dE/(e^(β(E−µ)) − 1) = (2πV/h³)(2m)^(3/2) ∫₀^∞ E^(1/2) dE/(e^(β(E−µ)) − 1).   (11.17)

Chemical potential and temperature

The integrand in Eq. (11.17) has an explicit temperature dependence through β = 1/kT, but it is clear that the integral must be temperature-independent to conserve the number of particles. This means that the chemical potential µ must depend on temperature in such a way that the integral is a constant.

Figure 11.11 (top) shows the integrand in Eq. (11.17) for a nonzero µ/kT = −1. The area under the curve is proportional to the number of particles. There is a single maximum at E/kT ∼ 0.37. For a given temperature, the maximum value increases if the magnitude of the chemical potential |µ| is decreased. Conversely, for a given µ, the maximum value decreases if the temperature is decreased.

The energy Em where the maximum value of g(Em)n(Em) occurs can be solved in terms of the Lambert W function (Section A:3.4) as (Exercise E11.4)

Em = kT [1/2 + W(−(1/2)e^(βµ−1/2))].   (11.18)

The maximum value of the integrand is approximately proportional to the area. In order to conserve the area (particle number) while the temperature is decreased, the magnitude |µ| must decrease. We can substitute Eq. (11.18) into the integrand to obtain the quantitative dependence of the maximum on |µ| and T. The results are shown in Figure 11.11 (bottom). To stay on a constant contour, the magnitude |µ| decreases with decreasing


Figure 11.11: The integrand in Eq. (11.17) (top) and the maximum value (bottom) as a function of |µ| and kT for a Bose gas.

temperature. We use a logarithmic scale for |µ| to show the behavior at small values. At low temperature, the vertical drop in µ becomes sharp. If the temperature continues to decrease, there must be a critical point, Tc, at which the chemical potential is zero, µ = 0.

For convenience, we rewrite Eq. (11.17) in reduced units ε = βE,

N = (2πV/h³)(2mkT)^(3/2) I,   I = ∫₀^∞ ε^(1/2) dε/(e^(ε−µ/kT) − 1).   (11.19)

We can find the condition for the critical temperature from Eq. (11.19) by setting µ(Tc) = 0 to obtain

N = (2πV/h³)(2mkTc)^(3/2) I₀,   I₀ = ∫₀^∞ ε^(1/2) dε/(e^ε − 1) = 2.31516.   (11.20)

Solving for Tc, we have

kTc = (h²/2m) (N/(2πI₀V))^(2/3).   (11.21)

Assuming a particle mass of ∼ 100 amu and 1000 particles trapped in a volume of one cubic micron, the critical temperature is Tc ∼ 10⁻⁷ K.⁴
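A back-of-the-envelope evaluation of Eq. (11.21) in SI units, with the trap parameters quoted above, confirms the sub-microkelvin scale:

```python
import math

h = 6.626e-34                      # Planck's constant (J s)
kB = 1.381e-23                     # Boltzmann constant (J/K)
m = 100*1.6605e-27                 # ~100 amu in kg
N, V, I0 = 1000, 1e-18, 2.31516    # 1000 particles in one cubic micron

Tc = h**2/(2.0*m*kB)*(N/(2.0*math.pi*I0*V))**(2.0/3.0)
```

This gives Tc on the order of 10⁻⁶–10⁻⁷ K, in the regime accessible to atom traps.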

Figure 11.12: Temperature dependence of the chemical potential of an ideal Bose gas.

The general relationship between the chemical potential and temperature for T > Tc can only be found numerically. Combining Eqs. (11.19) and (11.21), we eliminate the constants other than Tc and I₀ as

I₀ = (T/Tc)^(3/2) I.   (11.22)

⁴Cold temperatures on the order of nanokelvins can be achieved in laboratories via atom traps.


The integral I in (11.19) is a function of µ. We can solve Eq. (11.22) with a root finder (Project P11.7). The results are displayed in Figure 11.12. As expected, the chemical potential is negative above Tc, and its magnitude decreases with decreasing temperature. Except very close to Tc, the rate of decrease is faster than linear.
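A sketch of such a root finder (the approach of Project P11.7): evaluate I by quadrature with the substitution ε = x², which removes the square-root singularity at the origin, then bisect on u = µ/kT until Eq. (11.22) is satisfied. The bracket [−50, 0) is an assumption that covers the temperatures of Figure 11.12.

```python
import math
from scipy.integrate import quad

I0 = 2.31516

def I(u):
    # I of Eq. (11.19) with u = mu/kT < 0; after eps = x^2 the
    # integrand becomes 2x^2/(e^{x^2 - u} - 1), regular at x = 0
    val, _ = quad(lambda x: 2.0*x*x/(math.exp(x*x - u) - 1.0), 0.0, 10.0)
    return val

def mu_over_kT(t, tol=1e-8):
    # Bisection on f(u) = I0 - t^{3/2} I(u) for t = T/Tc > 1
    f = lambda u: I0 - t**1.5*I(u)
    a, b = -50.0, -1e-12           # f(a) > 0 and f(b) < 0 for t > 1
    while b - a > tol:
        c = 0.5*(a + b)
        if f(c) > 0.0:
            a = c
        else:
            b = c
    return 0.5*(a + b)
```

For example, mu_over_kT(2.0) returns the scaled chemical potential at T = 2Tc; multiplying by T/Tc converts it to the µ/kTc units plotted in Figure 11.12.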

Phase transition

Once the temperature crosses below Tc, the chemical potential is identically zero. What if the temperature is lowered further? Since µ cannot be positive for an ideal Bose gas,⁵ we have a dilemma on our hands: the integral I in Eq. (11.19) tops out at I₀, but the factor T^(3/2) will reduce the value of N. In other words, the number of particles seems to decrease. Where have the particles disappeared to?

The answer rests on the explanation of this critical phenomenon, predicted by Bose and Einstein in the 1920s and subsequently known as Bose-Einstein condensation (BEC). Only in the 1990s was BEC observed in dilute, weakly interacting, ultracold boson gases [1]. BEC predicts that the disappearing atoms are not lost; rather, they fall into the ground state at E = 0. Evidently, the summation-to-integration conversion (11.15) is correct for all excited states but excludes the ground state. This is because the density of states (11.16) vanishes precisely at the ground state energy, g(E = 0) = 0. The integral in Eq. (11.17) includes excited states E > 0 only. To account for the excluded particles, the ground state and excited states should be counted separately.

Let Ng and Ne be the numbers of particles in the ground and excited states, respectively, such that the total number is N = Ng + Ne. According to the BEC interpretation, condensation occurs at Tc. Above the condensation temperature, all (or practically all) particles are in the excited states. Once below Tc, a condensate (Bose gas) is formed; it is a mixture of which the ground state is a significant part. In the condensate, Eq. (11.19) represents the number of particles, Ne, in the excited states only.

⁵Technically, the partition function for a Bose gas diverges unless the chemical potential is negative. Physically, it is not possible to increase the energy of the gas by adding particles and simultaneously keep the entropy constant.

Figure 11.13: The fraction of particles in the ground state (Rg = Ng/N) and excited states (Re = Ne/N) in Bose-Einstein condensation.

Rewriting Eq. (11.19) in terms of Tc, we have

Ne = N,               T ≥ Tc,
Ne = N (T/Tc)^(3/2),  T < Tc.   (11.23)

The number of particles in the ground state is

Ng ∼ 0,                    T ≥ Tc,
Ng = N[1 − (T/Tc)^(3/2)],  T < Tc.   (11.24)

The ratios of the numbers of particles to the total number are shown in Figure 11.13. From the moment the condensate begins to form, atoms flow from the excited states to the ground state at the rate (T/Tc)^(3/2). The cross-over occurs at T/Tc = 2^(−2/3) ∼ 0.63. At T = 0, all atoms end up in the ground state. It is a single, collective state in which atoms lose their identity (very large de Broglie wavelength) in a fully coherent condensate.

We have discussed how, technically, the atoms split between the ground and the excited states in a condensate. We may ask why a significant fraction of the atoms should not be in the ground state at any temperature, as the Boltzmann factor dictates. At higher temperatures, the number of available single-particle states is large. Even though the ground state still


has the largest Boltzmann factor, the number of excited states (the number of possible particle arrangements) is so large compared to the number of particles that it overcomes the Boltzmann factor. As a result, most particles are in the excited states. This is the case, for example, in a kinetic gas described by the Maxwell speed distribution (see Section A:11.4.4).

At lower temperatures, the number of available single-particle states is reduced. A qualitative change occurs when it is smaller than the number of particles. This severely limits the number of excited states, against which the Boltzmann factor for the ground state becomes competitive. The ground state starts to be occupied significantly below the condensation temperature. So, it all boils down to combinatorics and the state-counting of identical particles.

If you find these arguments as to why BEC happens less than adequate or satisfying, you would be justified. Although we have technically characterized BEC, a fuller understanding requires a more fundamental quantum mechanical description of N-body systems, which is a more challenging task than classical descriptions such as molecular dynamics. Even in the mean field theory, we would have to tackle a nonlinear Schrödinger equation known as the Gross-Pitaevskii equation. Nevertheless, the experimental realization of BEC in a weakly interacting gas, long regarded by some as the holy grail of atomic physics, makes it no longer just a nice example of mathematical tricks but an exotic state of matter made reality.

Chapter summary

We used simulated annealing to study the suspended chain and the traveling salesman problems by treating them as thermal relaxation processes. We presented Monte Carlo simulations of a many-body problem of a different kind, namely particle transport in 1D and 2D. We showed that the qualitative behavior in particle transport, including concentrated range distribution and energy deposition, can be obtained by properly modeling the displacements, stopping power, and scattering cross sections in 2D. Finally, we explored the behavior of an ideal boson gas and the technical aspects underpinning the critical phenomenon of Bose-Einstein condensation.


11.4 Exercises and Projects

Exercises

E11.1 (a) Expand the mean field magnetization (11.28) near the critical temperature (11.30), and show that

s = A (1 − T/Tc)^(1/2).

Also determine A.

(b) Obtain the energy and heat capacity per spin in the mean field approximation. Show that they are zero for T ≥ Tc. Derive the low temperature limit E(T → 0). Make sure to account for the fact that the energy is shared mutually between interacting pairs.

(c) Derive the mean field entropy, and plot it as a function of temperature. Comment on the behavior around Tc.

E11.2 Calculate the second virial coefficient B₂(T) from Eq. (A:11.43) for the Lennard-Jones potential. First plot the integrand, and show that for practical purposes the upper limit can be safely set to 3r₀. Evaluate the integral using Gaussian integration.

Calculate and plot B₂(T) as a function of T from 10⁻² to 10² on a semi-logarithmic T scale. Remember to increase T by a constant multiplication factor (say 1.2). Comment on the results, and the behavior at large T.

E11.3 Find the position and height of the peak in the empirical formula for the stopping power (11.9), and show that the height is proportional to a and the position approximately inversely proportional to b.

E11.4 (a) Find the maximum of the integrand in Eq. (11.17), and show that it occurs at the energy Em given by Eq. (11.18).

(b) Using the properties of the Lambert W function (Section A:3.4), prove the limiting values Em(T → 0) ∼ (1/2)kT and Em(T → ∞) ∼ −µ. The derivative of W may be useful for the latter; see Exercise A:E3.10.


(c) Plot the maximum value of the integrand in terms of µ and kT as surface and contour plots (Figure 11.11). Set the lower limit on |µ| to 10⁻² or smaller. Plot them on linear scales first, followed by a semilog scale on |µ|. Discuss the difference.

Projects

P11.1 Study an antiferromagnetic system represented by the 2D Ising model. In antiferromagnetism, ε < 0, and anti-parallel spins are preferred. Predict what the microstates (Figure A:11.14) look like at low and high temperatures.

(a) Modify Program A:11.6 or use codes developed elsewhere (e.g., from Project A:P11.5) to simulate the antiferromagnetic system. Set N = 32, ε = −1. Generate Ising maps analogous to Figure A:11.14. Comment on the differences compared to the ferromagnetic system. Is there any indication of a phase transition? If so, in what temperature range?

(b) Calculate and plot the energy, magnetization, and heat capacity as a function of temperature. Let the system reach equilibrium over a few hundred passes, and sample your data over a similar number of passes. Does the magnetization support your conclusion from above regarding a phase transition? What about the heat capacity?

Can your results be adequately explained in terms of the mean field approximation?

(c) Calculate the so-called staggered magnetization. The idea derives from the observation that at low temperature, the antiferromagnetic map looks like a checkerboard, with up and down spins alternating row- and column-wise (Figure 11.14).

Compute the staggered magnetization as

MS = Σ_(i,j) w_(i,j) s_(i,j),

where w_(i,j) = (−1)^(i+j) is the weighting function. Plot 〈MS〉 (or |〈MS〉|) per spin. Explain your results. Collecting all evidence so far, is there, or is there not, a phase transition?
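With the spin configuration stored as a numpy array, the staggered sum is a one-liner. A quick sketch follows, using a perfect checkerboard as a sanity check (it should give MS per spin = 1, and near zero for random spins):

```python
import numpy as np

def staggered(s):
    # M_S = sum_{i,j} w_{ij} s_{ij} with w_{ij} = (-1)^{i+j}
    i, j = np.indices(s.shape)
    return np.sum((-1.0)**(i + j)*s)

i, j = np.indices((32, 32))
s_afm = (-1.0)**(i + j)                   # checkerboard, spins = +-1
ms_per_spin = staggered(s_afm)/s_afm.size # should equal 1
```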


Finally, compare the numerical and analytic results from Section A:11.B for ε = −1.

Figure 11.14: The antiferromagnetic Ising model at low temperature (kT = 1.0).

P11.2 Consider a 3D cubic lattice formed by stacking square lattices on top of each other at the same spacing. Let N be the linear dimension of the lattice. The number of spins in the system is N × N × N.

(a) Write a program to simulate the 3D Ising model. Store the cube in a 3D list, s(i, j, k), for speed. Assume nearest neighbor interactions as before. The six neighbors are left/right, up/down, and front/back. Modify ∆E from Eq. (A:11.26), assuming periodic boundary conditions.

Test your program with a small N = 16. After testing successfully, run your simulation with N = 32. Obtain the thermodynamic averages including energy, magnetization, and heat capacity as a function of temperature between 0.2 and 6. Use small enough steps so the data points are reasonably dense and smooth apart from thermal scatter. Before running longer jobs, find out typical equilibration times (see Figure A:11.9) for the 3D model, particularly at low temperatures.

Plot your results and discuss them relative to the 2D case. Determine the critical temperature, and compare with the mean field prediction.

Push the lattice size N as high as you can near the critical temperature. Investigate the dependence of the critical temperature on the size of the lattice.


Figure 11.15: The 3D Ising model.

(b) Create a visual representation of the cubic lattice with VPython. Represent spins as transparent spheres (with the opacity attribute) of two colors depending on their orientation (Figure 11.15). Simulate the system just below and above the critical temperature. After equilibrium, examine the cubic lattice periodically. Compare the face domains with those of the 2D Ising model in Figure A:11.9.

Also examine the connectedness of large domains by turning off down spins (set the visible attributes to 0). Are there clusters connecting all six faces?

P11.3 The mean field approximation allows us to obtain analytic results for the Ising model easily and helps to explain its qualitative properties. However, it is still interesting to study its numerical behavior, including the rate of convergence.

Simulate the 2D Ising model using the mean field theory. Keep everything the same as in Project A:P11.5, but calculate the change of energy for a trial flip as ∆E = 2nεs, where n = 4 and s is the average spin of the system before the flip. Calculate the magnetization and heat capacity as a function of temperature in [0.1, 6]. Compare the sharpness of the phase transition and how quickly the results converge with those of the exact simulation.


What is unusual about the heat capacity near the critical temperature? Why?

Does hysteresis occur in the mean field approximation? Add a term 2ηs to ∆E and run the simulation according to the descriptions in Project A:P11.5. Discuss and compare your results with those of the project if available.

P11.4 Solve the traveling salesman problem by simulated annealing.

(a) Write a program to simulate an N-city system by Metropolis sampling according to the energy variation (11.4). Randomly generate the positions of the cities over the unit square, and store the (x, y) coordinates in two separate lists.

In each iteration, randomly pick two cities to swap, excluding the starting city and identical picks. Apply the Metropolis algorithm to accept or reject the swap. Design a check to make sure the system has reached “thermal equilibrium” at a given temperature before moving on to a lower temperature. For instance, monitor the average total distance over some (adequate) number of iterations, and move on only when successive averages are within some allowed range. Decrease the temperature by a factor, e.g., by half each time.

First simulate a small system, say N = 5, and start from T = 1. When the simulation works, increase to N = 100. During the simulation, record the lowest distance sampled, and compare it with the final distance.

Use VPython to animate the relaxation process. Add some key detection statements so you can pause the program, and lower or raise the temperature by key presses. Visually inspect the routes to ensure that your automatic equilibrium check works as designed.

(b) Modify the standard traveling salesman problem to add additional features. For example, divide the domain into two halves, east and west, by a river. Each time the river is crossed, a toll or an incentive is effected. The energy (11.2) is modified as

E = Σᵢ { [(xᵢ − xᵢ₊₁)² + (yᵢ − yᵢ₊₁)²]^(1/2) + w(rᵢ − rᵢ₊₁)² },


where w is a weighting factor, positive for tolls and negative for incentives. The region parameter rᵢ denotes which side of the river city i is on (1 east, −1 west).

Predict what effect this will have on route selection. Solve the problem by setting w = 1, 0.5, and −1. Plot the equilibrium distances as a function of temperature.

P11.5 Study the behavior of range distributions and energy deposition in 1D particle transport when different key parameters are used.

(a) The forward-backward asymmetry of the peak can be improved as follows. Instead of taking the last position as the depth just before the energy becomes negative, linearly extrapolate the step as ∆x = E/S(E/2), and add ∆x to the current position. Modify depth() in Program 11.2 and run it. Discuss the change in the results.

If the dissipation factor f is reduced, how will the range distribution change? Run Program 11.2 with f = 0.2, and comment on the results.

(b) Calculate the depth as a function of energy E. At each E, produce the depth profile, and take the highest point as the depth. Put this in a separate function which returns the depth for a given E. Calculate the results for E = 1 to 100. Also calculate the theoretical average depth from Eq. (11.10). Plot both the simulation and theoretical results. Discuss and compare them.

(c) Calculate energy deposition as a function of depth. Divide the maximum depth into a number of bins. Assume that the energy loss ∆E is deposited locally at the current position, and add ∆E to the appropriate bin. Plot the results as a function of depth, and compare them to the 2D case (Figure 11.10, qualitatively).

P11.6 Write a program to simulate particle transport in 2D. Implement the algorithm described in Section 11.2.3, using Program 11.2 as a basic template.

Sample the scattering angle according to the cross section (11.12) by the rejection method (Section A:10.B). Since it is the overall distribution that matters, we can ignore the prefactor and sample only the angular dependence as

σ(θ) = α⁴ [α² + 8mE sin²(θ/2)]⁻².

We have dropped ℏ, which is absorbed into the units used. The extra factor α⁴ ensures that the maximum of σ(θ) is 1, so the rejection method may be applied readily. This should be a separate function. Test the sampled distribution to be sure it is correct.
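A sketch of such a function (names hypothetical), with the scaled σ(θ) as the acceptance probability:

```python
import math, random

def sigma(theta, E, m=100.0, alpha=1.0):
    # Angular dependence of Eq. (11.12), scaled so that sigma(0) = 1
    return alpha**4/(alpha**2 + 8.0*m*E*math.sin(theta/2.0)**2)**2

def sample_theta(E):
    # Rejection method: propose theta uniformly on [0, pi] and accept
    # with probability sigma(theta, E) <= 1
    while True:
        theta = random.uniform(0.0, math.pi)
        if random.random() < sigma(theta, E):
            return theta
```

A quick check is to histogram many samples and overlay the normalized σ(θ); at E = 5 and m = 100 the distribution should come out sharply forward peaked.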

(a) Run the program with the following parameters: E = 5, m = 100, a = 10, b = 5, f = 0.2, and α = 1. Check the individual paths to ascertain their reasonableness. Once the program works, run it for sufficiently large N ∼ 5000 to calculate the range distribution and energy deposition.

Predict how the results will change at lower or higher energies. Repeat the calculations at E = 2 and 10.

(b) In a more realistic depiction, the mean free path depends on the energy of the particle as well. Simulate this effect by assuming λ = λ₀ + γE, where γ is a small positive constant. What are the effects on the range distribution and the energy deposition? Run the simulation with an energy-dependent mean free path (A:10.46) with λ₀ = 1 and γ = 0.1. Discuss the results.

P11.7 Investigate Bose-Einstein condensation of an ideal Bose gas.

(a) Plot the integrand of I in Eq. (11.19) for different values of the ratio µ/kT, e.g., from −2 to −0.2, for half a dozen curves or so. How does the area change as the magnitude of the ratio decreases?

(b) Compute the values of I for the ratios above, using Gaussian integration (Section A:8.A). Integrate over ε, choosing the upper limit so that the integrals are converged (to within 1%). You may need to break up the range into subintervals if the upper limits are large. Test convergence with the known result I₀ (11.20). Note the slow convergence.

Next, make a change of variable ε = x². Perform the above numerical integrations again. The convergence and accuracy should be much improved. Discuss the reason (see Eq. (A:8.79)).
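The effect of the substitution can be seen in a small self-contained test; the Gauss-Legendre routine below stands in for the book's integral module, and the upper limit 20 is an illustrative cutoff:

```python
import numpy as np

# I = integral of sqrt(eps)/(e^eps - 1) = Gamma(3/2)*zeta(3/2) ~ 2.3151574
I0 = 2.3151574
xg, wg = np.polynomial.legendre.leggauss(50)

def gauss(f, a, b):                 # Gauss-Legendre quadrature on [a, b]
    return 0.5*(b - a)*np.sum(wg*f(0.5*(b - a)*xg + 0.5*(a + b)))

Ie = gauss(lambda e: np.sqrt(e)/(np.exp(e) - 1.), 0., 20.)        # raw form
Ix = gauss(lambda x: 2.*x*x/(np.exp(x*x) - 1.), 0., np.sqrt(20.)) # eps = x^2
print(Ie, Ix)                       # Ix is far more accurate
```

The raw integrand behaves like ε^(−1/2) near the origin, which polynomial quadrature handles poorly; the substitution ε = x² turns it into the smooth function 2x²/(e^(x²) − 1).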


(c) Calculate the chemical potential as a function of temperature from Eq. (11.22). Define a function

f(µ) = I0 − I(µ),

where the µ dependence in I is made explicit. Solve the equation with the bisection method (Section A:3.A). Use scaled units kTc and Tc for the chemical potential and temperature, respectively. Evaluate the integral I over the variable x in the range [0, 5] discussed above.

You will need to pass the value of µ from f(µ) to the integrand for numerical integration, because the Gaussian integral expects one argument. The easiest way is to declare µ a global variable in f(µ). For example, the function fmu() may be written as:

import integral as itg
......

def fmu(y):
    global mu
    mu = y
    return I0 - T**(1.5)*itg.gauss(fint, 0., 5.)

The variable mu is declared to be global so it can be changed to the value of the dummy variable y and be accessible to the integrand function, named fint in this case.

Calculate µ for T/Tc = 1.1 to 4, and plot the results (Figure 11.12). Compare the shape with the results of Exercise E11.4 if available, or with Figure 11.11.
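A minimal self-contained sketch of this solve, in scaled units (µ in kTc, T in Tc); the quadrature, the bisection bracket, and the node count are illustrative choices standing in for the book's integral and root-finder modules:

```python
import numpy as np

I0 = 2.3151574                      # Gamma(3/2)*zeta(3/2)
xg, wg = np.polynomial.legendre.leggauss(100)
x5, w5 = 2.5*(xg + 1.), 2.5*wg      # nodes and weights mapped to [0, 5]

def fmu(mu, T):                     # f(mu) = I0 - T^{3/2} I(mu), eps = x^2
    fint = 2.*x5*x5/(np.exp(x5*x5 - mu/T) - 1.)
    return I0 - T**1.5*np.sum(w5*fint)

def mu_of(T, a=-20., b=-1e-12):     # bisection; f changes sign on [a, b]
    for _ in range(80):
        m = 0.5*(a + b)
        if fmu(a, T)*fmu(m, T) > 0.: a = m
        else: b = m
    return 0.5*(a + b)

print(mu_of(1.5), mu_of(2.0))       # mu drops below zero above Tc
```

Note that passing mu as an explicit argument avoids the global variable of fmu() above; the trade-off is that the integrand can no longer be handed directly to a one-argument quadrature routine.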

Explore the behavior of µ for T ∼ Tc (T/Tc = 1 to 3/2) and T/Tc ≫ 1 (up to 20). In each case, plot the numerical results in the respective range (use enough points to make the curves smooth). What function (e.g., power law) can adequately describe the data? Support your argument with analytic work.

11.A Mean field approximation of 2D Ising model

The coupling, or correlation, between the dipole interactions in the 2D Ising model makes it difficult to solve analytically. The mean field approximation


simplifies the problem and helps us to understand the qualitative properties. Like the central field approximation (see Chapter A:4, Eq. (A:4.30)), we replace the exact interactions by their averaged effects in the mean field approximation.

Let n be the number of neighbors surrounding a central spin (Figure A:11.13), and let s be the average spin of the neighbors (−1 ≤ s ≤ 1). The energy of the central spin is, from Eq. (A:11.25),

E± = ∓nεs, (11.25)

where E± refer to the energy of a spin pointing up and down, respectively. Like the paramagnetic system (Exercise A:E11.6), the Boltzmann factors of this two-state system are

P(E±) = (1/Z) exp(−βE±),  Z = exp(−βE₊) + exp(−βE₋), (11.26)

with Z being the partition function. We can obtain the average spin from Eqs. (11.25) and (11.26) as

〈s〉 = 1 × P(E₊) + (−1) × P(E₋) = tanh(βnεs). (11.27)

Because all spins are equivalent in the mean field approximation, they must have the same average value, 〈s〉 = s. Substituting this into Eq. (11.27), we have

s = tanh(βnεs). (11.28)

We can view Eq. (11.28) as defining the value of s for a given temperature. Regardless of temperature, the value s = 0 is always a solution. Other nonzero solutions are not obvious because Eq. (11.28) is a transcendental equation. An easier way is to not think of s as caused by temperature. Rather, we view Eq. (11.28) as defining the value of β (temperature T) for a given s.6 We can solve for T analytically for nonzero s to obtain

kT/nε = 2s / ln[(1 + s)/(1 − s)]. (11.29)

6This is analogous to viewing a thermometer not in terms of temperature causing the expansion of mercury, but rather as the expanding mercury producing a particular reading of temperature.


Figure 11.16: Temperature as a function of average spin (11.29) (left) and the reversed graph (right) in the mean field approximation. [The left panel plots kT/nε versus s, with the stable branches and the unstable s = 0 branch below Tc marked; the right panel plots s versus kT/nε.]

Figure 11.16 displays the solutions (11.29) graphically. Except for s = 0, there is a well defined temperature for a given s ≠ 0. The solutions are symmetric about s = 0. Starting from s = ±1, the temperature rises from zero to a critical value, kTc/nε = 1, where the solutions converge to s = 0. On either side, the solutions are stable, because s ≠ 0 corresponds to a lower energy. Below Tc, the s = 0 solution is unstable since a small fluctuation will cause the system to collapse to either side with a finite s. Above Tc, only the s = 0 solution remains. It is stable in the sense that it is the only solution possible.

Turning the graph around (Figure 11.16, right), we see a phase transition in s at the critical temperature Tc,

kTc/ε = n. (mean field) (11.30)

Starting from s = 0 at higher temperatures, the value of s remains zero until Tc, where it branches either up or down to become finite below Tc.
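The branching can be checked numerically. A small sketch iterating s = tanh(s/t), with t = kT/nε, and inverting through Eq. (11.29) (the starting value and tolerance are arbitrary choices):

```python
import numpy as np

def spin(t):                        # fixed-point iterate s = tanh(s/t)
    s = 1.0                         # start from the fully ordered state
    for _ in range(20000):
        s_new = np.tanh(s/t)
        if abs(s_new - s) < 1e-13: break
        s = s_new
    return s_new

s_low, s_high = spin(0.5), spin(1.5)    # below and above t = 1
t_back = 2.*s_low/np.log((1. + s_low)/(1. - s_low))  # invert, Eq. (11.29)
print(s_low, s_high, t_back)        # finite s below Tc, zero above
```

At a fixed point, s = tanh(s/t) is algebraically equivalent to Eq. (11.29), so t_back recovers the input temperature; above Tc the iteration collapses to s = 0.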

The mean field approximation correctly predicts the occurrence of a phase transition in the 2D Ising model. Quantitatively, setting n = 4 for four nearest neighbors, we have kTc/ε = 4, which is nearly twice the exact value 2.27. We attribute the inaccuracy of the mean field approximation to the fact that the spin-spin correlation is neglected. It has been found that the agreement improves with increasing n for other Ising systems [15]. Another difference is the critical behavior of s near Tc, which scales like (1 − T/Tc)^(1/2). The critical exponent is 1/2, compared to 1/8 from the exact result (A:11.31). The mean field result gives a shallower rise of magnetization below Tc.

Curiously, the mean field energy of the Ising model above Tc is zero no matter how high the temperature is. Furthermore, since the critical temperature depends on n only, the mean field approximation predicts a phase transition at Tc = 2 even for the 1D Ising model, where there is none. We regard both as artifacts due to the lack of correlation and of consideration for the dimensionality of space.

The mean field analysis is equally applicable to antiferromagnetic systems where ε < 0. In that case, s = 0 is the only solution for positive temperatures. This turns out to be the correct solution (Project P11.1).

11.B Program listings and descriptions

Program listing 11.1: Thermal relaxation of a hanging chain (relaxtd.py)

import random as rnd, numpy as np
import matplotlib.pyplot as plt

def update(N, x, y, d, kT, E):
    i, move = rnd.randint(1, N-2), 0
    xt, yt = x[i]+rnd.gauss(0., sd), y[i]+rnd.gauss(0., sd)
    dl = np.sqrt((xt-x[i-1])**2 + (yt-y[i-1])**2)    # left/right dist
    dr = np.sqrt((xt-x[i+1])**2 + (yt-y[i+1])**2)
    d2 = (dl-L)**2 + (dr-L)**2 - (d[i-1]-L)**2 - (d[i]-L)**2
    dE = 0.5*spring_k*d2 + mass*g*(yt - y[i])        # pot. energies
    if (dE < 0.0): move = 1                          # Metropolis algorithm
    else:
        p = np.exp(-dE/kT)
        if (rnd.random() < p): move = 1
    if (move == 1):
        E = E + dE
        x[i], y[i] = xt, yt
        d[i-1], d[i] = dl, dr
    return E

spring_k, L = 20.0, 1.0             # spring const., relaxed length
E, mass, g, N = 0., .1, 9.8, 41     # E, mass, g, num. of particles
sd, kT, Nmc = L/4., 10.0, N*1000    # sd = Gaussian std. dev.

x, y, d = [0.]*N, [0.]*N, [L]*N     # x,y pos., d = particle distance
for i in range(N): x[i] = i*L

plt.figure()
plt.plot([0, x[-1]], [0, 0], ':*', markersize=15)    # end markers
for i in range(13):
    if (i <= 4) or (i > 4 and i%4 == 0):             # thin out plots
        plt.plot(x, y, '-o', label=repr(i) if i != 0 else '')
    for j in range(Nmc):                             # equilibrate
        E = update(N, x, y, d, kT, E)
    kT = kT/2.

legend = plt.legend(loc='lower right')
legend.draw_frame(False), plt.axis('off')
plt.show()

The program simulates the relaxation of a freely hanging chain by thermal relaxation, or simulated annealing. The function update() uses Metropolis sampling as explained in the text. The main code initializes the particle chain to the grid (x_i, y_i) = (iL, 0), for i = 0 to N − 1, where L is the relaxed length of the spring.

It then loops through the temperature kT, halving it after each iteration. At a given kT, the system equilibrates over Nmc Monte Carlo samplings. The results are plotted in the next iteration.

Program listing 11.2: Particle transport in 1D (transport.py)

import matplotlib.pyplot as plt
import numpy as np, random as rnd

def sp(E):                          # stopping power
    a, b, f = 10., 2., 0.5          # f = dissipation factor
    return f*(1. + a*np.log(1. + b*E))/(1. + b*E)

def depth(E, x=0.):                 # depth at E, default x=0
    while E > 0.:
        dx = -np.log(rnd.random())
        E = E - sp(E)*dx
        if (E > 0.): x += dx
    return x

N, E = 2000, 10.                    # num particles, beam energy
nbin, x = 200, []                   # num bins, ranges array

for i in range(N):
    x.append(depth(E))

plt.figure()
count, bins = np.histogram(x, nbin, (0., max(x)))
h = bins[1] - bins[0]
plt.plot(bins[1:] - h/2, count, 'o')
plt.xlabel('depth'), plt.ylabel('count')
plt.show()

The range distribution for particle transport in 1D can be calculated with Program 11.2. It assumes the empirical stopping power (11.9) calculated in sp(E). The function depth(), explained in the text, follows one particle until it loses all its energy, and returns the depth from the cumulative displacements.

The main code calls depth() repeatedly for N particles. The range histogram and bin edges, count and bins respectively, are obtained via the NumPy function histogram(). The results are plotted at the midpoints of the bins.

Program listing 11.3: N-body dynamics in Fortran (nbodyf.f)

c     nbody() for F2Py, see mdf2py.py
      subroutine nbody(id, r, v, t, a, n)
      real*8 r(n,3), v(n,3), t, a(n,3)
      real*8 L, HL, rij(3), r2, r6, f, fij
      integer id, n, i, j, k
      common /para/ L, HL
c     F2Py directives
CF2Py intent(in) r
CF2Py intent(in) v
CF2Py intent(out) a
CF2Py integer intent(hide),depend(r) :: n=shape(r,0)
      if (id .eq. 0) then
         do i = 1, n
            do k = 1, 3
               a(i,k) = v(i,k)
            enddo
         enddo
      else
         do i = 1, n
            do k = 1, 3
               a(i,k) = 0.d0
            enddo
         enddo
         do i = 1, n
            do j = i+1, n
               r2 = 0.d0
               do k = 1, 3
                  rij(k) = r(i,k) - r(j,k)
                  if (rij(k) > HL) then
                     rij(k) = rij(k) - L
                  elseif (rij(k) < -HL) then
                     rij(k) = rij(k) + L
                  endif
                  r2 = r2 + rij(k)*rij(k)
               enddo
               r6 = r2*r2*r2
               f = 12.d0*(1.d0/r6 - 1.d0)/(r6*r2)
               do k = 1, 3
                  fij = f*rij(k)
                  a(i,k) = a(i,k) + fij
                  a(j,k) = a(j,k) - fij
               enddo
            enddo
         enddo
      endif
      return
      end

c     keep particle in box
      subroutine boxin(r, n)
      real*8 r(n,3), L, HL
      integer n, i, k
      common /para/ L, HL
CF2Py intent(in,out) r
CF2Py integer intent(hide),depend(r) :: n=shape(r,0)
      do i = 1, n
         do k = 1, 3
            if (r(i,k) > L) then
               r(i,k) = r(i,k) - L
            elseif (r(i,k) < 0.d0) then
               r(i,k) = r(i,k) + L
            endif
         enddo
      enddo
      return
      end

This is a Fortran code, written with F2Py extensions (part of NumPy), that computes the dynamics of an N-body system with Lennard-Jones potentials. The F2Py directives tell the compiler that r and v are input arrays, but a is an output array, so it is to be returned upon exiting the subroutine, and must be hidden (omitted) in the argument list by the caller. In effect, F2Py makes sure that array a will be created in the subroutine and returned to the caller when finished, though in plain Fortran the subroutine would not return anything. Also hidden is n, which derives its value from the shape of the input array. The common block can also be used to pass data. So this F2Py version of nbody() is equivalent to nbody() in Program 11.4 for molecular dynamics simulations (Section A:11.4) in functionality and in form (admittedly less readable and 3 times longer than its Python cousin). However, it is much faster than the Python version after compilation, and solves the bottleneck problem discussed in Section A:1.3.2.

To compile the code with F2Py so it is callable from Python, issue the following command from the terminal (example for MinGW):

c:\python27\scripts\f2py.py -c --fcompiler=gnu95 --compiler=mingw32 -m nbodyf nbodyf.f

It will generate a compiled module nbodyf.pyd, which can be imported into Python. See Program 11.4 below.


Program listing 11.4: Molecular dynamics with F2Py (mdf2py.py)

    import ode, random as rnd, time
  2 import numpy as np, nbodyf                 # import compiled nbodyf

  4 def nbody(id, r, v, t):                    # N-body MD
        if (id == 0):                          # velocity
  6         return v
        a = np.zeros((N,3))                    # acceleration
  8     for i in range(N):
            rij = r[i] - r[i+1:]               # rij for all j>i
 10         rij[rij > HL] -= L                 # periodic bc
            rij[rij < -HL] += L
 12         r2 = np.sum(rij*rij, axis=1)       # |rij|^2
            r6 = r2*r2*r2
 14         for k in [0,1,2]:                  # L-J force in x,y,z
                fij = 12.*(1. - r6)*rij[:,k]/(r6*r6*r2)
 16             a[i,k] += np.sum(fij)
                a[i+1:,k] -= fij               # 3rd law
 18     return a

 20 L, N = 10.0, 100                           # cube size, num. atoms
    atoms, HL, t, h = [], L/2., 0., 0.002
 22 r, v = np.zeros((N,3)), np.zeros((N,3))
    nbodyf.para.l, nbodyf.para.hl = L, HL      # common block param.

    rnd.seed(1234)
 26 for i in range(N):                         # initial pos, vel
        for k in range(3):
 28         r[i,k] = L*rnd.random()
            v[i,k] = 1 - 2*rnd.random()
 30 v -= np.sum(v, axis=0)/N                   # center of mass frame

 32 t1 = time.time()
    if (1):                                    # time F2Py code
 34     for j in range(10000):
            r, v = ode.leapfrog(nbodyf.nbody, r, v, t, h)  # use F2Py code
 36         r = nbodyf.boxin(r)
    else:
 38     for j in range(10000):
            r, v = ode.leapfrog(nbody, r, v, t, h)
 40         r[r > L] -= L                      # periodic bc
            r[r < 0.] += L

    t2 = time.time()
 44 print(t2 - t1)

Slightly modified from Program A:11.7, this program records the execution times when the core N-body dynamics is computed with either standard Python or Fortran with F2Py extensions (Program 11.3). Data can be passed between Python and named common blocks in F2Py (line 23). Note the contrast between Python and Fortran: a pair of concise statements (line 40) does the same job as subroutine boxin().

Because the calculation of forces among the particles is the most computationally intensive part of an N-body simulation, programming it in a compiled language should provide a substantial speed boost. For N = 100 atoms and 10⁴ time steps, the Fortran code took ∼ 1.0 s, while standard Python took about 100 s (on an Intel i7-4600U 2.1 GHz). The speed gain is quite considerable, making the effort to interface Python with compiled languages worthwhile, even necessary if we are interested in large N or long integration times. We note that, had we used explicit looping over index j in nbody(), the speedup factor would have been ∼ 1000.7

Though the example uses Fortran, we expect similar gains with C/C++ using Cython or Weave (Section A:1.3.2). Interestingly, Numba did not improve the performance in this case, presumably due to the already highly optimized Python code.
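As a sanity check (not part of the book's programs), one can verify that the vectorized force evaluation in nbody() agrees with an explicit double loop over pairs; the test lattice below is an arbitrary choice, and periodic boundary corrections are omitted for simplicity:

```python
import numpy as np

def accel_loop(r):                        # explicit pair loop (slow)
    n, a = r.shape[0], np.zeros_like(r)
    for i in range(n):
        for j in range(i+1, n):
            rij = r[i] - r[j]
            r2 = np.dot(rij, rij)
            r6 = r2*r2*r2
            fij = 12.*(1./r6 - 1.)/(r6*r2)*rij
            a[i] += fij; a[j] -= fij      # Newton's 3rd law
    return a

def accel_vec(r):                         # vectorized over j, as in nbody()
    n, a = r.shape[0], np.zeros_like(r)
    for i in range(n):
        rij = r[i] - r[i+1:]              # all pairs j > i at once
        r2 = np.sum(rij*rij, axis=1)
        r6 = r2*r2*r2
        f = (12.*(1./r6 - 1.)/(r6*r2))[:, None]*rij
        a[i] += np.sum(f, axis=0); a[i+1:] -= f
    return a

g = np.arange(27, dtype=float)
r = 1.5*np.stack([g % 3, g//3 % 3, g//9], axis=1)   # 3x3x3 test lattice
print(np.allclose(accel_loop(r), accel_vec(r)))
```

The two versions produce the same accelerations; the vectorized form merely moves the inner loop into NumPy, which is the speedup the explicit-j remark above refers to.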

7For faster speed still, one should consider the fast multipole method (FMM) [16]. By clustering the particles and expanding the potential in multipoles, the FMM can achieve N ln N, rather than N², operations in the calculation of forces. Like the FFT, the speedup can be substantial for large N.


Chapter 12

Classical and quantum scattering

Owing to the scarcity of analytical solutions for even the simplest scattering problems, computational modeling is nearly the de facto tool for actual scattering calculations. We discussed elastic potential scattering in Chapter A:12. Here, we focus on inelastic scattering (reactions), in both classical and quantum simulations.

Approximation methods for low- and high-energy regimes are considered, including determination of scattering lengths, and the Born and the semiclassical WKB approximations. We generalize potential scattering to the transition matrix for inelastic scattering, and present first order approximations to excitation and ionization cross sections. Finally, we discuss non-perturbative, N-body classical trajectory Monte Carlo simulations for atomic reactions. We compare numerical results with experimental data and discuss the validity of the models.

12.1 Orbiting

We start with classical orbiting at low energies before studying full quantum scattering just ahead. The fact that the deflection angle can exceed −2π (see Figure A:12.8) shows that full orbiting has occurred, i.e., the particle has orbited more than a full 360° rotation around the target.

Figure 12.1: Trajectories (top) and effective potentials (bottom) in scattering from the Yukawa potential (same parameters as in Figure A:12.8). [Curves are labeled by impact parameters b = 4, 3.5, 3.1 (glory), 2.9, 2.8, and 2.7, at E = 0.1; the rainbow, bound circular, and quasi-circular cases are marked in the bottom panel, Veff (a.u.) versus r (a.u.).]

To understand this behavior, we show in Figure 12.1 the trajectories and the effective potentials. The energy is the same as in Figure A:12.8, E = 0.1. The trajectories are simulated by integrating the classical equations of motion with the leapfrog method (Project P12.1). The effective potential from Eq. (A:4.8) is plotted for different impact parameters (angular momenta).

Initially, decreasing impact parameters lead to more bending. The values of the three largest impact parameters are the same for the trajectories and the effective potentials (Figure A:12.8, top and bottom, respectively), including the third one at b ≡ bg = 3.1, for the glory scattering. At bg, the deflection is such that the particle is exactly back-scattered to 180°, an ability of the screened Coulomb potential that is apparently absent without screening.

For b values below bg, the deflection goes over −π, and partial orbiting starts to occur. The next two impact parameters (2.95 and 2.9) lead to full orbiting near the rainbow angles, where the particle spends significant time around the target in quasi-circular orbits (Figure 12.1, top).

We can find clues to orbiting in the effective potentials, Veff. For large and decreasing b, the Veff curve is a monotonic function of r. Just past bg, at b = 2.9, the effective potential undergoes a transformation: it flattens out (circled in Figure 12.1) in the intermediate r region. This separates two regimes of qualitative behavior in Veff: a structureless potential above, and a potential well plus a barrier (hump) below.

On the flattened plateau, the effective radial force is close to zero (V′eff ≈ 0). The motion can be considered quasi-circular, or orbiting. True circular motion always exists at the bottom of the well when the above condition is satisfied. It is bound and stable, as indicated for the effective potential at b = 2.7 in Figure 12.1 (see Section A:4.2.5, Figure A:4.5). But true, stable circular motion cannot happen in scattering because the energy must be above the barrier, so the particle cannot reach inside the well. Instead, when the impact parameter is such that a flattened top is formed, b ∼ 2.9 in this case, the condition is created for an approximate, quasi-circular motion. Because the particle has relatively significant angular momentum compared to the small radial velocity (note the small scale in Figure 12.1), it can sweep a considerable angular range while moving radially toward and away from the turning point. Since the motion is ultimately unbound, the particle eventually leaves the interaction region, having gained a large deflection angle while orbiting. Thus, we expect orbiting to occur only at low energies.

We can also see that if the energy matches exactly the top of the barrier, such as those that exist for b = 2.7 and 2.8 in Figure 12.1 at r ∼ 2 and 2.5, respectively, the radial velocity is exactly zero there, rendering the incoming particle in a state of perpetual orbiting at the radius corresponding to the top of the barrier. Though this type of circular motion is unstable, a substantial deflection angle could be accumulated before the particle kicks back out if the condition is right (an extreme case of orbiting more than nine full revolutions is discussed in Project P12.1).

The range of impact parameters in which orbiting occurs is relatively narrow. We can see this from the shape of the effective potential at b = 2.8, which is qualitatively different already. For even lower b values, the potential well becomes deeper and closer to the origin. Screening becomes less effective, and the deflection angle approaches −π as in the unscreened Coulomb potential, albeit from below.

12.2 Green’s function method

Returning to quantum scattering, we discuss a formal method for the scattering amplitude f(θ) in Eq. (A:12.15), and obtain a first-order scattering cross section from the screened Coulomb potential.

12.2.1 The integral equation

We begin with the Schrödinger equation in three dimensions,

−(ħ²/2m)∇²ψ + Vψ = Eψ, (12.1)

where V is the scattering potential. We have dealt with differential equations like (12.1) directly, but sometimes it is useful to convert them into integral equations. This is the case here.

For integral solutions, it is convenient to rearrange Eq. (12.1) as

(∇² + k²)ψ = (2m/ħ²)Vψ. (12.2)

Let us introduce an operator G = (∇² + k²)⁻¹, so Eq. (12.2) can be written as G⁻¹ψ = 2mVψ/ħ². Acting with G on both sides, we can write down the formal solution to Eq. (12.2) as

ψ = e^{ikz} + (2m/ħ²)GVψ. (12.3)


Because the extra term e^{ikz} is a solution to G⁻¹e^{ikz} = 0, we see that the application of G⁻¹ to both sides of Eq. (12.3) reduces it to Eq. (12.2). We can think of the first term in Eq. (12.3) as the general solution to the homogeneous equation G⁻¹ψ = 0 (taking k along the z direction), and the second term as a particular solution to the inhomogeneous equation (12.2).

We can also express the second term in Eq. (12.3) with the help of the Green's function G(r, r′) defined as

G⁻¹G ≡ (∇² + k²)G(r, r′) = δ(r − r′), (12.4)

which yields the solution [2]

G(r, r′) = −(1/4π) e^{ik|r−r′|}/|r − r′|. (12.5)

Equation (12.5) is the Green's function for outgoing spherical waves (Exercise E12.2).1

For an arbitrary function g(r), the Green's function gives us a useful relationship,

Gg(r) = ∫ G(r, r′) g(r′) d³r′, (12.6)

because acting with G⁻¹G = 1 on Eq. (12.6) yields

G⁻¹Gg = ∫ G⁻¹G(r, r′) g(r′) d³r′ = ∫ δ(r − r′) g(r′) d³r′ = g, (12.7)

where Eq. (12.4) was used in the middle step.

Using the identity (12.6) and (12.5), we can rewrite Eq. (12.3) as

ψ(r) = e^{ikz} − (m/2πħ²) ∫ [e^{ik|r−r′|}/|r − r′|] V(r′) ψ(r′) d³r′. (12.8)

This is the complete scattering wave function, correct for outgoing wave boundary conditions (A:12.15). Note that the solution (12.8) is purely formal, since it is expressed as an integral equation where ψ appears on the RHS under the integral. To obtain ψ, we would have to solve the integral equation, which is equivalent to solving the Schrödinger equation (12.1).2

1An equivalent Green's function exists by replacing k → −k in Eq. (12.5), corresponding to incoming spherical waves.

For practical purposes, Eq. (12.8) is not particularly useful except for a few special potentials such as the Dirac delta potential. For formal manipulations, however, it is very valuable and nearly indispensable in the development of scattering theory. For example, we can easily extract the scattering amplitude from Eq. (12.8). In the limit r → ∞, we have

|r − r′| ≃ r − r̂·r′ + O(1/r),  and  e^{ik|r−r′|}/|r − r′| ≃ (e^{ikr}/r) e^{−ik r̂·r′}. (12.9)

Substituting the approximation (12.9) into (12.8) and comparing with the asymptotic form Eq. (A:12.15), we identify the scattering amplitude as

f(θ) = −(m/2πħ²) ∫ e^{−ikr cos θ} V(r) ψ(r) d³r. (12.10)

In Eq. (12.10), we have used r̂·r′ = r′ cos θ, where θ is the scattering angle (Figure A:12.10), and dropped the prime of the variable r′ in the integral. We need the knowledge of the full wave function ψ(r) in all space to calculate the scattering amplitude f(θ), from which we can obtain the cross sections.

12.2.2 The plane wave Born approximation (PWBA)

We can use the integral equation (12.8) to develop a perturbation series for approximate evaluation of cross sections. Assuming the potential is weak, we can neglect the integral in Eq. (12.8) to obtain ψ ∼ ψ₀ = e^{ikz}. In effect, we are assuming the incident wave is unaltered to zeroth order in the perturbation V. Substitution of this approximate wave function into Eq. (12.10) yields

f_B1(θ) = −(m/2πħ²) ∫ e^{−i(k_f − k_i)·r} V(r) d³r, (PWBA) (12.11)

where we have introduced k_i = kẑ and k_f = kr̂ as the initial and final wave vectors.

2Equation (12.8) is called the Lippmann-Schwinger equation for potential scattering. The solution is sometimes denoted by ψ(+) to signify outgoing waves. Another functionally equivalent solution is ψ(−) for incoming waves, obtained by replacing k → −k in the Green's function.


Equation (12.11) is the first order PWBA approximation to the scattering amplitude, or simply the Born approximation. It is very simple to use, requiring a single evaluation of the integral when the potential is known (the Fourier transform of V). Due to its simplicity, it is also known as Fermi's golden rule.

Figure 12.2: The cross section of the Yukawa potential in the PWBA approximation, σ/σ₀ versus θ (deg) for E = 1/4, 1, and 4. The screening constant is a = 1.

For instance, for the Yukawa potential (A:12.13), the PWBA scattering amplitude is (Exercise E12.3)

f_B1(θ) = (2mZ/ħ²) · 1/(α² + |k_f − k_i|²), (12.12)

where α = 1/a is the screening constant. The differential scattering cross section is

σ(θ) = (2mZ/ħ²)² [α² + (8mE/ħ²) sin²(θ/2)]⁻². (12.13)

We have used the relation |k_f − k_i| = 2k sin(θ/2). This is the cross section used in particle transport (Eq. (11.12), with Z → Z₁Z₂e²). For unscreened Coulomb potentials, α = 0, and Eq. (12.13) reduces to the exact Rutherford scattering cross section (A:12.9).3

3This is an accident, in much the same way that the classical and quantum mechanical results are identical for Rutherford scattering. This accidental agreement is generally attributed to the unusual symmetry of the Coulomb potential.
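Equation (12.12) can be spot-checked numerically: after the angular integration, the Born integral for the Yukawa potential V = −Z e^{−αr}/r reduces to (2mZ/ħ²)(1/q)∫₀^∞ sin(qr) e^{−αr} dr with q = |k_f − k_i|. A small sketch with a simple trapezoid rule (the cutoff and grid are arbitrary choices):

```python
import numpy as np

def radial(q, alpha, rmax=200., n=200001):
    # (1/q) * integral of sin(q r) e^{-alpha r} over [0, rmax]
    r, h = np.linspace(0., rmax, n, retstep=True)
    y = np.sin(q*r)*np.exp(-alpha*r)
    return h*(np.sum(y) - 0.5*(y[0] + y[-1]))/q   # trapezoid rule

alpha = 1.0
for q in (0.5, 1.0, 2.0):
    print(q, radial(q, alpha), 1./(alpha**2 + q*q))   # should agree
```

The numerical values reproduce 1/(α² + q²), confirming the closed form of the amplitude up to the prefactor 2mZ/ħ².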


The first Born cross section (12.13) is plotted in Figure 12.2 for three energies (in a.u.). The quantum cross section is finite at θ = 0, in sharp contrast to the classical cross section, which diverges. With increasing energies, the cross section is strongly peaked at smaller angles due to the interaction becoming relatively weaker. The PWBA approximation should be valid from intermediate to high energies.

The total cross section is

σ_t = ∫₀^π σ(θ) 2π sin θ dθ = π (4mZa²/ħ²)² [1 + 8mEa²/ħ²]⁻¹. (12.14)

For large screening length a, the potential becomes the unscreened Coulomb potential, and σ_t diverges as a → ∞, the same as the Rutherford cross section. At asymptotically high energies E ≫ 1, the total cross section scales as E⁻¹.

In the next order correction, we use the zeroth order wave function ψ₀ in Eq. (12.8) to obtain ψ₁, correct to first order in V:

ψ ∼ ψ₁ = e^{ikz} − (m/2πħ²) ∫ [e^{ik|r−r′|}/|r − r′|] V(r′) ψ₀(r′) d³r′
       = e^{ikz} − (m/2πħ²) ∫ [e^{ik|r−r′|}/|r − r′|] V(r′) e^{ikz′} d³r′. (12.15)

Substitution of Eq. (12.15) into (12.10) yields the scattering amplitude which is second order in V, and is known as the second Born approximation. We could go on to the next order in the series to obtain the third Born approximation, etc. However, it becomes impractical very quickly to evaluate the higher order amplitudes (Exercise E12.4). For potential scattering, it is easier to find alternative means such as the partial wave expansion discussed in Section A:12.4.

12.3 Scattering at low and high energies

The study of scattering in the low- and high-energy regimes provides an interesting and contrasting picture. At low energies, quantum effects are on



full display as classical descriptions fail completely. At the high-energy end, scattering affords us the opportunity to understand the connection between quantum descriptions and classical, semiclassical, and other approximation theories.

12.3.1 Scattering length

To begin, it is more convenient to express the phase shift (A:12.35) in a form explicitly containing the wave function in the full r range instead of only at the matching point. This can be done by partial wave analysis of the scattering amplitude (12.10). Following partial wave analysis (Eq. (12.85), Section 12.A), we obtain the phase shift as an integral

$$e^{i\delta_l}\sin\delta_l = -\frac{2mk}{\hbar^2}\int_0^\infty j_l(kr)\, V(r)\, R_l(k,r)\, r^2\, dr. \qquad (12.16)$$

Equation (12.16) is an alternative expression equivalent to Eq. (A:12.35). In principle, once the wave function R_l is known, we can calculate the phase shift from Eq. (12.16) by carrying out the integral. The result would be identical to that from Eq. (A:12.35). There is little to be gained in practice doing things this way, as the latter is simpler. But, like the integral Lippmann-Schwinger equation (12.8), Eq. (12.16) is important for obtaining several formal properties in scattering.

At low energies, because j_l(kr) ∼ (kr)^l for small k in Eq. (A:12.58), the integral (12.16) vanishes unless l = 0. Therefore, we conclude that as k → 0, sin δ_{l≠0}/k ∼ 0, and sin δ₀/k ∼ −a₀, where a₀ is some constant. This constant is defined as the scattering length [8, 25]. From Eq. (12.16), we have

$$a_0 = -\lim_{k\to 0}\frac{\sin\delta_0}{k} = \frac{2m}{\hbar^2}\lim_{k\to 0}\, e^{-i\delta_0}\int_0^\infty j_0(kr)\, V(r)\, R_0(k,r)\, r^2\, dr. \qquad (12.17)$$

Despite the phase factor exp(−iδ₀) in Eq. (12.17), the scattering length a₀ is real because there is an opposite phase factor in R₀(k, r) from the normalization (A:12.28a).⁴ For example, the phase shift for the hard sphere is δ₀ = −ka (Figure A:12.13), so the scattering length is a₀ = a, the radius of the sphere. For the Yukawa potential (Figure A:12.18), the value is a₀ ∼ −0.7, slightly less than the screening length (a = 1) in magnitude.

⁴Sometimes the scattering length is defined as a₀ = −tan δ₀/k, another common convention in place of Eq. (12.17). In that case, the wave function R_l is normalized differently than in our convention (A:12.28a). The two are related by $\bar{R}_l = R_l \exp(-i\delta_l)/\cos\delta_l$.


Figure 12.3: Top: the scattering length in the Yukawa potential with fixed nuclear charge (Z = 2) and varying screening length a. The vertical lines indicate the thresholds for one and two bound states. Bottom: the s-wave phase shift at specified a (a = 1.5, 0.45, 0.42, and 0.4).

For repulsive potentials, δ₀(k → 0) → 0⁻ (i.e., δ₀ approaching zero from below), so the scattering length is always positive. For attractive potentials, however, the sign depends on the number of bound states n_b, and whether δ₀ approaches n_bπ (Eq. (A:12.23)) from above or below. It can be positive or negative. Only if n_b = 0 is the sign of the scattering length certainly negative.


Figure 12.3 (top) shows the scattering length as a function of the screening parameter for the Yukawa potential. For a < 0.42, the potential is too narrow to support any bound states, so the s-wave (l = 0) phase shift converges to zero (Figure 12.3, bottom), in accord with Levinson's theorem (A:12.23). The scattering length is negative, but finite.

A singularity occurs around a ∼ 0.42, where the value plunges to a₀ → −∞ (at a = 0.42005 to be precise). This happens because the potential is at a critical point: it is on the verge of being able to support its first bound state. At this critical point, the phase shift gets stuck at π/2, half-way between zero and the value π for the first bound state, giving rise to an infinite scattering length, and an infinite cross section (12.19).⁵

This effect always happens when the potential is at the critical point between supporting n_b and n_b + 1 bound states. In such a case, δ₀(k → 0) → (n_b + 1/2)π, and the scattering length is singular, a₀ = ±∞. For the Yukawa potential, the next critical point is at a ∼ 1.65 (Figure 12.3, top). However, the value of a₀ approaches +∞. The change of sign occurs because δ₀ rapidly converges to π after the first critical point. Increasing a further, δ₀ can exceed π for some low but finite k values, eventually falling back to π as k → 0 from above (Figure 12.3, bottom, a = 1.5), since the potential can support only one bound state. With the cross-over, sin δ₀ changes sign, and so does the scattering length (12.17). The cycle repeats after each critical point.

Another effect can occur near the cross-over at π (Figure 12.3, bottom, a = 1.5). If the energy is low compared to the potential, only a couple of low partial waves contribute (l = 0 or 1), with the s-wave being predominant. However, the partial cross section σ₀ ∼ sin²δ₀ decreases to zero from both directions toward the cross-over. As a result, the total cross section will show a rapid drop. The reduction in the cross section is known as the Ramsauer-Townsend effect. Exploration of this effect is described in Project A:P12.8.

For short-ranged potentials, we can regard j₀ and R₀ as constant in the neighborhood of small r, and approximate the scattering length as

$$a_0 \simeq \frac{2m}{\hbar^2}\, C \int_0^\infty V(r)\, r^2\, dr, \qquad (12.18)$$

where $C = e^{-i\delta_0(0)} R_0(0, 0)$.

⁵This is purely a quantum mechanical effect, and has no classical equivalent. Sometimes it is referred to as zero-energy resonance.


In the low-energy regime, the scattering amplitude (A:12.31) becomes f = −exp(iδ₀)a₀, and the differential cross section is isotropic, σ(θ) = |f|² = a₀². The total cross section is

$$\sigma_t = 4\pi a_0^2. \qquad \text{(zero energy limit)} \qquad (12.19)$$

Low-energy collisions are uniquely characterized by a single parameter: the scattering length. It is an important parameter for systems such as cold atom collisions in Bose-Einstein condensation (Section 11.3), where the temperature is so low that the wave vector k is so small as to be practically zero.
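The scattering length can also be extracted directly from the zero-energy radial solution: integrating u″(r) = 2V(r)u(r) outward, u(r) ∝ (r − a₀) outside the potential, so a₀ = r − u/u′ at large r. The sketch below (assuming NumPy; RK4 integration is an arbitrary choice, not the book's method) computes a₀ in the tan δ₀ convention of footnote 4, whose magnitude matches Eq. (12.17) although its sign can differ when bound states are present. A square barrier, whose zero-energy scattering length is known exactly, a₀ = R − tanh(κR)/κ with κ = √(2V₀), serves as a check:

```python
import numpy as np

def scattering_length(V, r0=1e-6, rmax=25.0, h=1e-3):
    """Zero-energy s-wave scattering length (atomic units, m = 1).
    Integrates u''(r) = 2 V(r) u(r) outward by RK4, starting from u ~ r;
    outside the potential u(r) ~ C (r - a0), so a0 = r - u/u'."""
    def f(r, u, up):
        return up, 2.0*V(r)*u
    r, u, up = r0, r0, 1.0
    for _ in range(int((rmax - r0)/h)):
        k1u, k1p = f(r, u, up)
        k2u, k2p = f(r + h/2, u + h/2*k1u, up + h/2*k1p)
        k3u, k3p = f(r + h/2, u + h/2*k2u, up + h/2*k2p)
        k4u, k4p = f(r + h, u + h*k3u, up + h*k3p)
        u += h/6*(k1u + 2*k2u + 2*k3u + k4u)
        up += h/6*(k1p + 2*k2p + 2*k3p + k4p)
        r += h
    return r - u/up

# check: square barrier of height V0 and radius Rb, a0 = Rb - tanh(kap*Rb)/kap
V0, Rb = 100.0, 1.0
a_barrier = scattering_length(lambda r: V0 if r < Rb else 0.0)

# Yukawa potential with Z = 2, a = 1 (supports one bound state)
a_yukawa = scattering_length(lambda r: -2.0*np.exp(-r)/r)
print(a_barrier, a_yukawa)
```

The barrier result should agree with the analytic value to a few parts in 10⁴; the Yukawa magnitude should be of order the |a₀| ∼ 0.7 quoted above.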

12.3.2 Born and WKB phase shifts

At higher energies, we expect perturbative methods to be valid. We consider two cases, the Born approximation and the WKB method.

Born approximation

In the first Born approximation, we assume the incident wave function is unperturbed to zeroth order in the interaction, the same assumption made in PWBA (12.11). We can substitute R_l ∼ j_l(kr) in Eq. (12.16), and find

$$e^{i\delta_l}\sin\delta_l = -\frac{2mk}{\hbar^2} I_l, \qquad I_l = \int_0^\infty j_l^2(kr)\, V(r)\, r^2\, dr. \qquad (12.20)$$

Since the Born approximation is valid when δ_l is small, we can drop the phase factor and simplify Eq. (12.20) to read

$$\delta_l \sim -\frac{2mk}{\hbar^2} I_l. \qquad \text{(Born phase shift)} \qquad (12.21)$$

Obtaining the Born phase shift is a simple matter of evaluating the integral.

To find the scattering amplitude in the Born approximation, we substitute Eq. (12.20) into (A:12.31), obtaining

$$f_{B1}(\theta) = -\frac{2m}{\hbar^2}\sum_{l=0}^{\infty}(2l+1)\, I_l\, P_l(\cos\theta). \qquad (12.22)$$

Equation (12.22) may look different than the PWBA amplitude (12.11), but is in fact equivalent to it. It is just an expansion in partial waves.


Many partial waves contribute to scattering at high energies. Equations (12.21) and (12.22) provide an efficient and simple means for calculating phase shifts and the scattering amplitude. Of course, the results should be small for the approximation to be valid.
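Equation (12.21) is easy to evaluate by quadrature. The sketch below (assuming SciPy is available) does so for the Yukawa potential; for l = 0 the integral can also be done by hand, giving δ₀ = (Z/2k) ln(1 + 4k²a²), which equals ½ ln 17 ≈ 1.4166 at E = 2, Z = 2, a = 1 — the l = 0 Born entry of Table 12.1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def born_phase_shift(l, E, Z=2.0, a=1.0):
    """First Born phase shift, Eq. (12.21), in atomic units (m = hbar = 1),
    for the Yukawa potential V(r) = -Z exp(-r/a)/r."""
    k = np.sqrt(2.0*E)
    f = lambda r: spherical_jn(l, k*r)**2 * (-Z*np.exp(-r/a)/r) * r**2
    I, _ = quad(f, 0.0, np.inf, limit=500)
    return -2.0*k*I

d0 = born_phase_shift(0, 2.0)   # analytic: 0.5*ln(17) ~ 1.4166
d1 = born_phase_shift(1, 2.0)   # Table 12.1 lists 5.9368e-1
print(d0, d1)
```

The l = 1 value can be checked against the Born column of Table 12.1 as well.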

WKB phase shifts

An important bridge between the quantum and classical worlds is furnished by the WKB method [17]. On the one hand, the WKB method still describes particles as waves. On the other hand, it assumes that the de Broglie wavelength is small compared to the range of the potential, such that the idea of trajectories is also applicable.

The WKB method is most readily applicable to the radial equation (A:12.36), an effective 1D Schrödinger equation with the effective potential, with one important modification: the angular momentum squared, L², in the centrifugal potential should be replaced by

$$L^2 = l_c^2\hbar^2, \qquad l_c = l + \frac{1}{2}, \qquad (12.23)$$

such that

$$V_{\rm eff}(r) = V(r) + \frac{l_c^2\hbar^2}{2m_e r^2}. \qquad \text{(WKB semiclassical potential)} \qquad (12.24)$$

The modification amounts to an averaged L² in the classical interpretation. It ensures that the WKB wave function has the correct limit R_l ∼ r^l for small r [25]. The replacement affects mostly low angular momentum states, for the difference l_c² − l(l + 1) = 1/4 becomes negligible for larger l.

Let w_l ≡ u_l^{WKB} denote the WKB solution to Eq. (A:12.36). It is given by

$$w_l(r) = \frac{A}{\sqrt{p}}\sin\left(\frac{1}{\hbar}\int_{r_{\min}}^{r} p\, dr + \phi\right), \qquad p = \sqrt{2m_e(E - V_{\rm eff})}, \qquad (12.25)$$

where A is a normalization constant, r_min the turning point, and φ a constant phase. For scattering, the motion is restricted to the classically allowed region, where the classical momentum p is real. It can be shown that w_l satisfies Eq. (A:12.36) to first order in ℏ.

The phase φ is usually found by the connection formula, a procedure for matching the solutions at the turning point between classically allowed


and forbidden regions. For the scattering states we are interested in, we can determine φ by comparing the asymptotic w_l with the free-particle solutions. The value is found in the Appendix (Section 12.C), φ = π/4.

Substituting φ into Eq. (12.25), the wave function in the WKB approximation is

$$w_l(r) = \frac{A}{\sqrt{p}}\sin\left(\Phi(r) + \frac{\pi}{4}\right), \qquad \Phi(r) = k\int_{r_{\min}}^{r}\left[1 - \frac{V(r)}{E} - \frac{(l+\frac{1}{2})^2}{k^2r^2}\right]^{\frac{1}{2}} dr. \qquad (12.26)$$

When V is nonzero, the WKB wave function must approach sin(kr − lπ/2 + δ_l^{WKB}) at large distances. Therefore, we obtain the WKB phase shift as

$$\delta_l^{\rm WKB} = \lim_{r\to\infty}\left[\Phi(r) - kr\right] + \left(l + \frac{1}{2}\right)\frac{\pi}{2}. \qquad (12.27)$$

Because δ_l^{WKB} = 0 for V = 0 (see Eq. (12.94)), the WKB result shows more clearly that the phase shift is positive if V < 0 (larger Φ, (12.26)) and negative if V > 0.

The WKB phase shift is related to the classical deflection function in the limit of large l [4]. Assuming a continuous variable l, we can differentiate Eq. (12.27) with respect to l and obtain⁶

$$\frac{\partial \delta_l^{\rm WKB}}{\partial l} = \frac{1}{2}\pi - \int_{r_{\min}}^{\infty}\frac{(l_c/k)\, dr}{r^2\sqrt{1 - \frac{V(r)}{E} - \frac{l_c^2}{k^2r^2}}} = \frac{1}{2}\Theta. \qquad (12.28)$$

With the classical equivalents l_c = L/ℏ = mvb/ℏ and k = mv/ℏ, we have l_c/k = b, the impact parameter. Substituting this into Eq. (12.28), the integral in the middle term becomes identical to the integral in Eq. (A:12.3). Through the relationship (12.28), we see another manifestation of the semiclassical nature of the WKB approximation.

Numerically, it is more efficient to calculate δ_l^{WKB} by subtracting the phase of a free particle, Φ₀(r),

$$\delta_l^{\rm WKB} = \lim_{r\to\infty}\left[\Phi(r) - \Phi_0(r)\right]. \qquad (12.29)$$

⁶The lower limit r_min is an implicit function of l also, and should be taken into account when differentiating. It does not contribute in this case because the integrand in Eq. (12.26) vanishes at r_min.


The expression for Φ₀(r) is given by Eq. (12.93), and its asymptotic value by Eq. (12.94). For large l values, the convergence of Eq. (12.27) is poor because the upper r limit can be very large. Equation (12.29) is considerably better. Of course, we must use a sufficiently large r to get accurate results.
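The procedure behind Eq. (12.29) can be sketched as follows (assuming SciPy; the turning point is located by root finding, and the free-particle phase is taken as Φ₀(r) = √(k²r² − l_c²) − l_c cos⁻¹(l_c/kr), which has the asymptotic limit kr − l_cπ/2):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def wkb_phase_shift(l, E, Z=2.0, a=1.0, R=60.0):
    """WKB phase shift from Eq. (12.29) for the Yukawa potential
    V(r) = -Z exp(-r/a)/r, in atomic units, with lc = l + 1/2."""
    k, lc = np.sqrt(2.0*E), l + 0.5
    g = lambda r: 1.0 + Z*np.exp(-r/a)/(r*E) - lc**2/(k*r)**2
    rmin = brentq(g, 1e-8, lc/k + 5.0)          # classical turning point
    # Phi(R) of Eq. (12.26), integrated to a large but finite R
    I, _ = quad(lambda r: np.sqrt(max(g(r), 0.0)), rmin, R, limit=500)
    # analytic free-particle phase Phi0(R)
    Phi0 = np.sqrt((k*R)**2 - lc**2) - lc*np.arccos(lc/(k*R))
    return k*I - Phi0

w2, w10 = wkb_phase_shift(0, 2.0), wkb_phase_shift(0, 10.0)
print(w2, w10)   # compare with the WKB column of Table 12.1
```

The l = 0 values should reproduce the WKB entries of Table 12.1 (1.6531 at E = 2 and 1.0621 at E = 10) to a few parts in 10³.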

Figure 12.4: Left: phase shifts at two energies (E = 2 and 10) for the Yukawa potential (Z = 2, a = 1) as a function of partial waves by several methods, including the Born and WKB approximations. Right: the relative error of the Born and WKB approximations.

Figure 12.4 shows the phase shifts for the Yukawa potential at the intermediate and high energies E = 2 and 10 a.u., respectively. The exact results are obtained from numerical integration of the radial Schrödinger equation (A:12.36) by Program A:12.2. We also show the results from the Born and WKB approximations. The Born results are calculated by numerical integration of Eq. (12.21), and the WKB results from (12.26), which is then substituted into Eq. (12.29). In both cases, we have used the Gaussian integration method. Other details are given in Project P12.4.

The phase shift is the largest at l = 0 and decreases steadily. After a few steps in l, it can be adequately described by a global scaling, δ_l ∝ exp(−l/k), indicated by the trend lines in Figure 12.4. On the semilog scale, the slope is shallower for higher energies. Therefore, the number of important partial waves l_max should increase with increasing energies. We expect l_max to be on the order of ka.

Over the order-of-magnitude scale in Figure 12.4, it is difficult to see the differences between the exact and the approximate results. To compare


Table 12.1: Comparison of phase shifts for scattering from the Yukawa potential by the Born and WKB approximations with the exact (numerical) results. The potential parameters are the same as in Figure 12.4.

                    E = 2                                E = 10
  l    Born        WKB         Exact        Born        WKB         Exact
  0    1.4166e+0   1.6531e+0   1.6177e+0    9.8262e-1   1.0621e+0   1.0225e+0
  1    5.9368e-1   7.2749e-1   7.0277e-1    5.5998e-1   5.9754e-1   5.8862e-1
  2    2.9353e-1   3.3308e-1   3.2677e-1    3.6965e-1   3.8891e-1   3.8546e-1
  3    1.5459e-1   1.6609e-1   1.6491e-1    2.5817e-1   2.6867e-1   2.6703e-1
  4    8.4200e-2   8.7429e-2   8.7546e-2    1.8586e-1   1.9177e-1   1.9093e-1
  5    4.6831e-2   4.7562e-2   4.7951e-2    1.3637e-1   1.3977e-1   1.3933e-1
  6    2.6424e-2   2.6429e-2   2.6807e-2    1.0138e-1   1.0335e-1   1.0313e-1
  7    1.5065e-2   1.4900e-2   1.5198e-2    7.6096e-2   7.7249e-2   7.7142e-2
  8    8.6583e-3   8.4878e-3   8.7051e-3    5.7539e-2   5.8207e-2   5.8169e-2
  9    5.0071e-3   4.8723e-3   5.0238e-3    4.3761e-2   4.4143e-2   4.4143e-2
 10    2.9103e-3   2.8134e-3   2.9162e-3    3.3440e-2   3.3652e-2   3.3673e-2
 15    2.0318e-4   1.9102e-4   2.0322e-4    9.1412e-3   9.1313e-3   9.1624e-3
 20    1.4932e-5   1.3681e-5   1.4934e-5    2.6238e-3   2.6104e-3   2.6261e-3

them more quantitatively, we list the data in Table 12.1. At the same time, we also show the relative error in Figure 12.4 (right). The Born results have relatively large errors at the intermediate energy for smaller partial waves. They improve quickly for larger l values and at the higher energies, as expected. We can view either of these conditions, a larger impact parameter or energy, as effectively reducing the perturbation, for which the validity of the Born approximation is enhanced.

The WKB approximation is acceptable for a broader range of energies. Its accuracy is satisfactory even at the intermediate energy, and also improves as the energy becomes higher. Unlike the Born approximation, the WKB results do not become more accurate for higher l values. Except at some value of l where it crosses the exact results, the relative error is relatively flat. We understand this behavior as due to the non-perturbative nature of the WKB approximation. Its validity does not depend on the strength of the perturbation. Rather, it depends on the wavelength being small compared to the range of the potential.


Overall, for higher energy or higher l values, the Born approximation is the method of choice for calculating the phase shift. For intermediate energies and very high l values, the WKB method is the most robust.

12.4 Inelastic scattering and atomic reactions

In potential scattering discussed so far, we assume the scattering center, the target, is structureless and does not actively participate in the dynamic interaction other than being the center of force. When the target does have structure, such as when particles collide with atoms, the target can be an active participant in the scattering process, absorbing energy through atomic reactions including excitation, ionization, or even electron transfer (capture) [4]. The study of atomic reactions is vital to the fundamental understanding of these processes as well as to many practical applications such as particle transport or the scattering and absorption of radiation by the atmosphere.

12.4.1 The T-matrix and inelastic cross sections

We can represent an atomic reaction by

$$P + A \longrightarrow P + A^*, \qquad (12.30)$$

where P and A stand for the projectile and the target atom, respectively, and A* denotes the state of the atom, possibly excited or ionized, after the collision is over. We assume the systems are prepared in a well-defined initial state in the entrance channel. The systems can undergo transitions to any allowed final state in the exit channel as

$$e^{i\vec{k}_i\cdot\vec{r}}\,\phi_i(\vec{r}\,') \longrightarrow e^{i\vec{k}_f\cdot\vec{r}}\,\phi_f(\vec{r}\,'), \qquad (12.31)$$

where $\exp(i\vec{k}_{i,f}\cdot\vec{r})$ describes the plane wave states of the incoming and outgoing projectile. The wave function $\phi_{i,f}(\vec{r}\,')$ describes the initial and final atomic states.

Slightly generalizing from Eq. (12.10), we can find the scattering amplitude as

$$f(\theta) = -\frac{m}{2\pi\hbar^2}\int \left[e^{i\vec{k}_f\cdot\vec{r}}\,\phi_f(\vec{r}\,')\right]^* V(\vec{r},\vec{r}\,')\,\psi_i(\vec{r},\vec{r}\,')\, d^3r\, d^3r'. \qquad (12.32)$$


Like in Eq. (12.11), we have replaced exp(−ikr cos θ) by $\exp(-i\vec{k}_f\cdot\vec{r})$. The wave function $\psi_i(\vec{r},\vec{r}\,')$ is the exact wave function of the collision system corresponding to the initial state (12.31). The potential $V(\vec{r},\vec{r}\,')$ now includes all interactions between the projectile and the target atom.

The final k_f value is different than the initial k_i value in inelastic scattering, but they are related by energy conservation

$$\frac{\hbar^2k_i^2}{2m} + E_i = \frac{\hbar^2k_f^2}{2m} + E_f, \qquad (12.33)$$

where E_i and E_f are the respective energies of the initial and final atomic states (12.31). To find the scattering cross section, we also need to take into account the incoming and outgoing radial fluxes, I and I_r in Eqs. (A:12.16) and (A:12.17) respectively, being different in inelastic scattering. This leads to the modified fluxes

$$I = \frac{\hbar k_i}{m}, \qquad I_r = \frac{\hbar k_f}{m}\frac{|f(\theta)|^2}{r^2} = \frac{k_f}{k_i}\, I\, \frac{|f(\theta)|^2}{r^2}. \qquad (12.34)$$

We can obtain the inelastic scattering cross section from Eq. (A:12.19) as

$$\sigma_{fi}(\theta) = \frac{k_f}{k_i}|f|^2 = \frac{m^2 k_f}{4\pi^2\hbar^4 k_i}|T_{fi}|^2, \qquad (12.35)$$

where the transition matrix element T_{fi}, known as the T-matrix, is given by

$$T_{fi} = \int e^{-i\vec{k}_f\cdot\vec{r}}\,\phi_f^*(\vec{r}\,')\, V(\vec{r},\vec{r}\,')\,\psi_i(\vec{r},\vec{r}\,')\, d^3r\, d^3r'. \qquad (12.36)$$

Determination of the T-matrix becomes the central task in scattering theories [21].

In an inelastic scattering experiment, the incoming projectile with momentum $\hbar\vec{k}_i$ is scattered to the final momentum $\hbar\vec{k}_f$, with a change in both direction and magnitude. It is useful for us to introduce the momentum transfer defined as (up to a factor ℏ)

$$\vec{q} = \vec{k}_f - \vec{k}_i. \qquad (12.37)$$

In atomic reactions, both energy and momentum transfers are important factors in determining the cross section. We can view the cross section (12.35) as being dependent on θ or on q via Eq. (12.37).


The total cross section may be obtained by integrating over all scattering angles,

$$\sigma_{fi} = \int_0^\pi \sigma_{fi}(\theta)\, 2\pi\sin\theta\, d\theta. \qquad (12.38)$$

The integral over θ can be more conveniently expressed as an integral over the momentum transfer q. From Eq. (12.37), we have

$$q^2 = k_f^2 - 2k_ik_f\cos\theta + k_i^2, \qquad (12.39)$$

such that sin θ dθ = q dq/k_ik_f. Substituting this relationship and Eq. (12.35) into (12.38), we find

$$\sigma_{fi} = \frac{m^2}{2\pi\hbar^4k_i^2}\int_{q_{\min}}^{q_{\max}} |T_{fi}(q)|^2\, q\, dq, \qquad (12.40)$$

where the minimum and maximum momentum transfers are given by

$$q_{\min} = |k_i - k_f|, \qquad q_{\max} = k_i + k_f. \qquad (12.41)$$

Typically, T_{fi}(q) decreases rapidly for increasing q, so most of the contribution to the total cross section comes from small momentum transfers q ∼ q_min.

To determine the cross section, we need to calculate the T-matrix, which depends on the full wave function $\psi_i(\vec{r},\vec{r}\,')$. The latter involves at least a three-body system, whose exact solutions prove to be elusive and are not known at present.

The T-matrix may be obtained numerically via the time-dependent approaches discussed in Chapter 8, such as the coupled channel method. Here we are interested in several alternative methods, including the perturbative Born series similar to Section 12.2.2, as well as the classical trajectory Monte Carlo method (Section 12.5).

12.4.2 Born cross section for excitation

In the Born approximation, we substitute $\psi_i(\vec{r},\vec{r}\,')$ by the unperturbed initial wave function (12.31) in (12.36) to obtain

$$T^{B1}_{fi} = \int e^{-i\vec{k}_f\cdot\vec{r}}\,\phi_f^*(\vec{r}\,')\, V(\vec{r},\vec{r}\,')\, e^{i\vec{k}_i\cdot\vec{r}}\,\phi_i(\vec{r}\,')\, d^3r\, d^3r' = \int e^{-i\vec{q}\cdot\vec{r}}\,\phi_f^*(\vec{r}\,')\, V(\vec{r},\vec{r}\,')\,\phi_i(\vec{r}\,')\, d^3r\, d^3r'. \qquad (12.42)$$


To be concrete, let us consider the excitation of the hydrogen atom by charged particle impact. Let Z_P be the charge of the projectile. The interaction potential between the projectile and the hydrogen atom can be written as

$$V(\vec{r},\vec{r}\,') = Z_P\left(\frac{1}{r} - \frac{1}{|\vec{r}-\vec{r}\,'|}\right). \qquad (12.43)$$

The first and second terms in Eq. (12.43) are the projectile-nucleus and projectile-electron interactions, respectively.⁷

Using Eq. (12.43), we can carry out the integration over r in Eq. (12.42) to reduce it to (see Section 12.D)

$$T^{B1}_{fi}(q) = \frac{4\pi Z_P}{q^2}\left[\delta_{fi} - F_{fi}(q)\right], \qquad F_{fi}(q) = \int \phi_f^*(\vec{r}\,')\, e^{-i\vec{q}\cdot\vec{r}\,'}\,\phi_i(\vec{r}\,')\, d^3r'. \qquad (12.44)$$

The quantity F_{fi}(q) is called the atomic form factor [25].

As an example, let us consider excitation from the ground state to the 2s and 2p_m states (m = 0, ±1). The atomic form factors F_{2s,1s} and F_{2p_m,1s} are given by Eqs. (12.109a) and (12.109b), respectively. Substituting them into Eq. (12.44), we find the T-matrix for excitation to be

$$T^{B1}_{2s,1s}(q) = \frac{16\sqrt{2}\,\pi Z_P a_0^2}{(\frac{9}{4} + q^2a_0^2)^3}, \qquad (12.45\text{a})$$

$$T^{B1}_{2p_m,1s}(q) = i\,\frac{16\sqrt{6}\,\pi^{3/2} Z_P a_0}{q\,(\frac{9}{4} + q^2a_0^2)^3}\, Y^*_{1,m}(\theta_q,\varphi_q). \qquad (12.45\text{b})$$

The total 1s → 2p_m excitation cross section contains contributions from all possible m substates. Summing over m with the help of Eq. (12.78), we obtain

$$|T^{B1}_{2p,1s}(q)|^2 = \sum_{m=-1}^{1}|T^{B1}_{2p_m,1s}(q)|^2 = \frac{3\cdot 2^9\,\pi^3 Z_P^2 a_0^2}{q^2(\frac{9}{4}+q^2a_0^2)^6}\sum_{m=-1}^{1}Y^*_{1,m}(\theta_q,\varphi_q)\,Y_{1,m}(\theta_q,\varphi_q) = \frac{9\cdot 2^7\,\pi^2 Z_P^2 a_0^2}{q^2(\frac{9}{4}+q^2a_0^2)^6}, \qquad (12.46)$$

where the sum over the spherical harmonics equals 3/4π.

⁷We use Z_P for simplicity. In SI units, it should be replaced by Z_P e²/4πε₀. No change is necessary in atomic units.


Substituting Eqs. (12.45a) and (12.46) into (12.40), we obtain the total excitation cross sections as

$$\sigma^{B1}_{2s,1s} = \frac{256\pi m^2 Z_P^2 a_0^4}{\hbar^4 k_i^2}\int_{q_{\min}}^{q_{\max}} \frac{q}{(\frac{9}{4}+q^2a_0^2)^6}\, dq, \qquad (1s\to 2s) \qquad (12.47\text{a})$$

$$\sigma^{B1}_{2p,1s} = \frac{576\pi m^2 Z_P^2 a_0^2}{\hbar^4 k_i^2}\int_{q_{\min}}^{q_{\max}} \frac{1}{(\frac{9}{4}+q^2a_0^2)^6}\, \frac{dq}{q}. \qquad (1s\to 2p) \qquad (12.47\text{b})$$

The first thing we note from these cross sections is that they depend on the square of the projectile charge, Z_P². To the extent the Born approximation is valid, it means that the cross sections are the same whether the projectile is positively or negatively charged. For instance, the cross sections by protons, electrons, or antiprotons are equal in the Born approximation. This has been confirmed by experiments in high energy (but nonrelativistic) collisions.

To evaluate Eqs. (12.47a) and (12.47b), we need to know the integration limits q_min and q_max, which can be determined from Eqs. (12.33) and (12.41). Introducing ε = E_f − E_i as the energy transfer, Eq. (12.33) can be written as

$$k_i^2 - k_f^2 = \frac{2m\varepsilon}{\hbar^2}, \qquad \text{or} \qquad k_i - k_f = \frac{2m\varepsilon}{\hbar^2(k_i + k_f)}. \qquad (12.48)$$

In fast collisions, the energy transfer ε is much less than the projectile energy, so mε/ℏ²k_i ≪ k_i, and ε is negligibly small.⁸ For example, the value of ε is on the order of a few eV, while the collision energy is ∼ 100 keV for protons. In such cases, we can approximate k_f ∼ k_i in Eq. (12.48) such that the following holds,

$$k_i - k_f \simeq \frac{2m\varepsilon}{\hbar^2(k_i + k_i)} = \frac{m\varepsilon}{\hbar^2 k_i}. \qquad (12.49)$$

Therefore, the limits of momentum transfers are

$$q_{\min} = \frac{m\varepsilon}{\hbar^2 k_i} = \frac{\varepsilon}{\hbar v}, \qquad q_{\max} = 2k_i, \qquad (12.50)$$

where we have used v = ℏk_i/m = √(2E/m), the collision speed.

The integrands in Eqs. (12.47a) and (12.47b) fall off rapidly with increasing q, indicating that most contributions come from soft collisions with

8This is true for impact by heavy particles such as protons, even in slow collisions.


small momentum transfer near q_min. For practical purposes, we can regard q_max → ∞. Accordingly, we set the upper limits in Eqs. (12.47a) and (12.47b) to infinity.
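The quality of the approximation (12.49) is easy to check numerically. For a 100 keV proton (v ≈ 2 a.u.) exciting the n = 1 → 2 transition (ε = 3/8 a.u.), the sketch below compares the exact q_min = k_i − k_f with ε/v in atomic units:

```python
import numpy as np

# Kinematic check of Eq. (12.50) for proton impact (atomic units).
M, v, eps = 1836.15, 2.0, 0.375          # proton mass, speed, energy transfer
ki = M*v                                  # initial wave number, k_i = M v
kf = np.sqrt(ki**2 - 2.0*M*eps)           # energy conservation, Eq. (12.33)
q_exact = ki - kf                         # exact q_min, Eq. (12.41)
q_approx = eps/v                          # approximation, Eq. (12.50)
print(q_exact, q_approx)
```

The two agree to a few parts in 10⁵, confirming that the approximation is excellent for heavy projectiles (footnote 8).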

The 1s → 2s integral (12.47a) can be performed analytically with a variable substitution q² = x. It turns out Eq. (12.47b) can also be integrated analytically, with a change of variable and a decomposition (Project P12.6). The results are

$$\sigma^{B1}_{2s,1s} = \frac{2^7\pi Z_P^2 a_0^2}{5\hbar^2v^2}\left[\frac{9}{4} + q_{\min}^2 a_0^2\right]^{-5}, \qquad (12.51)$$

$$\sigma^{B1}_{2p,1s} = \frac{2^{17}\pi Z_P^2 a_0^2}{3^{10}\hbar^2v^2}\left[\ln\frac{1+\beta}{\beta} - \sum_{j=1}^{5}\frac{1}{j(1+\beta)^j}\right], \qquad \beta = \frac{4}{9}a_0^2 q_{\min}^2. \qquad (12.52)$$
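Equations (12.51) and (12.52) are simple to evaluate. The sketch below (atomic units with ℏ = m = a₀ = 1 and Z_P = 1 assumed) computes both cross sections at the normalization point v = 2√2 (200 keV protons) used in Figure 12.5:

```python
import numpy as np

def born_excitation(v, ZP=1.0, eps=0.375):
    """Total 1s->2s and 1s->2p Born excitation cross sections of hydrogen,
    Eqs. (12.51)-(12.52), in atomic units (hbar = m = a0 = 1)."""
    qmin = eps/v                                    # Eq. (12.50)
    s2s = (2**7*np.pi*ZP**2/(5.0*v**2)) * (2.25 + qmin**2)**(-5)
    beta = (4.0/9.0)*qmin**2
    j = np.arange(1, 6)
    s2p = (2**17*np.pi*ZP**2/(3**10*v**2)) * (
        np.log((1.0 + beta)/beta) - np.sum(1.0/(j*(1.0 + beta)**j)))
    return s2s, s2p

s2s, s2p = born_excitation(2.0*np.sqrt(2.0))   # v = 2*sqrt(2) a.u.
print(s2s, s2p, s2s + s2p)
```

The dipole-allowed 2p channel comes out more than an order of magnitude larger than the 2s channel, in line with the discussion below.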

At high collision energies E → ∞, the minimum momentum transfer becomes very small, q_min ∝ 1/√E → 0. Expanding Eqs. (12.51) and (12.52) to the lowest terms in q_min, we obtain the scaling laws for the excitation cross sections as

$$\sigma^{B1}_{2s,1s} \simeq \frac{A_1}{E}\left(1 + \frac{B_1}{E}\right), \qquad (12.53)$$

$$\sigma^{B1}_{2p,1s} \simeq \frac{A_2}{E}\left(\ln E + C + \frac{B_2}{E}\right). \qquad (12.54)$$

The excitation cross sections at high energies are predominantly to the dipole-allowed 2p channel, and excitation to the dipole-forbidden 2s channel is negligible. The leading term scales as σ ∼ ln E/E. This is the basis for the asymptotic behavior of the stopping power in particle transport (Eqs. (11.8) and (11.9), Section 11.2).

We note, however, that the logarithmic term ln E in Eq. (12.54) increases slowly, and becomes dominant over the constant term C only at very high energies. In some cases this happens at relativistic energies for heavier targets. For practical purposes, the cross sections will be an admixture of the 1/E and ln E/E terms.

Figure 12.5 shows the Born cross sections as a function of collision speed v. Both the cross sections and the speed are shown in atomic units (Table A:8.1, p. A:402). In absolute units, they are a₀² = 0.28 × 10⁻¹⁶ cm² and v₀ = 2.2 × 10⁶ m/s, respectively. For protons, a speed of 1 a.u. corresponds to an energy of 25 keV/amu. The energy transfer for n = 1 → 2 transitions is ε = E_f − E_i = −1/8 + 1/2 = 3/8 a.u. (10.2 eV).


Figure 12.5: The 1s → 2s and 2p excitation cross sections of hydrogen by proton impact as a function of collision speed. The experimental data from Ref. [29] are normalized to theory at v = 2√2 (proton energy of 200 keV).

The cross sections vanish in the low and high speed limits. At low speeds, the minimum momentum transfer is too large for the amount of energy transfer required. This mismatch reduces the cross sections. At high speeds, the interaction time is too short to be effective, and the cross section decreases like 1/E.

The dipole-allowed 2p transition dominates over the dipole-forbidden 2s transition. The reason for the reduced cross section of the latter is analogous to the dipole selection rules for time-dependent transitions (see Figure 8.8 and discussions in Section 8.2). Expanding the exponential function

$$e^{-i\vec{q}\cdot\vec{r}\,'} = 1 - i\vec{q}\cdot\vec{r}\,' - \frac{1}{2}(\vec{q}\cdot\vec{r}\,')^2 + \cdots, \qquad (12.55)$$

we can show that for the inelastic form factor F_{fi}(q) in (12.44), the leading nonzero term for 2p transitions is the dipole term $\vec{q}\cdot\vec{r}\,'$, and for the 2s transitions it is the quadrupole term, $(\vec{q}\cdot\vec{r}\,')^2$. This suppresses the 2s transitions because the T-matrix is peaked at small momentum transfers.

In Figure 12.5, we also compare theory with experimental data, which is for the total cross sections (2s + 2p). We only show the experimental data in the speed range up to ∼ 3 a.u., since at higher speeds the data is well described by the Born approximation. The data is normalized to the


theoretical value at v = 2√2, where Born results should be valid. The Born approximation is in good agreement with the experiment down to about v = 1.5, where the experimental data peaks. For lower speeds, it overestimates the cross sections. It also gives the wrong peak position. The magnitude of the experimental cross section near the peak is on the order of πa₀², a reasonable value because that is the cross-sectional area of the Bohr orbit.

12.4.3 Ionization cross sections

In ionization, the electron is ejected from the target atom as a result of interactions with the projectile. Like excitation, we need to calculate the inelastic form factor F_{fi}(q) to obtain ionization cross sections. Unlike excitation, however, the final state is a continuum state rather than a bound state. This makes it a little more involved to compute the ionization form factor F_{fi}(q).

We give the differential ionization cross section without proof as (in a.u.) [4]

$$\frac{d\sigma^{B1}_k}{dk} = \frac{2^{10}\pi Z_P^2 Z_T^6\, k}{v^2\left(1 - e^{-2\pi Z_T/k}\right)} \int_{q_{\min}}^{q_{\max}} \frac{\left[q^2 + \frac{1}{3}(Z_T^2 + k^2)\right]\exp\left[-\frac{2Z_T}{k}\tan^{-1}\left(\frac{2Z_Tk}{q^2 - k^2 + Z_T^2}\right)\right]}{\left[(q^2 - k^2 + Z_T^2)^2 + 4k^2Z_T^2\right]^3}\, \frac{dq}{q}. \qquad (12.56)$$

In Eq. (12.56), ZP and ZT are the charges of the projectile and the tar-get (hydrogen-like), respectively, and k is the wave vector of the ejectedelectron. We see the typical structure of the integrand similar to that forexcitation. The energy transfer for ionization is given by

\[
\varepsilon = \hbar^2 k^2/2m_e + I, \tag{12.57}
\]

where I is the ionization potential, equal to $\tfrac{1}{2}Z_T^2$ a.u. We note that the minimum momentum transfer as given by Eq. (12.50) depends on k.

The total ionization cross section can be obtained by integrating over k,

\[
\sigma_I^{B1} = \int_0^{k_{\max}} \frac{d\sigma_k^{B1}}{dk}\, dk. \tag{12.58}
\]


12.4. Inelastic scattering and atomic reactions 219

The upper limit $k_{\max}$ is constrained by energy conservation (12.33). For heavy-particle impact, it is practically infinity, though the differential cross section falls off quickly after $k \sim Z_T$.
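Before moving on, the double integration of Eqs. (12.56) and (12.58) can be sketched with standard quadrature. The following is a sketch under stated assumptions, not the book's program: the cutoffs qmax = 2v and kmax, and taking the inverse tangent on the (0, π) branch via arctan2, are our choices, here for proton-hydrogen collisions in a.u.

```python
import numpy as np
from scipy.integrate import quad

ZP, ZT = 1.0, 1.0                     # proton projectile, hydrogen target (a.u.)

def dsigma_dk(k, v):
    """Differential ionization cross section, Eq. (12.56), in a.u."""
    eps = 0.5*k*k + 0.5*ZT*ZT         # energy transfer, Eq. (12.57)
    qmin, qmax = eps/v, 2.0*v         # momentum-transfer limits (assumed cutoffs)
    pref = 2**10*np.pi*ZP**2*ZT**6*k/(v*v*(1.0 - np.exp(-2.0*np.pi*ZT/k)))
    def integrand(q):
        # inverse tangent taken on the (0, pi) branch via arctan2 (assumption)
        phase = np.exp(-2.0*ZT/k*np.arctan2(2.0*ZT*k, q*q - k*k + ZT*ZT))
        num = (q*q + (ZT*ZT + k*k)/3.0)*phase
        den = ((q*q - k*k + ZT*ZT)**2 + 4.0*k*k*ZT*ZT)**3
        return num/(den*q)
    return pref*quad(integrand, qmin, qmax, limit=200)[0]

def sigma_ion(v, kmax=6.0):
    """Total ionization cross section, Eq. (12.58), by outer quadrature over k."""
    return quad(lambda k: dsigma_dk(k, v), 1e-3, kmax, limit=200)[0]
```

For v ≳ 2 the computed cross section decreases with v, following the Born curve; the cutoffs should be enlarged until the result is converged.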

[Figure 12.6 appears here: σ/a₀² versus v/v₀, with curves labeled Expt, Born, and Free Born.]

Figure 12.6: The ionization cross sections of hydrogen by proton impact as a function of collision speed. The free Born results are obtained from Eq. (12.63) with ε = 2I. The experimental data is from Ref. [33].

Figure 12.6 displays the ionization cross sections of hydrogen by protons as a function of collision speed. The Born results are obtained by performing the double integration (12.58) numerically. They describe the experimental data quite well for speeds above 3. Differences begin to emerge at lower speeds, including in the position and the magnitude of the maximum. The experimental peak is at v ∼ 1.5 with a magnitude of ∼ 5, while the Born peak is at v ∼ 1 with a magnitude of ∼ 7.7.

Technically, we can compute the cross sections from Eqs. (12.56) and (12.58) by performing the double integration as discussed above. But conceptually, Eq. (12.56) is not terribly illuminating. For instance, we cannot easily extract the high-energy limit. The complexity comes from the fact that the final state is a continuum Coulomb wave involving the confluent hypergeometric function. However, we can understand the central idea with a simplified picture in which the ejected electron is described as a free particle. The approximation is valid only at high energies, but the concept is revealed in plain sight. We shall call it the free-particle Born approximation (free Born in Figure 12.6), and describe it briefly below.


220 Chapter 12. Classical and quantum scattering

Free-particle Born model for ionization

We assume the final-state wave function of the ejected electron is a plane wave, $\phi_k(\vec r) = \exp(i\vec k\cdot\vec r)$, representing a free particle. The final-state interaction with the target is ignored. The plane wave is simple but not orthogonal to the initial 1s state. This can introduce spurious contributions for small k. To correct for this unphysical behavior, we enforce orthogonality with the initial state via the Gram-Schmidt orthogonalization procedure [2],

\[
\phi_k(\vec r) = e^{i\vec k\cdot\vec r} - \phi(k)\,\phi_{1s}(r), \tag{12.59a}
\]
\[
\phi(k) = \int \phi_{1s}^*(r)\, e^{i\vec k\cdot\vec r}\, d^3r = \frac{8\sqrt{\pi}\, a_0^{3/2}}{(1+k^2a_0^2)^2}, \tag{12.59b}
\]

such that $\int \phi_{1s}^*(r)\,\phi_k(\vec r)\, d^3r = 0$. Note that Eq. (12.59b) is simply the momentum-space wave function of the ground state.

We can obtain the inelastic form factor for ionization by substituting Eq. (12.59a) into (12.44),

\[
F_{k,1s} = \int \phi_k^*(\vec r)\, e^{-i\vec q\cdot\vec r}\, \phi_{1s}(\vec r)\, d^3r = \phi(\vec k+\vec q) - \phi(k)\,F_{1s}(q), \tag{12.60}
\]

where we have used the property $\phi(\vec k) = \phi(-\vec k)$. The function $F_{1s}(q)$ is the elastic form factor, given by

\[
F_{1s}(q) = \int |\phi_{1s}(r)|^2\, e^{-i\vec q\cdot\vec r}\, d^3r = \left(1+\tfrac{1}{4}q^2a_0^2\right)^{-2}. \tag{12.61}
\]

Substitution of Eq. (12.60) into (12.44) yields the free-particle Born (FB) transition matrix element

\[
T_{k,1s}^{FB}(q) = -\frac{4\pi Z_P}{q^2}\left[\phi(\vec k+\vec q) - \phi(k)\,F_{1s}(q)\right]. \tag{12.62}
\]

We can interpret the two terms in the bracket of Eq. (12.62) as follows: the first term describes the direct ejection of the electron with the total momentum $\vec k+\vec q$, and the second term describes the simultaneous scattering from the electron cloud.9

9The overlap integral in Eq. (12.62) often occurs in a sudden approximation due to an abrupt change in a system. Sometimes this is called the shake-up (excitation) or shake-off (ionization) process in atomic reactions [25].


We can now perform the integrals over q in Eq. (12.40) and over k analytically to obtain the total ionization cross section, $\sigma_i^{FB}$. The details are given in Section 12.D.2, and the result is

\[
\sigma_i^{FB} = \frac{4\pi Z_P^2 a_0^2}{\hbar^2 v^2}\left[\ln\frac{1+\beta}{\beta} - \frac{(4+3\beta)^2}{12(1+\beta)^3}\right], \qquad \beta = \frac{a_0^2 q_{\min}^2}{4}. \tag{12.63}
\]

The variable $q_{\min}$ is the average minimum momentum transfer. A good value to use is $q_{\min} = \varepsilon/\hbar v = 2I/\hbar v$.

In the limit of large v, we immediately see from Eq. (12.63) that the free Born result scales as $\sigma_i^{FB} \propto -\ln\beta/v^2 \sim \ln E/E$. This is the same scaling as the leading term of the dipole-allowed excitation cross section (12.54). In ionization, multiple partial waves make up the continuum wave function of well-defined momentum, including the p-wave. This makes ionization naturally dipole-allowed, hence the $\ln E/E$ scaling. A characteristic of dipole-allowed transitions is that the inelastic form factor must scale as q for small momentum transfers, as evidenced by Eq. (12.55) for excitation and Eq. (12.60) for ionization (Exercise E12.11). Consequently, the $T$-matrix behaves as 1/q for $q\to 0$, leading to $\int dq/q \sim -\ln q_{\min}$ in the cross sections (12.40). This is the origin of the logarithmic factor ln E.

Referring to Figure 12.6, the free Born results describe the qualitative trend at higher speeds adequately. The absolute cross sections are off because the magnitude depends on the choice of the average energy transfer ε. We chose ε = 2I. To bring it closer to the Born results, we would need to increase ε slightly.
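Equation (12.63) is simple enough to evaluate directly. The sketch below assumes the bracket has the form $\ln[(1+\beta)/\beta] - (4+3\beta)^2/12(1+\beta)^3$ and uses ε = 2I, i.e. $q_{\min} = 1/v$ for hydrogen, in atomic units:

```python
import numpy as np

def sigma_free_born(v, ZP=1.0, eps=1.0):
    """Free-particle Born total ionization cross section, Eq. (12.63), in a.u.
       eps is the average energy transfer; eps = 2I = 1 a.u. for hydrogen."""
    qmin = eps/v                       # average minimum momentum transfer
    beta = qmin*qmin/4.0               # a0 = 1 in atomic units
    bracket = np.log((1.0 + beta)/beta) - (4.0 + 3.0*beta)**2/(12.0*(1.0 + beta)**3)
    return 4.0*np.pi*ZP**2/(v*v)*bracket
```

At large v the bracket is dominated by $-\ln\beta = \ln 4v^2$, reproducing the ln E/E scaling noted above.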

12.5 Classical dynamics of atomic reactions

We have seen that the first-order Born approximation provides a good description of atomic reactions from intermediate to high collision speeds. At lower speeds, the method breaks down, as higher-order corrections become important. However, perturbative descriptions beyond first order are difficult to calculate in practice. A bigger challenge is that the perturbative series sometimes may not even converge. In these cases, non-perturbative methods aiming for fully numerical solutions of the Schrödinger equation are often employed, such as the coupled-channel methods discussed in Chapter 8.


Alternatively, we can resort to classical descriptions of atomic reactions. Compared to the quantal treatment, classical methods are often simpler to use, and can provide useful insight into the collision dynamics. For intermediate collision energies, classical methods can be very accurate in many cases under appropriate conditions. We discuss one approach called the classical trajectory Monte Carlo (CTMC) method [28].

The CTMC method consists of three phases: selection of initial conditions, evolution of the collision system, and determination of the outcome. We defer the discussion of the first and the third phases, and describe the second phase first.

12.5.1 Dynamical evolution in CTMC

We consider a system of N particles, electrons and nuclei, interacting via the Coulomb force. In CTMC, the collision system evolves classically according to the Newtonian equations of motion, exactly as in molecular dynamics (Section A:11.4). Analogous to Eqs. (A:11.34) and (A:11.35), the equations of motion for the N-body Coulombic system are

\[
\frac{d\vec r_i}{dt} = \vec v_i, \qquad \frac{d\vec v_i}{dt} = \vec a_i, \tag{12.64a}
\]
\[
\vec a_i = \frac{Z_i}{m_i}\sum_{j\neq i} Z_j\,\frac{\vec r_i-\vec r_j}{r_{ij}^3}, \tag{12.64b}
\]

where $Z_i$ and $m_i$ are the charge and mass of particle i, respectively. Compared to molecular dynamics, we have made the pair-wise Coulomb forces $\vec f_{ij}$ explicit in Eq. (12.64b).

N-body leapfrog with time transformation

In molecular dynamics, we used the standard leapfrog method. It worked well there because the atoms exerted repulsive forces at close range. For the N-body problem we face in atomic reactions, some of the forces in Eq. (12.64b) are attractive ($Z_iZ_j < 0$). We would still like to use a symplectic method to solve the problem. The standard leapfrog is not efficient, since we would have to use very small step sizes to avoid the attractive singularities in the Coulomb forces. However, we have shown that the leapfrog


method with a proper time transformation can handle the singularities efficiently and robustly (Section A:4.3.3). We extend the method to N-body systems.

As before, we introduce the fictitious time s via the transformation

\[
\frac{ds}{dt} = \Omega(\vec r_1, \vec r_2, \ldots, \vec r_N). \tag{12.65}
\]

Generalizing Eq. (A:4.28), we choose the transformation as

\[
\Omega = \frac{1}{2}\sum_i \sum_{j\neq i} \frac{1}{|\vec r_i - \vec r_j|}, \tag{12.66}
\]

which is a sum over all inter-particle distances. For a fixed fictitious time step, a larger Ω translates into a smaller actual time step per Eq. (12.65). This ensures automatic self-adjustment of the actual step size when particles are close.

To convert Eqs. (12.64a) and (12.64b) into a set of ODEs suitable for symplectic integration, we introduce the auxiliary variable W = Ω (the generalized velocity), as before, such that

\[
\frac{dW}{dt} = \sum_i \nabla_i\Omega\cdot\vec v_i. \tag{12.67}
\]

In terms of the fictitious (transformed) time s, we can rewrite (12.64a) and (12.64b) as a set of equations mirroring (A:4.27a)–(A:4.27d),

\[
\frac{d\vec r_i}{ds} = \frac{1}{W}\vec v_i, \qquad \frac{dt}{ds} = \frac{1}{W}, \tag{12.68a}
\]
\[
\frac{d\vec v_i}{ds} = \frac{1}{\Omega}\vec a_i, \qquad \frac{dW}{ds} = \frac{1}{\Omega}\sum_i \vec\Gamma_i\cdot\vec v_i, \tag{12.68b}
\]

where $\vec\Gamma_i$ is the gradient of Ω,

\[
\vec\Gamma_i = \nabla_i\Omega = \sum_{j\neq i} \frac{\vec r_j-\vec r_i}{r_{ij}^3}. \tag{12.69}
\]


Applying the standard leapfrog method to Eqs. (12.68a) and (12.68b), we obtain the N-body leapfrog algorithm with time transformation as

\[
\vec r_{i,\frac12} = \vec r_{i,0} + \vec v_{i,0}\frac{h}{2W_0}, \qquad t_{\frac12} = t_0 + \frac{h}{2W_0}, \tag{12.70a}
\]
\[
\vec v_{i,1} = \vec v_{i,0} + \vec a_{i,\frac12}\frac{h}{\Omega_{\frac12}}, \qquad W_1 = W_0 + \frac{h}{\Omega_{\frac12}}\sum_i \vec\Gamma_{i,\frac12}\cdot\frac{\vec v_{i,0}+\vec v_{i,1}}{2}, \tag{12.70b}
\]
\[
\vec r_{i,1} = \vec r_{i,\frac12} + \vec v_{i,1}\frac{h}{2W_1}, \qquad t_1 = t_{\frac12} + \frac{h}{2W_1}. \tag{12.70c}
\]

Equations (12.70a)–(12.70c) are the N-body equivalents of (A:4.29a)–(A:4.29f), which we recover if N = 1. This algorithm is implemented below and will be used in the CTMC method for the time evolution of the collision system.

Program listing 12.1: N-body leapfrog with time transformation (leapfrog_ttN.py)

import numpy as np

def leapfrog_ttN(lfdiffeq, r, v, t, w, h):
    """ N-body leapfrog with time transformation,
        Omega = \sum 1/r_ij """
    hw = h/(2.0*w)                          # 1st step: calc r at h/2
    r += v*hw
    t += hw

    a, Omega, Gamma = lfdiffeq(1, r, v, t)  # calc. a, Omega, grad Omega
    hw = h/Omega                            # 2nd step
    v1 = v + a*hw
    w += np.sum(Gamma*(v + v1))*hw*0.5      # calc. sum of Gamma dot v

    hw = h/(2.0*w)                          # 3rd step: calc r at h
    t += hw
    r += v1*hw
    return r, v1, t, w

The workings of Program 12.1 are nearly identical to the one-body code, Program A:4.2. However, the N-body version skips calls to lfdiffeq() for velocities, and appears to be simpler. This is because this version expects the lfdiffeq() function to return the accelerations as well as Ω and its gradient ∇Ω. In addition, it uses the NumPy function np.sum() to calculate the dot product $\sum_i\vec\Gamma_{i,\frac12}\cdot(\vec v_{i,0}+\vec v_{i,1})$ in Eq. (12.70b). As usual, we add Program 12.1 to our collection in the ode.py library.
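As a usage sketch of Program 12.1, we can integrate a classical hydrogen "atom", a proton and an electron on a circular orbit in a.u., and monitor the conserved total energy. The driver below, including the two-body coulomb() derivative function, is our illustration, not from the book; it returns the accelerations of Eq. (12.64b), Ω of Eq. (12.66), and Γ of Eq. (12.69):

```python
import numpy as np

def leapfrog_ttN(lfdiffeq, r, v, t, w, h):       # Program 12.1
    hw = h/(2.0*w)
    r += v*hw; t += hw
    a, Omega, Gamma = lfdiffeq(1, r, v, t)
    hw = h/Omega
    v1 = v + a*hw
    w += np.sum(Gamma*(v + v1))*hw*0.5
    hw = h/(2.0*w)
    t += hw; r += v1*hw
    return r, v1, t, w

Z = np.array([1.0, -1.0])                        # proton, electron charges (a.u.)
m = np.array([1836.0, 1.0])                      # masses (a.u.)

def coulomb(flag, r, v, t):
    """Accelerations (12.64b), Omega (12.66), and Gamma = grad Omega (12.69)."""
    a, Gamma, Omega = np.zeros_like(r), np.zeros_like(r), 0.0
    for i in range(len(r)):
        for j in range(len(r)):
            if i == j: continue
            d = r[i] - r[j]; rij = np.sqrt(d @ d)
            a[i] += Z[i]*Z[j]*d/(m[i]*rij**3)
            Gamma[i] -= d/rij**3                 # both half-terms of Eq. (12.66)
            Omega += 0.5/rij
    return a, Omega, Gamma

def energy(r, v):
    E = 0.5*np.sum(m*np.sum(v*v, axis=1))        # kinetic
    return E + Z[0]*Z[1]/np.linalg.norm(r[0] - r[1])

r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])     # electron on a Bohr orbit
v = np.array([[0.0, -1.0/1836.0, 0.0], [0.0, 1.0, 0.0]])
t, h = 0.0, 0.01
w = coulomb(1, r, v, t)[1]                           # W(0) = Omega(0)
E0 = energy(r, v)
for _ in range(3000):                                # a few orbital periods
    r, v, t, w = leapfrog_ttN(coulomb, r, v, t, w, h)
print(abs(energy(r, v) - E0))                        # small energy drift
```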


12.5.2 Initial conditions and outcome determination

We now discuss the first and the third phases of the CTMC method, namely, the selection of initial conditions and the determination of the outcome.

Initial conditions

Using the proton-hydrogen collision (a three-body system) as the example, we must set the initial conditions for the incoming proton (projectile), the electron, and the target nucleus. Working in the laboratory reference frame, we assume the projectile is fired from far away from the target, with a well-defined initial velocity $\vec v = v\hat z$. Its initial position can be taken to be $\vec r_P = (b, 0, -R)$, where b is the impact parameter and $R \gg a_0$ is the initial z-distance to the target. The velocity and position of the target nucleus may be set to zero (or close to zero in the center-of-mass frame of the target atom).

For the electron, we should sample the initial conditions to mimic the quantum mechanical distributions of the initial state as closely as possible. The target atom has a well-defined energy E, and should be treated as a microcanonical ensemble (see Section 12.E). We obtain the position and velocity of the electron by Monte Carlo sampling under the constraint that the energy be conserved. This produces the same momentum distribution as the quantum mechanical distribution (12.59b), $|\phi(k)|^2$, for the ground state of hydrogen (Exercise E12.12). This is important for collision processes, which are generally more sensitive to the momentum profile than to the position profile.

Besides energy, the hydrogen atom has two other constants of motion, the total angular momentum (L) and its z-component (Lz). The simplest approach is to disregard the angular momentum distribution and assume the Bohr model of circular orbits. This corresponds to setting the angular momentum to its maximum value, or zero eccentricity (A:4.12). A second, more realistic approach is to sample the classical angular momentum as $l-\frac12 < L/\hbar \le l+\frac12$ for a given quantum number l > 0. This works well for higher angular momentum states, where the semiclassical quantization (12.23) is valid. For s-states (l = 0), we switch to $0 < L/\hbar \le 1$ to approximate the angular momentum distribution. By sampling different L values, we are effectively sampling orbits of different eccentricities.

Below, we summarize the Monte Carlo procedure for sampling the initial conditions, assuming the hydrogen atom is in the ground state.


• Sample an impact parameter with b² uniformly distributed in $[0, b_{\max}^2]$, corresponding to a uniform flux. Set the position of the projectile to (b, 0, −R). Skip the next two steps if circular orbits are assumed.

• Pick an eccentricity with e² uniformly distributed between 0 and 1, corresponding to a uniform distribution of L² in the same range (A:4.10). Together with the semimajor axis a in Eq. (A:4.12), the Kepler orbit is uniquely determined.

• Determine the position and velocity of the electron (see Section 12.E). To do so, we sample the position in the Kepler orbit uniformly in time. We can do this by solving Kepler's equation (12.121) to obtain the eccentric anomaly (12.120), and subsequently determining the position and the velocity from Eqs. (12.123a) and (12.123b).

• Orient the plane of the Kepler orbit uniformly in space. This can be done via a rotation through random Euler angles (12.125).
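The sampling steps above can be sketched as follows for the ground state (E = −1/2 a.u., hence semimajor axis a = 1 and mean motion n = 1). The function name and the in-plane Kepler-orbit formulas standing in for Eqs. (12.120)–(12.125) are our own:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_electron():
    """One microcanonical sample of (position, velocity) for 1s hydrogen, a.u."""
    e = np.sqrt(rng.random())                 # e^2 uniform in [0, 1)
    M = 2.0*np.pi*rng.random()                # mean anomaly: uniform in time
    E = M + e*np.sin(M)                       # Newton's method for Kepler's eqn
    for _ in range(100):
        E -= (E - e*np.sin(E) - M)/(1.0 - e*np.cos(E))
    # in-plane position and velocity for a = 1, GM = 1
    r = np.array([np.cos(E) - e, np.sqrt(1.0 - e*e)*np.sin(E), 0.0])
    v = np.array([-np.sin(E), np.sqrt(1.0 - e*e)*np.cos(E), 0.0])/(1.0 - e*np.cos(E))
    # random Euler rotation: uniform orientation of the orbital plane
    phi, psi = 2.0*np.pi*rng.random(2)
    cth = 2.0*rng.random() - 1.0; sth = np.sqrt(1.0 - cth*cth)
    Rz = lambda t: np.array([[np.cos(t), -np.sin(t), 0.0],
                             [np.sin(t),  np.cos(t), 0.0],
                             [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cth, -sth], [0.0, sth, cth]])
    R = Rz(phi) @ Rx @ Rz(psi)
    return R @ r, R @ v
```

Every sample has total energy exactly −1/2 a.u. by construction; repeating the call many times builds up the distributions of Figure 12.7.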

Figure 12.7 shows the microcanonical position and velocity distributions (200 sampling points) of the electron in ground-state hydrogen atoms. We can see faint outlines of Kepler orbits (Figure A:4.3) in the position distribution before the Euler rotation. The density is highest near the aphelion around (2, 0), where the electron is slowest. After the rotation, the distribution becomes isotropic, as required. For the velocity distribution, the pre-rotation data shows predominantly positive $v_y$ velocities. This is because the electron spends most of its time moving upward, first toward the aphelion on the lower part of the orbit and then away from the aphelion on the upper part (see Figure A:4.3). Once the Euler rotation is applied, the asymmetry is removed and we obtain a spherically symmetric distribution, as expected.

Determination of the outcome

With the initial conditions selected as described, we can proceed to integrate the equations of motion, follow the system until the collision is over, and analyze the outcome, including excitation, ionization, or capture, in which the electron jumps to a bound state of the projectile.

Let $E_T$ and $E_P$ be the relative energies of the electron in the target and projectile frames, respectively, at the end of the collision. They can be


[Figure 12.7 appears here: four panels of scatter plots, position (x/a₀, y/a₀) and velocity (vx/v₀, vy/v₀), pre-rotation and post-rotation.]

Figure 12.7: The position (top) and velocity (bottom) distributions of the electron in the hydrogen atom before and after Euler rotations.

calculated as

\[
E_T = \frac{1}{2}\mu_T|\vec v_e - \vec v_T|^2 + V_{Te}(\vec r_e - \vec r_T), \tag{12.71a}
\]
\[
E_P = \frac{1}{2}\mu_P|\vec v_e - \vec v_P|^2 + V_{Pe}(\vec r_e - \vec r_P), \tag{12.71b}
\]

where $\vec r_i$ and $\vec v_i$, i = e, T, P, are the respective positions and velocities of the electron, the target nucleus, and the projectile, and $V_{Te}$ and $V_{Pe}$ are the potential energies between the electron and the target and projectile, respectively. The $\mu_T$ and $\mu_P$ denote the reduced masses of the electron with the target nucleus and with the projectile.


Table 12.2: Classification of reaction events.

    E_T      negative      positive      positive
    E_P      positive      positive      negative
    Event    excitation    ionization    capture

Table 12.2 summarizes the possible outcomes of a collision. If $E_T < 0$ and $E_P > 0$, the electron is in a bound state of the target nucleus, and we classify the event as excitation (including elastic collision). If the signs are reversed, i.e., $E_T > 0$ and $E_P < 0$, the electron is captured by the projectile. If both $E_T > 0$ and $E_P > 0$, the electron is in the continuum, and it is an ionization event. In rare cases, particularly in slow collisions, it is possible that both energies are negative even though the projectile has receded far from the target. In such cases, we either have to continue the integration until the outcome is clear, or mark the event as indeterminate.

Repeating the Monte Carlo simulation $N_{mc}$ times, we can build up a statistical profile of these events. Let $N_{exc}$, $N_{ion}$, and $N_{cap}$ represent the numbers of events for excitation, ionization, and capture, respectively. The cross sections can be calculated by multiplying the probabilities by the total cross section $\pi b_{\max}^2$ as

\[
\sigma_{exc} = \pi b_{\max}^2\frac{N_{exc}}{N_{mc}}, \qquad \sigma_{ion} = \pi b_{\max}^2\frac{N_{ion}}{N_{mc}}, \qquad \sigma_{cap} = \pi b_{\max}^2\frac{N_{cap}}{N_{mc}}. \tag{12.72}
\]

We can also obtain differential cross sections by subclassification, such as by scattering angle, by excitation to a specific n level, or by ejected electron energy.
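The event bookkeeping of Table 12.2 and Eq. (12.72) amounts to a few lines. The function names below are illustrative; they are not taken from Program 12.2:

```python
import numpy as np

def classify(ET, EP):
    """Classify one event per Table 12.2 from the final relative energies."""
    if ET < 0.0 and EP > 0.0: return 'excitation'
    if ET > 0.0 and EP > 0.0: return 'ionization'
    if ET > 0.0 and EP < 0.0: return 'capture'
    return 'indeterminate'           # both negative: integrate further

def cross_sections(events, bmax):
    """Cross sections from a list of event labels, Eq. (12.72)."""
    area = np.pi*bmax**2             # total cross section pi*bmax^2
    return {kind: area*events.count(kind)/len(events)
            for kind in ('excitation', 'ionization', 'capture')}
```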

12.5.3 Results and discussion

We implement the core CTMC method in Program 12.2. All the results below are obtained from this program, sometimes with slight modifications.

In Figure 12.8 we show several typical trajectories in proton-hydrogen collisions, illustrating three-body dynamics leading to ionization, excitation, and capture. The projectile has speed 1.5 and comes out of the page. In the ionization event, the electron receives a strong impulse from the projectile, making a turn around the target nucleus before flying off, bound to neither the target nor the projectile. For excitation, the electron orbit is perturbed


Figure 12.8: Sample trajectories for ionization, excitation, and capture (including a zoom-in view at upper right) in proton-hydrogen collisions.

to such an extent that it roughly reverses its sense of rotation about the target nucleus. The energy and momentum imparted are not sufficient to knock it out.

For capture, the orbit is turned by the force from the projectile in just the right way to enable the electron to jump to the moving projectile, falling into a bound state in the moving frame. This is typical of capture at intermediate speeds. Of all the events leading to capture, it is crucial that the initial condition of the electron be such that velocity matching is maximized. As we will see shortly, capture is very sensitive to the collision speed, falling off rapidly with increasing speed.

We also see that the projectile travels in a nearly straight-line trajectory, and the target nucleus stays at the origin to a good degree of approximation. This is due to their large mass ratios to the electron mass. If we are not interested in the slight deflection of the projectile, on the order of $m_e/m_P \sim 1/2000$, we can simply assume that it moves in a straight line with constant velocity. This can result in a substantial gain in the speed of computation,


as we only need to track the motion of the electron in a time-dependent field. Coupled with the self-adjustment of step size in the leapfrog method (Section 12.F), we can run quite large jobs even on a personal computer. We leave further exploration to Project P12.10. The results presented below are obtained with full three-body dynamics. However, little difference is expected from the straight-line approximation.

Excitation and ionization cross sections

With a sufficient number of Monte Carlo runs, we can obtain the cross sections from Eq. (12.72) (Project P12.8). Table 12.3 lists the CTMC cross sections for excitation, ionization, and capture as a function of collision speed near the ionization maximum. The earlier Born cross sections are also tabulated for comparison. The cross sections over a wider speed range are shown in Figure 12.9.

Table 12.3: CTMC cross sections for excitation (n = 2), ionization, and capture of hydrogen by proton impact. The cross sections and collision speeds are in a.u. (see Table A:8.1), $a_0^2 = 2.8\times10^{-17}$ cm² and $v_0 = 2.19\times10^6$ m/s, respectively.

                       CTMC                              Born
    Speed   Excitation  Ionization  Capture   Excitation  Ionization
    0.50    0.569       0.030       10.7      6.04        3.01
    0.60    0.924       0.098       12.1      6.93        4.68
    0.72    0.746       0.275       11.8      7.31        6.29
    0.86    1.02        0.903       11.8      7.19        7.38
    1.04    1.47        1.96        10.1      6.66        7.73
    1.15    2.03        3.32        7.37      6.27        7.58
    1.24    2.16        4.30        5.57      5.95        7.33
    1.38    2.53        5.09        3.59      5.46        6.83
    1.49    2.51        5.44        2.21      5.09        6.40
    1.66    2.80        5.38        1.30      4.59        5.74
    1.79    2.85        5.01        0.746     4.24        5.26
    1.99    2.99        4.46        0.357     3.77        4.60
    2.38    2.54        3.42        0.118     3.05        3.58
    2.86    1.93        2.62        0.020     2.41        2.69


[Figure 12.9 appears here: two panels, σe/a₀² and σi/a₀² versus v/v₀, with curves labeled CTMC, CTMC-Bohr, Born, and Expt.]

Figure 12.9: Excitation to n = 2 (left) and ionization (right) cross sections of hydrogen by proton impact as a function of collision speed. The sources of experimental data are the same as in Figure 12.5 and Figure 12.6.

For ionization, the CTMC and the Born cross sections are in good agreement for speeds v ≥ 2. However, for excitation, the CTMC results consistently lie below the Born results up to v ∼ 4 (Figure 12.9). What is the cause? The discrepancy in the latter case illustrates certain ambiguities between classical and quantum systems.

Unlike ionization, where E > 0 is clearly distinguishable from bound states, excitation to specific quantum levels is less certain in classical mechanics. Not only are bound states not quantized in classical mechanics, there is also no lower bound on the bound-state energy. In effect, after the collision, the electron can be left in states below the ground state it started from before the collision, in a process we may call de-excitation. The question then is: how do we classify the final-state distribution in these cases? There is no unique scheme to do so, except that we require that the Bohr-Sommerfeld correspondence principle be satisfied in the limit of large quantum numbers (see Eq. (A:8.59) on quantum revival).

Let $n_c = Z_T/\sqrt{-2E_T}$ be the "classical" principal quantum number, where $Z_T$ is the nuclear charge and $E_T$ is defined by Eq. (12.71a). The ground state corresponds to $n_c = 1$ according to Eq. (12.118). For a given principal quantum number n, we adopt a classification scheme as

\[
\left[(n-1)(n-\tfrac12)\,n\right]^{1/3} \le n_c < \left[n(n+\tfrac12)(n+1)\right]^{1/3}. \tag{12.73}
\]


Any value of $n_c$ satisfying Eq. (12.73) is assigned to the level n. Equation (12.73) reduces to $n-\frac12 \le n_c < n+\frac12$ for $n \gg 1$. The CTMC cross sections for excitation to n = 2 are determined according to Eq. (12.73).

Compared to the experimental data (Figure 12.9), the CTMC results for ionization agree with the data very well from the maximum position, v > 1.5. For excitation, the agreement is satisfactory for v > 2, despite the uncertainties in level determination (12.73). Figure 12.9 shows that the CTMC method can reliably predict cross sections from intermediate speeds and up, $v \gtrsim 1$. Its range of validity is much extended relative to the Born approximation.
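The binning rule (12.73) translates directly into code; the helper name below is ours:

```python
def nc_level(nc, nmax=50):
    """Assign a classical principal number nc to a quantum level n via Eq. (12.73)."""
    for n in range(1, nmax + 1):
        lo = ((n - 1.0)*(n - 0.5)*n)**(1.0/3.0)
        hi = (n*(n + 0.5)*(n + 1.0))**(1.0/3.0)
        if lo <= nc < hi:
            return n
    return None          # outside the binned range
```

For example, the n = 2 bin covers 3^(1/3) ≈ 1.44 ≤ nc < 15^(1/3) ≈ 2.47, already close to the n ∓ 1/2 limits.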

We also show in Figure 12.9 the CTMC-Bohr results, in which the electron is initially in a simple, circular Bohr orbit (randomized in orientation) rather than a true microcanonical distribution. The momentum profile in the Bohr model is a sharp, delta-function distribution. It serves as a check on the dependence on the initial conditions. It produces overall greater cross sections, by up to 30%, in both excitation and ionization for speeds up to v ∼ 3. For ionization, the difference is most prominent near the maximum, v ∼ 1.5, although the peak positions remain unshifted. The enhanced cross section is a result of an interesting balance between momentum and energy transfers. Further studies are left to Project P12.11. The differences between the microcanonical ensemble and the Bohr model vanish at larger speeds.

At low speeds, both the excitation and ionization cross sections decrease with decreasing speed, but the rate is much more rapid for ionization. On closer examination of electron trajectories leading to ionization (Figure 12.8), we find that longer interaction times result in larger momentum transfers than necessary for ionization. Therefore, ionization events are strongly suppressed due to the mismatch. Instead, capture becomes the predominant process.

Capture cross sections

Capture reactions may be represented as

\[
\mathrm{H^+ + H \longrightarrow H + H^+}. \tag{12.74}
\]

Capture is qualitatively different from excitation or ionization in that the center of the final state has switched from the target nucleus, the center of the initial state, to the projectile. In the entrance channel, the perturbation is the projectile-electron interaction; in the exit channel, it is the


target-electron interaction. The prior and post forms of the perturbation, as they are called, are different in formal scattering theory [4]. For this reason, a perturbative approach like the Born approximation is ambiguous for capture. Even if we carried out a first-order calculation, it would be expected to be valid only from intermediate to high speeds, based on our study of excitation and ionization. Therein lies another problem we will discuss shortly: a first-order theory is actually inadequate at high speeds for capture.

[Figure 12.10 appears here: σc/a₀² versus v/v₀ on a logarithmic scale from 10⁻² to 10², with curves labeled CTMC, CTMC-Bohr, and Expt.]

Figure 12.10: Capture cross section for electron capture in proton-hydrogen collisions. The experimental data is from Ref. [11].

We have no such ambiguity within the CTMC framework. The determination of the final state from Table 12.2 is no different for capture than for ionization or excitation. Figure 12.10 displays the capture cross sections from the two initial conditions, the microcanonical ensemble and the Bohr model.

At low speeds, the capture cross section reaches a plateau, in contrast to the sharp decline for excitation or ionization. Roughly, this may be understood in terms of the potential barrier between the projectile and the target. In slow collisions, we can visualize electron capture as a process of the electron flowing over the barrier, i.e., over the saddle point, similar to the Lagrange point L2 in Figure A:4.17, to the projectile. The shape of the barrier is independent of the collision speed at v ∼ 0, yielding a nearly constant cross section.


The experimental data shows an increasing cross section with decreasing speed, and the difference is about one order of magnitude above the CTMC result. The large difference indicates that the classical over-the-barrier explanation above is not sufficient. Tunneling through the barrier is an important factor at low speeds, and it is missing in classical dynamics. A full quantal calculation, or a hybrid calculation taking tunneling effects into account, is necessary to properly describe capture cross sections at low collision speeds.

Above intermediate speeds, $v \gtrsim 1$, the CTMC theory predicts a rapid fall-off of the cross sections. From v = 1 to 3, the cross section drops by about three orders of magnitude, roughly at a rate of 1.5 orders of magnitude per atomic unit of speed. There is good agreement with the experiment in this speed range.

However, starting from $v \gtrsim 1.7$, the CTMC-Bohr model severely underestimates the capture cross section. For capture above the orbital speed, direct velocity matching is the most efficient mechanism, as stated earlier (Figure 12.8, capture). If the velocity of the electron is comparable to the projectile velocity, the electron can most easily "glide" into the projectile frame, increasing the probability for capture. In the microcanonical ensemble, there is a high-velocity tail in the momentum distribution (Exercise E12.12) that can provide the component required for direct velocity matching. But this is missing in the Bohr model, causing its cross section to fall off precipitously.

Thomas two-step mechanism

What happens at even higher collision speeds? Direct velocity matching becomes ineffective because the high-momentum component becomes very small in the microcanonical ensemble. Thomas proposed a wholly classical, two-step mechanism for capture [36], illustrated in Figure 12.11.

In the first step, the electron is scattered to 60° in a hard, binary collision with the projectile, accelerating it to the speed of the projectile. Subsequently, the electron scatters off the target nucleus by another 60°, turning its velocity into the direction of the projectile. Indirect velocity matching is satisfied, and the two-step mechanism is expected to dominate the cross section. Pure kinematic considerations, i.e., energy and momentum conservation, show that the speed $v_e$ of the electron and the scattering angle $\theta_P$ of the projectile after the first binary collision depend on the electron

Figure 12.11: The Thomas two-step capture mechanism. The first collision with the projectile accelerates the electron to the projectile’s speed, and the second collision at the target nucleus deflects it into the forward direction. (The figure labels the projectile P, the target nucleus T, the electron e, the velocities ~v and ~ve, and the two 60° scattering angles.)

scattering angle θe as (Exercise E12.14)

    ve ≃ 2v cos θe,   sin θP ≃ (me/mP) sin 2θe,    (12.75)

where the electron-projectile mass ratio is assumed to be small, me/mP ≪ 1.

To match the speed v, the electron scattering angle must be θe = 60°. The projectile scattering angle is then θP = 0.47 mrad, assuming mP to be the proton mass. This is known as the Thomas angle. It is the signature of the Thomas capture mechanism.
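The Thomas angle follows directly from Eq. (12.75); a quick check in Python (the proton-electron mass ratio 1836.15 is the standard value):

```python
import math

# Thomas angle from Eq. (12.75): sin(theta_P) ~ (me/mP) sin(2*theta_e),
# with theta_e = 60 deg so that ve = 2v cos(theta_e) matches the speed v.
me_over_mP = 1.0 / 1836.15          # electron-to-proton mass ratio
theta_e = math.radians(60.0)
theta_P = math.asin(me_over_mP * math.sin(2 * theta_e))
print(f"Thomas angle = {1000 * theta_P:.3f} mrad")   # ~ 0.472 mrad
```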

Indeed, quantum mechanical calculations show that the first-order Born approximation is totally inadequate at high speeds. Instead, a second-order Born approximation corresponding to the Thomas two-step mechanism should be predominant.10 If direct velocity matching in capture is viewed as a two-body process where the target nucleus is a passive spectator, then indirect velocity matching via the Thomas mechanism is a three-body process requiring active participation of the projectile, the electron, and the target nucleus.

However, since Thomas’ postulate in 1927, no substantial evidence of the dominance of indirect velocity matching had been observed for capture from ground-state atoms, leaving the two-step mechanism shrouded in mystery. It turned out that the ground state (1s) provides a sufficient amount of high-

10In terms of the Green’s function (12.4), the second-order Born amplitude may be written as f2 = VT G VP, showing the first interaction VP with the projectile followed by the second interaction VT with the target nucleus.


236 Chapter 12. Classical and quantum scattering

velocity components that direct velocity matching remains predominant for speeds well into the relativistic regime.

Only in the last two decades has evidence emerged that not only can the Thomas mechanism dominate, it can occur even at intermediate speeds [38]. Recent and ongoing studies reveal that the answer lies in capture from highly excited states (Rydberg states, see Section A:9.5, Figure A:9.13 and Ref. [4]), which behave like semiclassical objects. The high-momentum components in Rydberg atoms are strongly suppressed, making indirect velocity matching the dominant pathway to capture at speeds slightly larger (a few times) than the orbital speed (Project P12.12).

Chapter summary

We discussed approximation methods for potential and inelastic scattering in both classical and quantum mechanics. We presented simulation methods for obtaining the cross sections either exactly in full numerical treatment or approximately via perturbative approaches.

We briefly examined classical orbiting. Then we turned our attention to the calculation of the scattering length at low energies. At higher energies or large partial waves, the Born approximation can yield accurate results. The simplicity of the WKB approximation, and its connection to the deflection function, makes it an attractive alternative for strong potentials or at intermediate energies.

For inelastic scattering including reactions such as excitation or ionization, we can use the Born approximation, which provides useful analysis of collision dynamics in terms of momentum and energy transfers. We can compute the cross sections accurately from intermediate to high energies. Below the intermediate energies, the Born approximation becomes inaccurate. However, the classical trajectory Monte Carlo (CTMC) method works well in this region. We can use it to calculate accurate cross sections, including capture, with ease. Furthermore, by examining the trajectories via real-time VPython animation, we can gain complementary insight into collision dynamics such as the Thomas mechanism for capture.

We have extended the one-body, self-adjusting leapfrog method to N-body systems with a time transformation which efficiently avoids the singularities in the Coulomb potential. We also described a similar time-dependent leapfrog method, and applied it to the straightline CTMC approximation for optimized code execution.

12.6 Exercises and Projects

Exercises

E12.1 (a) Compute the effective potential of scattering from the Yukawa potential (A:12.13). Choose Z = −2, a = 1, and two energies E = 0.07 and 1. For each energy, plot the effective potential for the impact parameter b ∈ [10⁻², 4], divided into roughly half a dozen intervals.

For E ∼ 0.07, fine-tune the impact parameter so that the top of the potential barrier is very close to E. What is the approximate b value? What is the value when the potential flattens out, i.e., the barrier and the well disappear?

(b)∗ For the same effective potential as above, find the radii of the bottom of the well and the top of the barrier at a given E and b. Write a program to output these values. Optionally, add a function to the program that can find the value of b such that the top of the barrier is roughly equal to the energy. Give the value for E = 0.07.

E12.2 (a) Apply G⁻¹ = ∇² + k² to both sides of Eq. (12.3) to show that the Schrödinger equation (12.2) is recovered.

(b) Let G(~r, ~r′) be the Green’s function (12.5). By explicit differentiation, show it is a solution to Eq. (12.4),

    (∇² + k²)G(~r, ~r′) = δ(~r − ~r′).

E12.3 (a) For the Yukawa potential (A:12.13), calculate the scattering amplitude, Eq. (12.12), in the first Born approximation. Show that the differential and total cross sections are given by Eqs. (12.13) and (12.14), respectively.

(b) Repeat the same calculations for a potential well (Project A:P12.3), V(r) = −V0 for r ≤ a and zero for r > a.

E12.4 (a) Let fB2 = fB1 + f2 be the second Born scattering amplitude, where fB1 is given by Eq. (12.11). Write out f2 by substituting Eq. (12.15) into (12.10).

Page 246: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

238 Chapter 12. Classical and quantum scattering

(b)∗ For the Yukawa potential (A:12.13), evaluate f2. The job can be made easier by transforming the integrals to momentum space.

E12.5 (a) Writing the partial wave cross section (A:12.32) explicitly as σ(θ) = f(θ)f∗(θ) and using the orthogonality property of the Legendre polynomials

    ∫₀^π Pl(cos θ) Pl′(cos θ) sin θ dθ = [2/(2l + 1)] δll′,

show that the total cross section is given by Eq. (A:12.33).

(b) Prove the optical theorem

    σt = (4π/k) Im f(0).

It states that the total cross section is proportional to the imaginary part of the forward scattering amplitude.

E12.6 (a) Using the recurrence relation (A:12.57), and j0, j1 from Eq. (A:12.56), obtain the following

    j2(x) = (3/x³ − 1/x) sin x − (3/x²) cos x,

    n2(x) = (−3/x³ + 1/x) cos x − (3/x²) sin x.

(b) Apply the von Neumann analysis (Chapter A:6, Section A:6.5.4) to show that the recurrence relation (A:12.57) is unstable for jl(x) in the upward direction but stable in the downward direction. The opposite is true for nl(x).

E12.7 Calculate the WKB phase shift for the Coulomb potential, and show that it contains a logarithmic term ln r. This term is one reason that the 1/r potential is considered long-ranged, and particles never become free even as r → ∞.

Substituting your result into Eq. (12.28), show that it leads to the same deflection function (A:12.52).

Page 247: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

12.6. Exercises and Projects 239

E12.8 Using j1 from Eq. (A:12.56), perform the following integration to show that

    I = ∫₀^∞ e^(−αr) j1(βr) r³ dr = 8αβ/(α² + β²)³.

Express sin(βr) and cos(βr) in j1(βr) in terms of exp(±iβr) and integrate as in Eq. (12.103).
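If the analytic integration proves stubborn, the result can be spot-checked numerically; a minimal sketch with SciPy (the values of α and β are arbitrary test choices):

```python
import math
from scipy.integrate import quad

# Numeric spot-check of I = int_0^inf exp(-a*r) j1(b*r) r^3 dr
#                         = 8ab/(a^2 + b^2)^3.
def j1(x):                                  # spherical Bessel function j1
    return math.sin(x) / x**2 - math.cos(x) / x

alpha, beta = 1.3, 0.8                      # arbitrary positive test values
I, _ = quad(lambda r: math.exp(-alpha * r) * j1(beta * r) * r**3, 0, 50)
exact = 8 * alpha * beta / (alpha**2 + beta**2) ** 3
print(I, exact)                             # the two should agree closely
```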

E12.9 In the limit of high energy E ≫ 1, expand Eqs. (12.51) and (12.52) to first order in q²min, and determine the coefficients A1,2, B1,2, and C.

E12.10 Show explicitly that, for the inelastic form factor Ffi(q) in Eq. (12.44), the dipole term in Eq. (12.55) is zero for 1s → 2s transitions, but nonzero for 1s → 2p transitions. In evaluating the integral (12.44), choose ~q as the z direction.

E12.11 (a) Show that the integral from Eq. (12.116) is

    ∫_qmin^∞ F²1s(q) dq/q³ = 1/(2q²min) − (a0²/2) [ ln((1 + β)/β) − (4 + 3β)²/(12(1 + β)³) ],

where β = a0² q²min/4. First integrate by parts once, then use the results from Project P12.6, and finally simplify the expression by combining like powers of 1/(1 + β)^j.

(b) Expand the free Born inelastic form factor (12.61), and show that

    F1s(q) = c1 q + c2 q² + ...

Obtain the coefficients c1 and c2.

E12.12 Consider the microcanonical ensemble of the hydrogen atom with a fixed energy E. The phase space distribution is given by

    ρ(~r, ~p) = δ(E − H),

where H = p²/2m − ZT/r is the Hamiltonian of the hydrogen atom. Assume the atom is in the ground state.

(a) Integrating over the coordinates, show that the classical momentum distribution is given by ρ(p) = |φ(p)|², with φ(p) defined by Eq. (12.59b).

Page 248: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

240 Chapter 12. Classical and quantum scattering

(b) Integrating over the momentum, obtain the classical position distribution ρ(r). Plot ρ(r) and the quantum probability density |R10(r)|². Compare and discuss the results.
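A sketch of what part (b) involves, assuming the ground-state values E = −0.5 and Z = 1 in atomic units (so the classical density ρ(r) ∝ r²p(r) is confined to r < 2):

```python
import numpy as np

# Classical microcanonical position density rho(r) ~ r^2 p(r), with
# p(r) = sqrt(2(E + Z/r)), for E = -0.5, Z = 1 (nonzero only for r < 2),
# versus the quantum radial density r^2 |R10(r)|^2 = 4 r^2 exp(-2r).
E, Z = -0.5, 1.0
r = np.linspace(1e-4, 2.0 - 1e-4, 2000)
dr = r[1] - r[0]
rho = r**2 * np.sqrt(2 * (E + Z / r))
rho /= rho.sum() * dr                     # normalize to unit area
quantum = 4 * r**2 * np.exp(-2 * r)
print(quantum.sum() * dr)                 # ~0.76: quantum density leaks past r = 2
```

The classical density vanishes identically beyond r = 2, while the quantum density has an exponential tail there; this is the key difference to discuss.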

E12.13 (a) Combining Eqs. (A:4.13) and (A:4.38), verify Eq. (12.122).

(b) Using Eqs. (A:4.6), (A:4.12), (A:4.13), and (12.122), show that the position and velocity are given by Eqs. (12.123a)–(12.123b).

E12.14 Consider an elastic collision of a projectile with speed v and mass mP impinging on an electron of mass me at rest. Assuming the electron is scattered to an angle θe relative to the incident direction, calculate the final velocities ve of the electron and vP of the projectile, and the scattering angle θP of the projectile. If the mass ratio me/mP ∼ 0, derive the Thomas scattering angles to the lowest order in the ratio, Eq. (12.75).
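The limit can be checked numerically before taking it analytically; a sketch (electron_speed() is our own helper, not from the text):

```python
import math

# Two-body elastic kinematics: projectile (mass mP, speed v) strikes an
# electron (mass me) at rest; the electron recoils at angle theta_e with
# ve = 2 v cos(theta_e) * mP/(mP + me)  ->  2 v cos(theta_e) as me/mP -> 0,
# which is the first relation in Eq. (12.75).
def electron_speed(v, theta_e, mP=1836.15, me=1.0):
    return 2 * v * math.cos(theta_e) * mP / (mP + me)

v, theta_e = 1.0, math.radians(60.0)
print(electron_speed(v, theta_e), 2 * v * math.cos(theta_e))  # nearly equal
```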

Projects

P12.1 (a) Write a program to investigate particle trajectories and orbiting in scattering from the Yukawa potential (A:12.13). Since the potential is singular, it is essential to use a stable ODE solver for the equations of motion. The leapfrog method with time transformation (Program A:4.2, or A:4.3) works well in this case.

Integrate the equations of motion in the x-y plane (force ~F = −(dV/dr) r̂).

For the initial condition, set (x, y) = (−rmax, b), and (vx, vy) = (v, 0), where v is given by Eq. (A:12.1). Animate the motion and plot the trajectories for several impact parameters. Test the program using the same parameters as in Figure 12.1: Z = 2, a = 1, E = 0.1, and rmax = 8.

(b) Explore orbiting in the parameter space around E ∼ 0.05−0.1 and b ∼ 3−3.5 (see Exercise E12.1). The goal is to find a combination so that the top of the barrier in the effective potential is as close to E as possible. Figure 12.12 shows such a case, in which the particle orbits more than nine times before leaving the interaction region.

You should have no trouble finding two or more orbiting revolutions. As a challenge, find an E and b where the particle orbits ten times or more.

Page 249: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

12.6. Exercises and Projects 241

Figure 12.12: Orbiting in scattering from the Yukawa potential.

P12.2 In this project, we conduct virtual scattering experiments by numerical simulation. We assume a beam of incident particles distributed over the impact parameter range [bmin, bmax] with flux (12.86). We follow each particle from the beginning to the end as described between Eqs. (12.86) and (12.90), and build up the cross section from individual outcomes.

First, write a program that integrates the equations of motion. If one is not already written from Project P12.1, make one based on Program A:4.3.

Let N be the number of particles. Divide the range [bmin, bmax] into N equidistant points, bi, i = 1, 2, ..., N. Each bi value is proportional to the number of particles entering the ring between bi and bi+1, according to Eq. (12.90).

Let n be the number of intervals of the scattering angle θ ∈ [0, π], i.e., θj = j∆θ, ∆θ = π/(n+1), j = 0, 1, ..., n. Create an array Nθ of length n, such that Nθ[j] is the number of particles scattered into the interval between θj and θj+1.

The algorithm of the numerical scattering experiment is as follows:

• For particle i at bi, integrate the equations of motion with the initial conditions (x, y) = (−rmax, bi) and (vx, vy) = (√(2E), 0). Stop the integration when the particle has left the interaction region, e.g., r > 1.5 rmax.

• Compute the scattering angle θi = cos⁻¹(vx/v), where vx and v are the x velocity and speed of the particle, respectively, when

Page 250: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

242 Chapter 12. Classical and quantum scattering

the integration is terminated. Find the index j = int[θi/∆θ], and add bi to the counter Nθ[j].

• Repeat the above steps until all N particles have been processed. Calculate the cross section from Eq. (12.89) by

    σ(θj+1/2) = [b²max / (2 sin(θj+1/2) ∆θ)] × Nθ[j]/Nt,    (12.76)

where θj+1/2 = θj + ∆θ/2, and Nt = Σi bi.

Carry out numerical experiments for the Yukawa potential (A:12.13). Use the following parameters: Z = 2, a = 1, and a large rmax = 8 where the force (−(dV/dr) r̂) is negligible. Reasonable values of the other parameters include N = 10⁴−10⁵, n = 100−200, bmin = 10⁻², and bmax = 5, which depends on energy. Vary N and n to produce smooth curves.

Plot the cross sections for E = 0.1 and 1. Discuss and compare the results with other methods (Figure A:12.8, Project A:P12.4). Optionally, apply this method to calculate cross sections for other potentials such as the plum-pudding potential.
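The bookkeeping of the algorithm above can be sketched as follows. To keep the sketch self-contained, the analytic Coulomb deflection θ = 2 tan⁻¹(1/2Eb) stands in for the numerical trajectory integration; the real project should use the ODE solver instead:

```python
import numpy as np

# Sketch of the virtual scattering experiment, Eq. (12.76). The analytic
# Coulomb relation theta = 2*arctan(1/(2*E*b)) is a stand-in for the
# numerical trajectory integration so the bookkeeping runs on its own.
E, bmin, bmax, N, n = 1.0, 1e-2, 5.0, 100000, 100
b = np.linspace(bmin, bmax, N)                  # impact parameters b_i
theta = 2 * np.arctan(1.0 / (2 * E * b))        # scattering angles theta_i
dtheta = np.pi / n
j = np.minimum((theta / dtheta).astype(int), n - 1)
Ntheta = np.zeros(n)
np.add.at(Ntheta, j, b)                         # add b_i to the counter N_theta[j]
Nt = b.sum()
mid = (np.arange(n) + 0.5) * dtheta             # bin midpoints theta_{j+1/2}
sigma = bmax**2 / (2 * np.sin(mid) * dtheta) * Ntheta / Nt
```

For this Coulomb stand-in, the histogrammed cross section can be compared against the Rutherford result 1/(16E² sin⁴(θ/2)) at mid angles as a sanity check.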

P12.3 Calculate the scattering length a0 in Eq. (12.17) for the Yukawa potential with Z = 2, and a variable a as described below.

(a) Write a standalone module that calculates the phase shift δ0 at small k values, say over k ∈ [kmin, kmax], incremented by a constant multiplier, e.g., 1.05. Pass the necessary variables such as E and L to the Schrödinger equation (f() in Program A:12.2) by declaring them global variables in the module. Use kmin = 10⁻³, and kmax = 1. As in previous projects, ensure that the phase shift is continuous and tends to zero at large k. Increase kmax if the small-k limit δl(k → 0) is uncertain (or decrease it if certain).

Use Program A:9.5 to check the number of bound states for a given set of Z and a parameters in the Yukawa potential. This helps us to ascertain the limiting value of δl(k → 0) from Levinson’s theorem (A:12.23).

Iterate through a, starting from a = 0.01 to 2, incrementing by a constant factor, e.g., 1.02. For each a, obtain the phase shift from the

Page 251: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

12.6. Exercises and Projects 243

module created above, and compute the scattering length at the lowest k, i.e.,

    a0 = −sin δ0(kmin)/kmin.

Check that a0 has converged by computing the ratio at the next several higher values of k.

Plot the scattering length a0 as a function of a, similar to Figure 12.3.
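The scan might be organized as below. Here phase_shift() is a stand-in, the first Born s-wave phase shift δ0 = −(Z/2k) ln(1 + 4k²a²) for the Yukawa potential, so the loop runs without Program A:12.2; in the project it should be replaced by the exact module:

```python
import numpy as np

# Scattering-length scan over a. phase_shift() is a stand-in: the first
# Born s-wave phase shift for V = Z exp(-r/a)/r. Replace it with the
# exact delta_0 from the module built in part (a).
def phase_shift(a, k, Z=2.0):
    return -Z / (2 * k) * np.log(1 + 4 * k**2 * a**2)

kmin = 1e-3
a_values, a0_values = [], []
a = 0.01
while a <= 2.0:
    d0 = phase_shift(a, kmin)
    a0_values.append(-np.sin(d0) / kmin)   # a0 = -sin(delta_0)/k at small k
    a_values.append(a)
    a *= 1.02
# in this Born stand-in, a0 -> 2*Z*a^2 = 4a^2 at small k
print(a0_values[0], 4 * a_values[0] ** 2)
```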

(b) For the potentials listed in Project A:P12.7, predict their scattering lengths, and list them in order from the greatest to the smallest. Calculate the scattering lengths for the potentials at a = 1 and a = 2. Discuss the results. Are they as you predicted? Explain.

P12.4 Study phase shifts by the Born and the WKB approximations and compare with the exact results. We assume the latter are readily available from previous projects or from Program A:12.2. The nominal potential is the Yukawa potential.

(a) Write two standalone functions, one for the Born phase shift (12.21) and one for the WKB phase shift (12.29), that return the phase shifts at a given energy E for l = [0, lmax]. Define the integrand for each function, and evaluate the integrals numerically. The upper limit rmax should be large compared to the range of the potential, as well as to l, or more precisely krmax/l ≫ 1. The value of lmax = 20 should be adequate for low and intermediate energies. For high energies, increase lmax appropriately.

Calculate the phase shifts from the Yukawa potential with the same parameters as in Figure 12.4. Watch for numerical convergence for larger l. If you are using the fixed-order Gaussian integration routine gauss() (Section A:8.A.3), which has no automatic accuracy check, make sure to divide the interval into n subintervals. Check convergence of the results by varying n and rmax. Alternatively, use the SciPy function quad mentioned after Program A:8.5, which does have automatic error control. Regardless of what routines are used, always check the accuracy of the results by varying the relevant parameters. This includes the matching radius and the step size in the exact results from codes such as Program A:12.2.

Optionally, carry out the WKB integral (12.29) in the u = 1/r space (see Program A:12.1). You should first combine the integrals for Φ

Page 252: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

244 Chapter 12. Classical and quantum scattering

(12.26) and Φ0 (12.91) into a form

    δl^WKB = ∫_rmin^(lc/k) [· · ·] dr + ∫_(lc/k)^∞ (· · ·) dr,

then compute the second integral in u space. Do a few numerical experiments. Describe the advantages and drawbacks of this approach compared to straight integration in r space.

(b) Plot the relative error in the phase shifts of the Born and the WKB approximations as a function of l at E = 2 and 10. Compute the differential cross sections (DCS) from these phase shifts, and plot both the DCS and the relative error. All plots should be on a semilog scale. How do the relative errors in the phase shifts and in the DCS correlate with each other?

(c) Calculate the total cross sections as a function of energy from E = 1 to 50. Plot all results, Born, WKB, and the exact, on the same graph, as well as the relative error of each approximation. Of the phase shifts, differential and total cross sections, which is the most sensitive test for the approximations?

(d)∗ Find a potential you are interested in, or pick one from Project A:P12.7, possibly reversing the sign for a repulsive potential, and repeat the investigation. Describe your findings and observations.

P12.5 Study elastic and excitation cross sections differential in the projectile scattering angle.

(a) Assume proton impact at v = 2. Evaluate the differential cross sections from Eq. (12.35) for elastic scattering. The T-matrix can be computed from Eq. (12.44) by setting i = f = 1s, with the elastic form factor given by Eq. (12.61). First plot the differential cross sections as a function of the scaled momentum transfer q/qmin from 1−10. Are the results well represented on a linear y-scale? How about on a logarithmic y-scale?

Next, convert the x-scale from q/qmin to the projectile scattering angle θP, and replot the cross sections. What is the typical scale for θP? Explain.

Finally, obtain the total scattering cross sections, and plot the results as a function of the collision speeds v = 1−5. Compare your results with the excitation cross sections in Figure 12.5.

Page 253: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

12.6. Exercises and Projects 245

(b) Calculate the differential cross sections for excitation to 2s and 2p (including m substates) at v = 2. Plot the results on a proper y-scale per the discussion above to clearly show the different behaviors of the cross sections at small and large momentum transfers. Explain the differences.

(c) If the projectile is an electron rather than a proton, what differences may be expected in the cross sections? The typical scattering angles? Sketch the cross sections and write down the similarities to and differences from proton impact.

Repeat the calculations for elastic scattering and excitation by electron impact. Plot the differential cross sections as a function of the momentum transfer, and of the electron scattering angle. Discuss and compare your predictions with the corresponding results for proton impact.

P12.6 Evaluate and plot 1s to 2s and 2p excitation cross sections in the Born approximation.

(a) Calculate the indefinite integral

    In = ∫ [1/(a² + b²x²)ⁿ] dx/x = [1/(2a^(2n))] [ ln(t/(1 + t)) + Σ_(j=1)^(n−1) 1/(j(1 + t)^j) ],

where t = b²x²/a². First, define Jn = 1/[t(1 + t)ⁿ], show that

    Jn = Jn−1 − 1/(1 + t)ⁿ = ... = 1/t − Σ_(j=1)^n 1/(1 + t)^j,

and then integrate the decomposition. Alternatively, you can prove the decomposition as a geometric series with the ratio r = 1/(1 + t).
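The telescoping decomposition is easy to spot-check numerically (t and n are arbitrary test values):

```python
# Spot-check of J_n = 1/[t(1+t)^n] = 1/t - sum_{j=1}^{n} 1/(1+t)^j.
t, n = 0.37, 5                      # arbitrary test values
Jn = 1.0 / (t * (1 + t) ** n)
decomp = 1.0 / t - sum(1.0 / (1 + t) ** j for j in range(1, n + 1))
print(abs(Jn - decomp))             # agrees to machine precision
```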

(b) Using the results above, derive the expressions (12.51) and (12.52), assuming qmax = ∞ for heavy-projectile impact. Evaluate and plot the cross sections as those shown in Figure 12.5.

(c) Obtain excitation cross sections for electron impact. Here, the upper limit qmax cannot be set to infinity at intermediate energies, and kf must be determined from Eq. (12.48). Derive the cross sections similar to Eqs. (12.51) and (12.52) but expressed in terms of

Page 254: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

246 Chapter 12. Classical and quantum scattering

qmin and qmax. What is the threshold for excitation? Evaluate and plot the cross sections and compare to those by proton impact. Discuss the differences between the two.

P12.7 (a) Calculate the energy distribution of the ejected electron in ionization of hydrogen by proton impact. Write a program to perform the integral (12.56). Check the upper limit over q such that the integral converges to within 10⁻³ or better. Plot the differential cross section as a function of E = k²/2. Note that

    dσE^B1/dE = (1/k) dσk^B1/dk.

(b) Obtain the total ionization cross section by performing the double integration (12.56) with (12.58) numerically. Plot the results and compare with Figure 12.6.

(c) Calculate and plot the free Born results from Eq. (12.63). Vary the average energy transfer ε from I to 5I, and compare the cross sections at higher speeds (v > 3) with the Born approximation. What ε gives the best agreement? What part of the cross section is most sensitive to ε? Why?

P12.8 Build a complete CTMC program that classifies the outcomes and calculates the cross sections.

Add two functions to Program 12.2, one determining the event according to the criteria listed in Table 12.2, and another one computing the cross sections from Eq. (12.72). For easy reference, let us name these two functions classify() and xsection(), respectively.

For classify(), it should take the positions and velocities as input at the end of a trial, classify the event, and return it as a unique identifier, say 1, 2, 3 for excitation, ionization, and capture, respectively. In the rare case that the event is indeterminate, return 0, for example. Optionally return the classical principal number nc from Eq. (12.73) for excitation reactions.

For xsection(), it should accept the collision speed v and the number of Monte Carlo trials Nmc as input, and return the cross sections

Page 255: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

12.6. Exercises and Projects 247

for various events. Construct two nested loops. The outer loop iterates through the trials. For each trial, sample b, initialize the system, and enter the inner loop to integrate the system (the while loop in Program 12.2). After exiting the inner loop, call classify() to determine the event, and increment the counter for that event accordingly. Finally, calculate the cross sections according to Eq. (12.72).
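The control flow might look like the skeleton below. All three helpers are placeholders (classify() here returns a random label just so the bookkeeping runs standalone); they are to be replaced by the routines described above:

```python
import numpy as np

# Skeleton of xsection(). sample_b(), evolve() and classify() are dummy
# placeholders standing in for the CTMC routines described in the text.
rng = np.random.default_rng(1)

def sample_b(bmax):                  # impact parameter with density ~ b
    return bmax * np.sqrt(rng.random())

def evolve(b, v):                    # stand-in for the trajectory integration
    return (b, v)

def classify(state):                 # dummy: 0 indeterminate; 1, 2, 3 events
    return int(rng.integers(0, 4))

def xsection(v, Nmc, bmax=5.0):
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    for _ in range(Nmc):             # outer loop over Monte Carlo trials
        state = evolve(sample_b(bmax), v)
        counts[classify(state)] += 1
    area = np.pi * bmax**2           # Eq. (12.72): sigma = pi b_max^2 N_event/N_mc
    return {k: area * c / Nmc for k, c in counts.items() if k != 0}

print(xsection(1.0, 2000))
```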

In the main code, turn off animation, and call xsection() for a given speed and number of trials. Test your program by calculating the reaction cross sections as a function of collision speeds. Compare with the values listed in Table 12.3, and those shown in Figure 12.9 and Figure 12.10. First try Nmc = 2000, bmax = 5, and gauge the running times. Improve statistics by increasing the Monte Carlo trials to a larger number, say Nmc = 20000 at each speed (and go have a cup of tea or two while waiting for the program to finish).

Check convergence of the cross sections by varying bmax in the neighborhood of 5, say up or down by a few atomic units.

Optionally, since speed is paramount for large Nmc trials, consider speed-boosting nbody() using F2Py as in Program 11.3.

P12.9 Study the effects of the projectile’s charge and mass on collision cross sections. Use the program developed in Project P12.8 for investigation in this project.

(a) First, consider the collisions of antiprotons on hydrogen. Only excitation and ionization are possible in this case. Calculate the cross sections as a function of collision speeds from 1 to 5 with reasonable increments to show smooth curves when plotted.

Compare the ionization cross sections with the proton results of CTMC and the Born approximation (Figure 12.6, Figure 12.9 and Table 12.3). The Born results are identical for +Z and −Z projectiles. The difference between the ±Z cross sections is known as the Barkas effect.

According to the CTMC results, how large is the Barkas effect? Which is larger, protons or antiprotons? Explain, with the aid of a few trajectories at the cross section maximum.

Page 256: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

248 Chapter 12. Classical and quantum scattering

(b) Replace the antiprotons with electrons. Predict whether the electrons will have a higher or lower ionization cross section compared to protons for a given speed. Explain why you think so.

Run the program, generate and plot the ionization cross sections over the same speed range. Discuss your results. Optionally, search the literature to find experimental electron-impact ionization data, and compare with the CTMC and Born results.

(c) Repeat the calculation for He2+ impact. Compare with the appropriately scaled Born results, and with experimental measurements for ionization. Discuss the validities of the CTMC and Born approximations.

(d)∗ Finally, consider the collisions of positrons (same mass but opposite charge to electrons) with hydrogen atoms. Modify the program to use the proper reduced mass in the classification function. Show several typical trajectories for various reactions. What is different between proton and positron impact?

P12.10 Heavy projectiles travel in nearly straightline trajectories. We can use this fact to optimize the speed of our program. Implement the straightline approximation in the CTMC method.

First, profile Program 12.2 (turn off animation) or the code developed in Project P12.8 for a number of trials. Note where the bottleneck is.

Write a time-dependent leapfrog method with self-adjusting step size as outlined in Section 12.F. Test that it works correctly by comparing the trajectories with the standard CTMC code using the time-independent leapfrog method. Use the same initial conditions (either set them explicitly, or use the same random seed to generate the same initial conditions).

Because the time-dependent system is no longer an energy-conserving Hamiltonian system, the electron could fall into a very deep bound state of the target or the projectile, requiring an ever smaller step size. To prevent the system from being stuck in such situations, build a check into your code such that when W or Ω is larger than some value, say 10⁶, break the loop.

Page 257: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

12.6. Exercises and Projects 249

Modify the rest of the CTMC program, including a new equivalent of nbody(), which is effectively a one-electron problem, and a modified classify() for outcome determination.

Repeat the calculations described in Project P12.8 using the straightline approximation, with the same initial conditions. First compare the results visually, then plot the differences. The latter is best done by writing the data sets to files via pickle (see Program 9.1 for an example), and processing the data in a separate program. For ionization, for example, how large are the differences? Are they as expected?

P12.11 Calculate the differential ionization cross sections, dσ/dE, for proton-hydrogen collisions at several collision speeds, e.g., v = 1, 2, and 3, using the CTMC method.

(a) First, study the impact parameter dependence of ionization. Modify Program 12.2 or one developed from a similar project. Record the ionization events and the associated impact parameter. Plot the impact parameter distribution as a histogram, ensuring sufficient statistics for the bin size used (no smaller than ∆b = 0.2).

(b) Modify the outcome classification function so it also returns the energy of the electron. Bin the ionization events into regular energy intervals of width ∆E. Let NE be the number of ionization events in the energy interval E to E + ∆E. The differential ionization cross section may be generalized from Eq. (12.72) as

    dσ/dE = π b²max NE/(∆E Nmc).

To obtain reasonable statistics, you will need to increase Nmc manyfold compared to total cross sections. Speed is essential. Use the straightline code from Project P12.10 if available. If not, consider writing it, or parallelize the code execution if possible. Compare the energy distributions with the Born approximation (Project P12.7).

Analyze the relationship between the energy distribution and the impact parameter ranges. Is there any correlation between the impact parameter and the energy distributions? Design a method to plot the data in such a way as to get your idea across clearly.
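The energy binning can be sketched with numpy.histogram; E_ion below is placeholder data standing in for the recorded CTMC ionization energies:

```python
import numpy as np

# Sketch of the dsigma/dE binning. E_ion is placeholder data; in the
# project it comes from the modified classification function.
rng = np.random.default_rng(0)
Nmc, bmax = 20000, 5.0
E_ion = rng.exponential(0.5, size=3000)      # placeholder ejection energies
NE, edges = np.histogram(E_ion, bins=25, range=(0.0, 5.0))
dE = edges[1] - edges[0]
dsdE = np.pi * bmax**2 * NE / (dE * Nmc)     # generalized Eq. (12.72)
print(dsdE.sum() * dE)                       # integrates back to sigma_ion
```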

(c) Instead of the microcanonical ensemble, use circular orbits of the Bohr model for the electron (CTMC-Bohr). Compute the energy

Page 258: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

250 Chapter 12. Classical and quantum scattering

distribution of ejected electrons for the collision speed near the maximum (v ∼ 1.5, Figure 12.9). Plot the CTMC-Bohr and the CTMC results (above). Compare the two, and discuss the differences. Explain why the CTMC-Bohr results are higher.

P12.12∗ Investigate electron capture and evidence of the Thomas double scattering mechanism in proton-hydrogen collisions.

(a) Modify Program 12.2 or a program developed already from a project above to analyze capture events as a function of scattering angles. Modify the event classification function to record the scattering of the projectile in each capture event. Run the program for capture from the ground state (microcanonical ensemble) at v = 1. After accumulating sufficient statistics, write the data to a file for offline analysis.

Bin the capture events into angular intervals of width ∆θ_P ∼ 0.1 mrad. Plot the raw distribution as a histogram up to a few milliradians. Then convert the raw data into a differential cross section dσ/dΩ_P analogous to Eq. (12.89). Plot the cross sections. Is there any pronounced feature above the background?

(b) Use the circular orbits of the Bohr model for the initial condition of the electron (CTMC-Bohr). Repeat the above calculation at v = 1 and 1.5 (or higher depending on computer speed). Discuss any evidence of the Thomas mechanism.

(c)∗ Study capture from a Rydberg state. Assume a circular state for n = 10, i.e., assume zero eccentricity. Modify the initialization function so the orientation of the orbital plane is fixed. This can be controlled by θ in the Euler angles (12.125). You must also adjust other parameters appropriately, including the orbital speed v_orb. Since the size of the atom scales with n², scale R and b_max accordingly, as well as the step size h.

Run the program for two orientations, θ = 0° and 90°, in which the orbital plane is parallel and perpendicular to the z-axis, respectively. Choose the scaled velocity v* = v/v_orb = 1.5 ∼ 3. In principle, the higher the v*, the better. However, the capture cross section falls off quickly, so choose a higher value tolerable for the computing speed at hand. Plot dσ/dΩ_P in each case. Note the structures in the 90° orientation. What is the possible cause? Are they attributable to the two-step capture mechanism?

Verify your hypothesis. Turn off the internuclear interaction in the nbody() code, i.e., set fij = 0 if the pair involved is the proton-target internuclear force. This isolates the projectile-electron scattering with negligible effect on the capture cross sections (see Project P12.10). Compare the new differential cross sections with the results above. Discuss the difference in the 90° orientations with and without the internuclear interaction. Explain the structures in the differential cross sections using two-body kinematics.

12.A The phase shift integral

To formally perform some angular integrals, we need two properties of the spherical harmonics, Y_lm(θ, ϕ), which are the angular wave functions in central field potentials, ψ = R_l(k, r)Y_lm(θ, ϕ). First, the spherical harmonics are orthonormal,

\[
\int Y_{lm}(\theta,\varphi)\, Y^*_{l'm'}(\theta,\varphi)\, d\Omega = \delta_{ll'}\delta_{mm'}. \tag{12.77}
\]

Second, spherical harmonics of different arguments (angles) can be added. Let r̂₁ and r̂₂ be two unit vectors with direction angles (θ₁, ϕ₁) and (θ₂, ϕ₂), respectively. The addition theorem of spherical harmonics states [2]

\[
P_l(\hat r_1 \cdot \hat r_2) = \frac{4\pi}{2l+1} \sum_{m=-l}^{l} Y_{lm}(\theta_1,\varphi_1)\, Y^*_{lm}(\theta_2,\varphi_2). \tag{12.78}
\]

With the relation (12.78), we can express the plane wave (A:12.24) as

\[
e^{i\vec k\cdot\vec r} = 4\pi \sum_{l,m} i^l j_l(kr)\, Y_{lm}(\theta_k,\varphi_k)\, Y^*_{lm}(\theta_r,\varphi_r). \tag{12.79}
\]

Similarly, we can rewrite the wave function (A:12.25) as

\[
\psi(\vec r) = 4\pi \sum_{l,m} i^l R_l(k,r)\, Y_{lm}(\theta_{k_i},\varphi_{k_i})\, Y^*_{lm}(\theta_r,\varphi_r). \tag{12.80}
\]


Assuming \vec k_f is the final wave vector, i.e., \vec k_f \cdot \hat z = k\cos\theta, we express the scattering amplitude (12.10) as

\[
f(\theta) = -\frac{m}{2\pi\hbar^2} \int e^{-i\vec k_f\cdot\vec r}\, V(r)\, \psi(\vec r)\, d^3r. \tag{12.81}
\]

Substituting Eq. (12.80) into (12.81), and using Eq. (12.79), we obtain

\begin{align*}
f(\theta) &= -\frac{m}{2\pi\hbar^2}(4\pi)^2 \int \sum_{l,m} (-i)^l j_l(kr)\, Y^*_{lm}(\theta_{k_f},\varphi_{k_f})\, Y_{lm}(\theta_r,\varphi_r) \\
&\qquad \times V(r) \sum_{l',m'} i^{l'} R_{l'}(k,r)\, Y_{l'm'}(\theta_{k_i},\varphi_{k_i})\, Y^*_{l'm'}(\theta_r,\varphi_r)\, d^3r \\
&= -\frac{m}{2\pi\hbar^2}(4\pi)^2 \sum_{l,m,l',m'} (-i)^l i^{l'}\, Y^*_{lm}(\theta_{k_f},\varphi_{k_f})\, Y_{l'm'}(\theta_{k_i},\varphi_{k_i}) \\
&\qquad \times \int j_l(kr)\, V(r)\, R_{l'}(k,r)\, r^2\, dr \times \int Y_{lm}(\theta_r,\varphi_r)\, Y^*_{l'm'}(\theta_r,\varphi_r)\, d\Omega_r. \tag{12.82}
\end{align*}

Using the orthogonality (12.77), the last integral in Eq. (12.82) reduces to δ_{ll'}δ_{mm'}. We can then collapse the double sum in Eq. (12.82) into the tidier form

\[
f(\theta) = -\frac{m}{2\pi\hbar^2}(4\pi)^2 \sum_{l,m} Y^*_{lm}(\theta_{k_f},\varphi_{k_f})\, Y_{lm}(\theta_{k_i},\varphi_{k_i}) \int j_l(kr)\, V(r)\, R_l(k,r)\, r^2\, dr, \tag{12.83}
\]

where we have set (−i)^l i^l = 1. Applying the addition theorem (12.78) to (12.83) once more, we obtain

\[
f(\theta) = -\frac{2m}{\hbar^2} \sum_{l} (2l+1) \int j_l(kr)\, V(r)\, R_l(k,r)\, r^2\, dr\; P_l(\cos\theta), \tag{12.84}
\]

where we have used \cos\theta = \hat k_f \cdot \hat k_i.

Equation (12.84) is exact, and is equivalent to Eq. (A:12.31). Comparing the two expressions, we identify the phase shift as

\[
e^{i\delta_l}\sin\delta_l = -\frac{2mk}{\hbar^2} \int_0^\infty j_l(kr)\, V(r)\, R_l(k,r)\, r^2\, dr. \tag{12.85}
\]

This is the phase shift as an integral involving the radial wave functions.
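The integral (12.85) is easy to evaluate numerically in the first Born approximation, where R_l(k, r) is replaced by j_l(kr) so that sin δ_l ≈ δ_l. The sketch below (not one of the book's programs) uses an attractive square well with illustrative parameters V₀ and a of our own choosing, in units ħ = m = 1:

```python
import numpy as np
from scipy.special import spherical_jn

# Born approximation to Eq. (12.85): with R_l(k,r) -> j_l(kr),
# sin(delta_l) ~ delta_l ~ -(2mk/hbar^2) Int_0^inf j_l(kr)^2 V(r) r^2 dr.
# Units hbar = m = 1; V0, a are illustrative parameters only.
def born_phase_shift(l, k, V0=0.5, a=1.0, n=4000):
    r = np.linspace(1e-6, a, n)             # V = 0 beyond r = a
    f = spherical_jn(l, k*r)**2*(-V0)*r**2  # integrand with V = -V0
    integral = np.sum(0.5*(f[1:] + f[:-1]))*(r[1] - r[0])   # trapezoid rule
    return -2.0*k*integral                  # positive for attraction

deltas = [born_phase_shift(l, k=1.0) for l in range(4)]
```

As expected for a short-range attractive potential, the phase shifts come out positive and fall off rapidly with l.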


12.B Direct determination of cross sections

We can calculate classical scattering cross sections by direct numerical experiments. We start with a particle far away at a given impact parameter b and integrate the equations of motion until the interaction is well over to obtain the scattering angle. This process is repeated for many particles at different b values. The data is accumulated and converted to cross sections.

In the numerical experiment, a beam of N particles, uniformly distributed over the circular area between b = 0 and b_max, comes in with a given energy E toward the target as shown in Figure A:12.1. The particle flux is given by

\[
I = \frac{N}{\pi b_{\max}^2}. \tag{12.86}
\]

We divide the impact parameter range [0, b_max] into bins of equal width ∆b. Let ∆N_i be the number of particles entering through the ring from b_i to b_i + ∆b and being scattered into a solid angle ∆Ω_i between θ_i and θ_i + ∆θ. We can relate these parameters as

\[
\Delta N_i = I\, 2\pi b_i \Delta b, \qquad N = \sum_i \Delta N_i = 2\pi I \Delta b \sum_i b_i, \qquad \Delta\Omega_i = 2\pi\sin(\theta_i)\,\Delta\theta. \tag{12.87}
\]

Equation (12.87) yields the ratio ∆N_i/N,

\[
\frac{\Delta N_i}{N} = \frac{b_i}{\sum_i b_i}, \tag{12.88}
\]

which is independent of the flux I.

Substituting Eqs. (12.86) and (12.87) into Eq. (A:12.5), we obtain the cross section at θ_i as

\[
\sigma(\theta_i) = \frac{\Delta N_i}{I\,\Delta\Omega_i} = \frac{b_{\max}^2}{2\sin\theta_i\,\Delta\theta}\, \frac{\Delta N_i}{N}. \tag{12.89}
\]

Since Eq. (12.89) requires only the ratio ∆N_i/N given by Eq. (12.88), there is no need for us to keep track of absolute numbers of particles in the numerical experiment. We just need the correct ratios (12.88). So, instead of Eq. (12.87), we use the relative number of particles as

\[
\Delta N_i = b_i, \qquad N = \sum_i b_i. \tag{12.90}
\]

In cases where multiple impact parameters contribute to the same scattering angle θ_i, we shall include them all in ∆N_i.

This method is investigated in Project P12.2.
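As a concrete illustration of Eqs. (12.88)–(12.90), the following sketch (our own, not part of Project P12.2) converts a known deflection function into σ(θ). Hard-sphere scattering off a sphere of radius a is a convenient test because the exact classical result, σ(θ) = a²/4, is isotropic:

```python
import numpy as np

# Direct determination of sigma(theta) following Eqs. (12.88)-(12.90).
# Hard sphere: theta(b) = 2*arccos(b/a) for b < a; exact sigma = a^2/4.
a, bmax = 1.0, 1.0
nb, ntheta = 200000, 60
b = np.linspace(0.0, bmax, nb, endpoint=False) + bmax/(2*nb)   # ring centers
theta = 2*np.arccos(np.minimum(b/a, 1.0))          # deflection function

edges = np.linspace(0.0, np.pi, ntheta + 1)
dtheta = edges[1] - edges[0]
# relative weights Delta N_i = sum of b over rings landing in bin i, Eq. (12.90)
dN, _ = np.histogram(theta, bins=edges, weights=b)
N = dN.sum()
tc = 0.5*(edges[1:] + edges[:-1])                  # bin centers
sigma = bmax**2/(2*np.sin(tc)*dtheta)*dN/N         # Eq. (12.89)
```

The recovered `sigma` is flat at a²/4 = 0.25 to within binning noise, confirming that only the relative weights (12.90) are needed.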

12.C WKB scattering wave functions

We determine the phase φ in Eq. (12.25) by the condition that the asymptotic WKB wave function must be the same as the incident wave in the absence of any interactions.

From Eq. (12.25), if V = 0, p = ħ√(k² − l_c²/r²), and we can write the integral in Eq. (12.25) as

\[
\Phi_0(r) \equiv \frac{1}{\hbar}\int_{r_{\min}}^{r} p\, dr = k \int_{l_c/k}^{r} \frac{1}{r}\sqrt{r^2 - \frac{l_c^2}{k^2}}\, dr, \tag{12.91}
\]

where we have substituted the turning point r_min = l_c/k. Using the integrals

\begin{align*}
\int \frac{1}{x}\sqrt{x^2 - a^2}\, dx &= \sqrt{x^2 - a^2} + a\sin^{-1}\Bigl(\frac{a}{x}\Bigr), \tag{12.92a} \\
\int \frac{1}{x}\sqrt{a^2 \pm x^2}\, dx &= \sqrt{a^2 \pm x^2} + \frac{a}{2}\ln\biggl|\frac{a-\sqrt{a^2 \pm x^2}}{a+\sqrt{a^2 \pm x^2}}\biggr|, \tag{12.92b}
\end{align*}

we can integrate Eq. (12.91) to obtain

\[
\Phi_0(r) = k\sqrt{r^2 - \frac{l_c^2}{k^2}} + l_c\sin^{-1}\Bigl(\frac{l_c}{kr}\Bigr) - \frac{l_c\pi}{2}. \tag{12.93}
\]

In the asymptotic region r → ∞, we have

\[
\Phi_0(r) \xrightarrow{r\to\infty} kr - \Bigl(l + \frac{1}{2}\Bigr)\frac{\pi}{2}, \tag{12.94}
\]

where we have substituted l_c = l + 1/2 from Eq. (12.23). The asymptotic WKB solution w_l (12.25) approaches

\[
w_l(r) \xrightarrow{r\to\infty} \frac{A}{\sqrt{\hbar k}}\sin(kr - l\pi/2 - \pi/4 + \phi). \tag{12.95}
\]

Comparing Eq. (12.95) with the free-particle solution (A:12.27),

\[
kr\, j_l(kr) \xrightarrow{r\to\infty} \sin(kr - l\pi/2), \tag{12.96}
\]

we find that w_l(r) agrees with Eq. (12.96) if φ = π/4. This gives the WKB solution in Eq. (12.26).
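The closed form (12.93) is easy to spot-check against the defining integral (12.91) by quadrature; k, l_c, and r below are arbitrary test values of our own:

```python
import numpy as np

# Spot check of Eq. (12.93) against the defining integral (12.91):
# Phi_0(r) = k Int_{lc/k}^{r} sqrt(r'^2 - (lc/k)^2)/r' dr'.
def phi0_quad(k, lc, r, n=200001):
    rp = np.linspace(lc/k, r, n)                    # from the turning point
    f = np.sqrt(np.maximum(rp*rp - (lc/k)**2, 0.0))/rp
    return k*np.sum(0.5*(f[1:] + f[:-1]))*(rp[1] - rp[0])   # trapezoid rule

def phi0_exact(k, lc, r):                           # Eq. (12.93)
    return k*np.sqrt(r*r - (lc/k)**2) + lc*np.arcsin(lc/(k*r)) - lc*np.pi/2
```

The two agree to the accuracy of the quadrature, e.g., for k = 2, l_c = 1.5, r = 10.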


12.D The Born T -Matrix

12.D.1 The T -Matrix for excitation

Given the interaction (12.43), we can substitute it into (12.42) to evaluate the T-matrix as

\[
T^{B1}_{fi} = Z_P \int e^{-i\vec q\cdot\vec r}\, \phi^*_f(\vec r\,')\Bigl(\frac{1}{r} - \frac{1}{|\vec r - \vec r\,'|}\Bigr)\phi_i(\vec r\,')\, d^3r\, d^3r'. \tag{12.97}
\]

Using the orthogonality \int \phi^*_f(\vec r\,')\phi_i(\vec r\,')\, d^3r' = \delta_{fi}, we can simplify Eq. (12.97) to

\[
T^{B1}_{fi} = Z_P\Bigl(\delta_{fi}\int \frac{e^{-i\vec q\cdot\vec r}}{r}\, d^3r - \int e^{-i\vec q\cdot\vec r}\, \phi^*_f(\vec r\,')\frac{1}{|\vec r - \vec r\,'|}\phi_i(\vec r\,')\, d^3r\, d^3r'\Bigr). \tag{12.98}
\]

To separate the nested integrals in the second term of Eq. (12.98), we make a variable substitution \vec R = \vec r - \vec r\,' to obtain

\begin{align*}
T^{B1}_{fi} &= Z_P\Bigl(\delta_{fi}\int \frac{e^{-i\vec q\cdot\vec r}}{r}\, d^3r - \int \frac{e^{-i\vec q\cdot\vec R}}{R}\, d^3R \int \phi^*_f(\vec r\,')\, e^{-i\vec q\cdot\vec r\,'}\phi_i(\vec r\,')\, d^3r'\Bigr) \\
&= Z_P V(q)\bigl[\delta_{fi} - F_{fi}(q)\bigr], \tag{12.99}
\end{align*}

where V(q) is the Fourier transform of the 1/r potential,

\[
V(q) = \int \frac{e^{-i\vec q\cdot\vec r}}{r}\, d^3r, \tag{12.100}
\]

and F_{fi}(q) is the atomic form factor defined as

\[
F_{fi}(q) = \int \phi^*_f(\vec r\,')\, e^{-i\vec q\cdot\vec r\,'}\phi_i(\vec r\,')\, d^3r'. \tag{12.101}
\]

Taking \vec q as the z direction, the angular part of Eq. (12.100) can be integrated out as

\[
V(q) = 2\pi \int_0^\infty r\, dr \int_0^\pi e^{-iqr\cos\theta}\sin\theta\, d\theta = \frac{2\pi}{iq}\int_0^\infty \bigl[e^{iqr} - e^{-iqr}\bigr]\, dr. \tag{12.102}
\]


Equation (12.102) is said to be weakly convergent, so we introduce a converging factor exp(−εr) (ε > 0, essentially a screening constant, see Exercise E12.3), and in the limit ε → 0⁺, we have

\[
V(q) = \frac{2\pi}{iq}\lim_{\varepsilon\to 0}\int_0^\infty e^{-\varepsilon r}\bigl[e^{iqr} - e^{-iqr}\bigr]\, dr = 4\pi \lim_{\varepsilon\to 0}\frac{1}{\varepsilon^2 + q^2} = \frac{4\pi}{q^2}. \tag{12.103}
\]
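The effect of the converging factor can be seen numerically. The helper below (our own, not from the book) evaluates the screened integral in Eq. (12.103) by quadrature and reproduces 4π/(ε² + q²):

```python
import numpy as np

# Screened version of Eq. (12.103): V(q) = (2*pi/(iq)) Int e^{-eps r}
# (e^{iqr} - e^{-iqr}) dr.  Using (e^{iqr}-e^{-iqr})/(2i) = sin(qr), this
# is (4*pi/q) Int e^{-eps r} sin(qr) dr, and should equal 4*pi/(eps^2+q^2).
def V_screened(q, eps, rmax=200.0, n=400001):
    r = np.linspace(0.0, rmax, n)
    f = np.exp(-eps*r)*np.sin(q*r)
    integral = np.sum(0.5*(f[1:] + f[:-1]))*(r[1] - r[0])   # trapezoid rule
    return 4.0*np.pi/q*integral
```

As ε shrinks (with rmax enlarged accordingly so the integrand has decayed), the result approaches the unscreened 4π/q².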

Substitution of Eq. (12.103) into (12.99) yields

\[
T^{B1}_{fi} = \frac{4\pi Z_P}{q^2}\bigl[\delta_{fi} - F_{fi}(q)\bigr]. \tag{12.104}
\]

Consider transitions from the ground state (1s) to the excited state (n_f l_f m_f). The initial and final atomic states are given by \phi_i = R_{10}Y_{00} and \phi_f = R_{n_f l_f}Y_{l_f m_f}, respectively. The first few radial wave functions R_{nl} are given in Project P8.9. Substituting \phi_{i,f} into Eq. (12.101), and using the expansion of exp(i\vec q\cdot\vec r) from Eq. (12.79), we can express the atomic form factor as (analogous to (12.82))

\begin{align*}
F_{fi}(q) &= \sqrt{4\pi}\sum_{l,m}(-i)^l \int R_{n_f l_f}(r)\, j_l(qr)\, R_{10}(r)\, r^2\, dr \\
&\qquad \times Y^*_{lm}(\theta_q,\varphi_q)\int Y^*_{l_f m_f}(\theta_r,\varphi_r)\, Y_{lm}(\theta_r,\varphi_r)\, d\Omega_r \\
&= \sqrt{4\pi}\,(-i)^{l_f}\, Y^*_{l_f m_f}(\theta_q,\varphi_q)\int_0^\infty R_{n_f l_f}(r)\, j_{l_f}(qr)\, R_{10}(r)\, r^2\, dr, \tag{12.105}
\end{align*}

where we have used Y_{00} = 1/\sqrt{4\pi} and the orthogonality of the spherical harmonics (12.77).

We are interested in n_i = 1 to n_f = 2 transitions, including the 1s to 2s and 2p_m (m = 0, ±1) states. The relevant radial wave functions are

\begin{align*}
R_{10}(r) &= 2a_0^{-3/2}\, e^{-r/a_0}, \\
R_{20}(r) &= (2a_0^3)^{-1/2}\Bigl(1 - \frac{r}{2a_0}\Bigr)e^{-r/2a_0}, \tag{12.106} \\
R_{21}(r) &= (24a_0^3)^{-1/2}\,\frac{r}{a_0}\, e^{-r/2a_0},
\end{align*}


where a_0 is the Bohr radius. The radial integrals in Eq. (12.105) can be evaluated as

\begin{align*}
I_{2s,1s} &= \int_0^\infty R_{20}(r)\, j_0(qr)\, R_{10}(r)\, r^2\, dr = \sqrt{2}\int_0^\infty e^{-3r/2}\, j_0(\beta r)\Bigl(1 - \frac{r}{2}\Bigr)r^2\, dr, \tag{12.107a} \\
I_{2p,1s} &= \int_0^\infty R_{21}(r)\, j_1(qr)\, R_{10}(r)\, r^2\, dr = \frac{1}{\sqrt{6}}\int_0^\infty e^{-3r/2}\, j_1(\beta r)\, r^3\, dr, \tag{12.107b}
\end{align*}

where we have made a change of variable r → ra_0, and set β = qa_0. Using the expression for j_1(x) from Eq. (A:12.56), the integrals can be shown to be (Exercise E12.8)

\begin{align*}
I_{2s,1s} &= 4\sqrt{2}\,\frac{q^2a_0^2}{(\tfrac{9}{4} + q^2a_0^2)^3}, \tag{12.108a} \\
I_{2p,1s} &= 2\sqrt{6}\,\frac{qa_0}{(\tfrac{9}{4} + q^2a_0^2)^3}. \tag{12.108b}
\end{align*}

Substituting Eqs. (12.108a) and (12.108b) into (12.105), we obtain the form factors for the 1s → 2s and 2p_m transitions,

\begin{align*}
F_{2s,1s} &= I_{2s,1s} = 4\sqrt{2}\,\frac{q^2a_0^2}{(\tfrac{9}{4} + q^2a_0^2)^3}, \tag{12.109a} \\
F_{2p_m,1s} &= 4\sqrt{6\pi}\,(-i)\,\frac{qa_0}{(\tfrac{9}{4} + q^2a_0^2)^3}\, Y^*_{1,m}(\theta_q,\varphi_q). \tag{12.109b}
\end{align*}
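Equation (12.108a) can be verified by direct quadrature of (12.107a) with j₀(x) = sin(x)/x; this quick consistency check is ours, not part of Exercise E12.8:

```python
import numpy as np

# Numerical check of Eq. (12.108a): evaluate the radial integral (12.107a)
# with j0(x) = sin(x)/x and compare with the closed form.  Here beta = q*a0
# and r is in units of a0.
def I2s1s_quad(beta, rmax=60.0, n=600001):
    r = np.linspace(1e-9, rmax, n)
    j0 = np.sin(beta*r)/(beta*r)
    f = np.sqrt(2.0)*np.exp(-1.5*r)*j0*(1.0 - 0.5*r)*r**2
    return np.sum(0.5*(f[1:] + f[:-1]))*(r[1] - r[0])   # trapezoid rule

def I2s1s_exact(beta):                                  # Eq. (12.108a)
    return 4.0*np.sqrt(2.0)*beta**2/(2.25 + beta**2)**3
```

The quadrature and the closed form agree over a wide range of β.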

12.D.2 Free-particle ionization

To obtain the free-particle Born ionization cross section, we substitute Eq. (12.62) into (12.40), finding

\begin{align*}
\sigma^{FB}_k(k) &= \frac{1}{2\pi\hbar^2 v^2}\int_{q_{\min}}^{q_{\max}} |T^{FB}_{k,1s}(q)|^2\, q\, dq \\
&= \frac{8\pi Z_P^2}{\hbar^2 v^2}\int_{q_{\min}}^{q_{\max}} \Bigl|\phi(\vec k + \vec q) - \phi(k)F_{1s}(q)\Bigr|^2\, \frac{dq}{q^3}. \tag{12.110}
\end{align*}


Equation (12.110) is the differential cross section for a given ejected electron with momentum \hbar\vec k. The total ionization cross section, \sigma^{FB}_i, is a summation over all possible \vec k,

\[
\sigma^{FB}_i = \sum_{\vec k} \sigma^{FB}_k(k) = \frac{1}{(2\pi)^3}\int \sigma^{FB}_k(k)\, d^3k. \tag{12.111}
\]

The factor (2π)^{-3} comes from converting the sum in box normalization to an integral as

\[
\sum_{\vec k} \to \frac{V}{(2\pi)^3}\int d^3k, \tag{12.112}
\]

where V is the volume of the box in which the plane wave is normalized. Equation (12.112) is just the density of states from the box-counting method used earlier, Eq. (11.15), recalling that \vec p = \hbar\vec k. For the plane waves chosen in Eq. (12.59a), V = 1, hence the aforementioned factor in Eq. (12.111).

Because q_min depends on k in Eq. (12.57), we cannot interchange the order of integration for dq and d³k in Eq. (12.111). However, it is possible to integrate over the direction dΩ_k first, then over dq next, and finally over dk. But the middle integration over dq is still not sufficiently simple.

Since we are primarily interested in the total cross section, we make an approximation to simplify the process. We assume that there is an average energy transfer ε independent of k. The value ε must be on the order of I, the ionization potential. A reasonable value to use is ε = 2I. Then, we can interchange the order of integration between dq and d³k. Doing so, we obtain from Eq. (12.111)

\[
\sigma^{FB}_i \simeq \frac{8\pi Z_P^2}{\hbar^2 v^2}\int_{q_{\min}}^{q_{\max}} \frac{dq}{q^3}\, \frac{1}{(2\pi)^3}\int d^3k\, \Bigl|\phi(\vec k + \vec q) - \phi(k)F_{1s}(q)\Bigr|^2, \tag{12.113}
\]

where q_min = ε/ħv. If we could guess ε correctly, Eq. (12.113) would be exact.


Consider the \int d^3k integral in Eq. (12.113),

\begin{align*}
J &= \frac{1}{(2\pi)^3}\int d^3k\, \Bigl|\phi(\vec k + \vec q) - \phi(k)F_{1s}(q)\Bigr|^2 = J_1 - 2F_{1s}(q)J_2 + F_{1s}^2(q)J_3, \tag{12.114a} \\
J_1 &= \frac{1}{(2\pi)^3}\int d^3k\, \bigl|\phi(\vec k + \vec q)\bigr|^2, \qquad J_3 = \frac{1}{(2\pi)^3}\int d^3k\, \bigl|\phi(\vec k)\bigr|^2, \tag{12.114b} \\
J_2 &= \frac{1}{(2\pi)^3}\int d^3k\, \phi(\vec k + \vec q)\,\phi(\vec k). \tag{12.114c}
\end{align*}

The integral J_1 is actually the same as J_3 with the substitution \vec k + \vec q \to \vec k. Furthermore, they are equal to 1, J_1 = J_3 = 1, because the momentum wave function (12.59b) is normalized. The second integral J_2 can be performed as

\begin{align*}
J_2 &= \frac{1}{(2\pi)^3}\int d^3k \int d^3r\, \phi^*_{1s}(r)\, e^{i(\vec k+\vec q)\cdot\vec r} \int d^3r'\, \phi_{1s}(r')\, e^{-i\vec k\cdot\vec r\,'} \\
&= \int d^3r\, \phi^*_{1s}(r)\, e^{i\vec q\cdot\vec r} \int d^3r'\, \phi_{1s}(r')\, \underbrace{\frac{1}{(2\pi)^3}\int d^3k\, e^{i\vec k\cdot(\vec r - \vec r\,')}}_{\delta(\vec r - \vec r\,')} \\
&= \int d^3r\, \phi^*_{1s}(r)\, e^{i\vec q\cdot\vec r} \int d^3r'\, \phi_{1s}(r')\,\delta(\vec r - \vec r\,') \\
&= \int d^3r\, \phi^*_{1s}(r)\, e^{i\vec q\cdot\vec r}\, \phi_{1s}(r) = F_{1s}(q). \tag{12.115}
\end{align*}

Collecting J_1, J_2, and J_3, we find J = 1 − F_{1s}^2(q). Putting J into Eq. (12.113) and setting q_max = ∞, we have a simplified total cross section

\[
\sigma^{FB}_i = \frac{8\pi Z_P^2}{\hbar^2 v^2}\int_{q_{\min}}^\infty \bigl[1 - F_{1s}^2(q)\bigr]\, \frac{dq}{q^3} = \frac{8\pi Z_P^2}{\hbar^2 v^2}\Bigl[\frac{1}{2q_{\min}^2} - \int_{q_{\min}}^\infty F_{1s}^2(q)\, \frac{dq}{q^3}\Bigr]. \tag{12.116}
\]

The remaining integral can be carried out analytically (Exercise E12.11), giving us the final result for the total ionization cross section in the free-particle Born approximation,

\[
\sigma^{FB}_i = \frac{4\pi Z_P^2 a_0^2}{\hbar^2 v^2}\Bigl[\ln\frac{1+\beta}{\beta} - \frac{(4+3\beta)^2}{12(1+\beta)^3}\Bigr], \qquad \beta = \frac{a_0^2 q_{\min}^2}{4}. \tag{12.117}
\]
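For orientation, Eq. (12.117) as printed is simple to evaluate. In atomic units (ħ = a₀ = 1), taking the average energy transfer ε = 2I = 1 a.u. for hydrogen gives q_min = ε/v:

```python
import numpy as np

# Free-particle Born ionization cross section, Eq. (12.117) as printed,
# in atomic units (hbar = a0 = 1).  eps = 2I = 1 a.u. for hydrogen sets
# q_min = eps/v; Zp is the projectile charge.
def sigma_FB(v, Zp=1.0, eps=1.0):
    qmin = eps/v
    beta = qmin**2/4.0
    bracket = (np.log((1.0 + beta)/beta)
               - (4.0 + 3.0*beta)**2/(12.0*(1.0 + beta)**3))
    return 4.0*np.pi*Zp**2/v**2*bracket
```

As expected from the 1/v² prefactor and the slow logarithmic growth of the bracket, the cross section decreases with collision speed at high v.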


12.E The microcanonical ensemble

The CTMC method samples the initial conditions of hydrogenic atoms from a microcanonical ensemble. Let Z and m_T be the charge and mass of the target nucleus, respectively. The ground state energy E is (all in a.u. hereafter)

\[
E = -\frac{1}{2}\frac{\mu}{m_e}Z^2, \qquad \mu = \frac{m_e m_T}{m_e + m_T}, \tag{12.118}
\]

with µ as the reduced mass between the electron and the target nucleus.

The semimajor axis of the elliptic orbits is, from Eq. (A:4.12),

\[
a = -\frac{Z}{2E} = \frac{m_e}{\mu}\frac{1}{Z}. \tag{12.119}
\]

To sample the position of the electron uniformly in time, we express the orbit equation (A:4.13) in terms of the eccentric anomaly ψ in Eq. (A:4.38) rather than θ as

\[
r = a(1 + e\cos\psi), \tag{12.120}
\]

where e is the eccentricity. The range of ψ is still from 0 to 2π, the same as θ. In Eq. (A:4.38), we use the convention that the aphelion is at ψ = 0 and the perihelion at ψ = π (see Figure A:4.3). At these two turning points, the values of ψ and θ are equal.

The elapsed time t and the eccentric anomaly ψ are related by Kepler's equation (A:4.39),

\[
\frac{2\pi t}{T} = \psi + e\sin\psi, \tag{12.121}
\]

where T is the orbital period. Equation (12.121) shows that when t changes over one period, ψ goes from 0 to 2π.

In Monte Carlo sampling of the initial condition, a random t/T is selected, and Kepler's equation is solved for ψ. The radius r can be determined from Eq. (12.120), and the angular position θ can be found by combining Eqs. (A:4.13) and (A:4.38),

\[
\cos\theta = \frac{\cos\psi + e}{1 + e\cos\psi}, \qquad \sin\theta = \frac{\sqrt{1-e^2}\,\sin\psi}{1 + e\cos\psi}. \tag{12.122}
\]
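Solving Kepler's equation (12.121) by bisection is straightforward because ψ + e sin ψ is monotonic in ψ (its derivative 1 + e cos ψ is positive for e < 1). The function below is a minimal standalone version of what the rootfinder.bisect call does in Program 12.2:

```python
import numpy as np

# Solve Kepler's equation (12.121), psi + e*sin(psi) = 2*pi*t/T, for the
# eccentric anomaly psi by bisection on [0, 2*pi].
def eccentric_anomaly(t_over_T, ecc, tol=1e-12):
    target = 2.0*np.pi*t_over_T
    lo, hi = 0.0, 2.0*np.pi
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if mid + ecc*np.sin(mid) < target:   # left side is monotonic
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

A uniformly random t/T fed through this root finder samples ψ, and hence the orbital position, uniformly in time.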


It can be shown that the position and velocity in Cartesian coordinates are given by (Exercise E12.13)

\begin{align*}
x &= a\frac{\mu}{m_e}(\cos\psi + e), & y &= a\frac{\mu}{m_e}\sqrt{1-e^2}\,\sin\psi, \tag{12.123a} \\
v_x &= -\frac{\sin\psi}{1 + e\cos\psi}\, v_0, & v_y &= \frac{\sqrt{1-e^2}\,\cos\psi}{1 + e\cos\psi}\, v_0, \tag{12.123b}
\end{align*}

where v_0 is the average orbital speed defined as

\[
v_0 = \sqrt{-\frac{2E}{\mu}} = Z. \tag{12.124}
\]
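A quick consistency check of Eqs. (12.123a)–(12.123b): every sampled point must lie on the energy shell. Taking a = v₀ = Z = 1 with μ/m_e ≈ 1 (infinitely heavy target), the total energy should come out to −1/2 a.u. for every eccentricity and eccentric anomaly:

```python
import numpy as np

# Energy-shell check of Eqs. (12.123a)-(12.123b): with a = 1 and v0 = 1,
# E = v^2/2 - 1/r must equal -1/2 for all e and psi.
a, v0 = 1.0, 1.0
energies = []
for ecc in (0.0, 0.3, 0.9):
    for psi in np.linspace(0.0, 2.0*np.pi, 13):
        x = a*(np.cos(psi) + ecc)
        y = a*np.sqrt(1.0 - ecc**2)*np.sin(psi)
        vx = -np.sin(psi)/(1.0 + ecc*np.cos(psi))*v0
        vy = np.sqrt(1.0 - ecc**2)*np.cos(psi)/(1.0 + ecc*np.cos(psi))*v0
        energies.append(0.5*(vx*vx + vy*vy) - 1.0/np.hypot(x, y))
```

One can verify by hand that r = a(1 + e cos ψ), reproducing Eq. (12.120), and that v² = v₀²(1 − e cos ψ)/(1 + e cos ψ), so the check holds identically.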

The orbit thus obtained is in the x-y plane. To randomize the orientation of the orbit, we rotate it through a set of Euler angles, which are three successive rotations [13]. Imagine that the axes of a coordinate system to be rotated initially coincide with the parent axes. Each rotation results in a set of new, primed axes. The Euler rotation is as follows: a first rotation of φ about the z-axis, a second rotation of θ about the x′-axis, and a third rotation of ψ (not the eccentric anomaly) about the z′′-axis. Taking the rotations to act on the vectors (active rotation), the net Euler rotation matrix is

\[
A = [A_{z''}(\psi)A_{x'}(\theta)A_z(\phi)]^T =
\begin{pmatrix}
c_1c_3 - s_1c_2s_3 & -c_1s_3 - s_1c_2c_3 & s_1s_2 \\
s_1c_3 + c_1c_2s_3 & -s_1s_3 + c_1c_2c_3 & -c_1s_2 \\
s_2s_3 & s_2c_3 & c_2
\end{pmatrix}, \tag{12.125}
\]

where the A_i are the rotation matrices (A:4.76) about axis i, and the c_i, s_i are

\[
c_1 = \cos\phi, \quad s_1 = \sin\phi, \quad c_2 = \cos\theta, \quad s_2 = \sin\theta, \quad c_3 = \cos\psi, \quad s_3 = \sin\psi. \tag{12.126}
\]

To rotate the axes rather than vectors, use the transpose of Eq. (12.125).

12.F Time-dependent leapfrog method

When the projectile is heavy compared to the electron, its trajectory can be approximated by a straight line with constant velocity. The trajectory of the projectile is described by

\[
\vec r_P(t) = \vec b + vt\,\hat z. \tag{12.127}
\]

We have assumed that the z coordinate of the projectile is z = 0 at t = 0. Assuming the target nucleus is also infinitely heavy, the three-body problem is reduced to an effective one-electron problem in a time-dependent field.

We define the time-dependent transformation for the leapfrog method as

\[
\Omega(\vec r, t) = \frac{1}{r} + \frac{1}{|\vec r - \vec r_P|}. \tag{12.128}
\]

The rest of the derivation follows from the time-independent case, provided that Eq. (12.67) is modified to take into account the explicit dependence on time,

\[
\frac{dW}{dt} = \nabla\Omega\cdot\vec v + \frac{\partial\Omega}{\partial t}. \tag{12.129}
\]

Accordingly, the transformed equations of motion in the fictitious time s are

\begin{align*}
\frac{d\vec r}{ds} &= \frac{1}{W}\vec v, & \frac{dt}{ds} &= \frac{1}{W}, \tag{12.130a} \\
\frac{d\vec v}{ds} &= \frac{1}{\Omega}\vec a, & \frac{dW}{ds} &= \frac{1}{\Omega}\Bigl(\vec\Gamma\cdot\vec v + \frac{\partial\Omega}{\partial t}\Bigr), \tag{12.130b}
\end{align*}

with

\begin{align*}
\vec a &= \frac{1}{m_e}\Bigl(-Z_T\frac{\vec r}{r^3} - Z_P\frac{\vec r - \vec r_P}{|\vec r - \vec r_P|^3}\Bigr), \tag{12.131a} \\
\vec\Gamma &= \nabla\Omega = -\frac{\vec r}{r^3} - \frac{\vec r - \vec r_P}{|\vec r - \vec r_P|^3}, \qquad \frac{\partial\Omega}{\partial t} = \frac{v(z - vt)}{|\vec r - \vec r_P|^3}. \tag{12.131b}
\end{align*}

Application of the leapfrog method to Eqs. (12.130a)–(12.130b) is straightforward, and leads to the equivalents of Eqs. (12.70a)–(12.70c) of the time-independent case.
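To make the structure concrete, here is a single-particle sketch of one step of the time-transformed leapfrog for the simpler case Ω = 1/r (electron in a fixed Coulomb field, no projectile). The half-drift/kick/half-drift splitting and the names are our own assumptions; the book's Eqs. (12.70a)–(12.70c) give the actual N-body scheme:

```python
import numpy as np

# One step of the time-transformed leapfrog for Omega = 1/r (a.u.):
# half step in (r, t) at rate 1/W, full step in (v, W) at rate 1/Omega
# using the midpoint velocity, then a second half step in (r, t).
def leapfrog_tt_step(r, v, t, W, h):
    r = r + 0.5*h*v/W                      # half drift
    t = t + 0.5*h/W
    d = np.sqrt(r @ r)
    Omega = 1.0/d                          # time transformation
    a = -r/d**3                            # Coulomb acceleration
    Gamma = -r/d**3                        # grad(Omega)
    v_old = v
    v = v + h*a/Omega                      # kick
    W = W + h*(Gamma @ (0.5*(v_old + v)))/Omega
    r = r + 0.5*h*v/W                      # half drift with updated W
    t = t + 0.5*h/W
    return r, v, t, W

# eccentric Kepler orbit, E = -1/2 a.u., started at aphelion r = 1.5
r = np.array([1.5, 0.0, 0.0])
v = np.array([0.0, np.sqrt(2.0*(-0.5 + 1.0/1.5)), 0.0])
t, W = 0.0, 1.0/np.sqrt(r @ r)             # W initialized to Omega
E0 = 0.5*(v @ v) - 1.0/np.sqrt(r @ r)
for i in range(20000):
    r, v, t, W = leapfrog_tt_step(r, v, t, W, 0.001)
E1 = 0.5*(v @ v) - 1.0/np.sqrt(r @ r)
```

Because the step in physical time shrinks like dt = ds/W near the nucleus, the perihelion passages of this e = 0.5 orbit are resolved automatically and the energy drift stays small.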

12.G Program listings and descriptions


Program listing 12.2: CTMC simulations (ctmc.py)

import ode, vpm, rootfinder as rtf
import numpy as np, visual as vp, random as rnd

def eulermat(phi, theta, psi):              # Euler rotation on vectors
    c1, s1 = np.cos(phi), np.sin(phi)       # transpose to rotate axes
    c2, s2 = np.cos(theta), np.sin(theta)
    c3, s3 = np.cos(psi), np.sin(psi)
    c4, s4 = c1*c2, s1*c2                   # cos(phi)cos(theta), sin(phi)cos(theta)
    return np.array([[c1*c3 - s4*s3, -c1*s3 - s4*c3,  s1*s2],
                     [s1*c3 + c4*s3, -s1*s3 + c4*c3, -c1*s2],
                     [s2*s3,          s2*c3,          c2   ]])

def nbody(id, r, v, t):                     # N-body Coulomb system
    if (id == 0): return v                  # velocity
    a = np.zeros((N,3))                     # acceleration
    Omega, Gamma = 0.0, np.zeros((N,3))     # Omega, grad(Omega)
    for i in range(N):
        for j in range(i+1, N):
            rij = r[i] - r[j]
            r2 = rij[0]*rij[0] + rij[1]*rij[1] + rij[2]*rij[2]
            r1 = np.sqrt(r2)
            fij, Zij = rij/(r2*r1), Z[i]*Z[j]   # Coulomb force
            a[i] += Zij*fij
            a[j] -= Zij*fij                 # 3rd law
            Omega += 1./r1
            Gamma[i] -= fij                 # Eq. (12.69)
            Gamma[j] += fij
        a[i] = a[i]/mass[i]
    return a, Omega, Gamma                  # omit Omega, Gamma for reg. leapfrog

def initialize(b, vcol):                    # microcanonical sampling
    kepler = lambda x: phi - x - ecc*np.sin(x)      # Kepler's eqn
    r, v, pi = np.zeros((N,3)), np.zeros((N,3)), np.pi
    r[P], v[P] = np.array([b, 0., -R]), np.array([0., 0., vcol])
    ecc, phi = np.sqrt(rnd.random()), rnd.random()*2*pi
    psi = rtf.bisect(kepler, 0., 2*pi, 1.e-12)      # eccentric anomaly
    cos, sin = np.cos(psi), np.sin(psi)
    rm = mass[T]/(mass[T] + mass[e])        # mass ratio
    a, eroot = 1./(Z[T]*rm), np.sqrt((1-ecc)*(1+ecc))
    r[e] = a*rm*np.array([cos + ecc, sin*eroot, 0.])
    v[e] = Z[T]*np.array([-sin, cos*eroot, 0.])/(1 + ecc*cos)
    phi, theta, psi = rnd.random(), rnd.random(), rnd.random()
    mat = eulermat(phi*2*pi, theta*pi, psi*2*pi)
    r[e], v[e] = np.dot(mat, r[e]), np.dot(mat, v[e])   # Euler angles
    return r, v

R, N = 20.0, 3                              # P-T dist, N bodies
P, T, e = 0, 1, 2                           # Proj, Target, electron
Z = np.array([1.0, 1.0, -1.0])              # charges: Zp, Zt, Ze
mass = [1836., 1836., 1.0]                  # masses: Mp, Mt, Me
t, h, bmax, vcol = 0.0, 0.01, 3., 1.5       # vcol = collision speed

body, path, ic, Nmc = [], [], 0, 1000
r, clr = [[0., 0., -R]]*3, [[1,1,0], [0,1,0], [1,1,1]]
scene = vp.display(background=(.2,.5,1), up=(1,0,0), forward=(0,0,-1))
for i in range(N):                          # draw scene
    body.append(vp.sphere(pos=r[i], radius=0.4, color=clr[i]))
    path.append(vp.points(pos=r[i], size=2, color=clr[i]))

for j in range(Nmc):
    b = np.sqrt(rnd.random())*bmax          # sample b
    r, v = initialize(b, vcol)              # initialize
    a, w, Gamma = nbody(1, r, v, t)         # get w
    while (abs(r[P][2]) <= R):              # exit trial if |z|>R
        r, v, t, w = ode.leapfrog_ttN(nbody, r, v, t, w, h)
        ic, key = ic + 1, vpm.wait(scene)   # pause on key press
        vp.rate(2000)
        if (ic%10 == 0):
            for i in range(N):
                body[i].pos = r[i]          # move particles
                path[i].append(pos=r[i])
    for i in range(N): path[i].pos = []     # erase trails

We can study three-body atomic reactions and dynamics using Program 12.2 within the CTMC framework. The main advantage of the program is that it uses the efficient and robust self-adjusting N-body leapfrog algorithm to integrate the equations of motion. The algorithm is efficient because it calculates the forces only once per step, but it still maintains high accuracy because close encounters are smoothed (regularized) by a proper transformation (Section 12.5.1).

The function eulermat() returns the rotation matrix of Euler angles (12.125), assuming active rotation of vectors. If we wish to rotate the axes instead (see the interpretations of rotation in Section A:4.B), we just use the transpose of the resultant matrix.

The dynamics of the N-body Coulomb system is calculated by nbody(). It is written in a general manner even though we are dealing with a specific three-body system. Its function is the same as the identically named function in the molecular dynamics code (Program A:11.7), but tailored to Coulomb interactions, and with the additional feature of computing Ω and \vec Γ = ∇Ω for the N-body leapfrog method with time transformation, Eqs. (12.66) and (12.69). Even though the flag id=0 is not used by the transformed leapfrog method, it is kept in place so the function can also be used by the standard leapfrog method. In that case, remove Ω and Γ from the list of returned variables at the end.

This N-body function is direct, and is suitable for few-body problems, N ≲ 5. For larger N, use Program 12.3.

We sample the initial conditions from a microcanonical ensemble of hydrogenic atoms via initialize(), which takes as arguments the impact parameter and the collision speed. It first defines Kepler's equation (12.121) as an inline function. The projectile is assumed to start at (b, 0, −R) moving in the z direction initially, where R is a large distance from the target. Next, the eccentricity e (ecc) is sampled, and a random phi is selected in [0, 2π], equivalent to sampling time uniformly over the orbital period (12.121). The eccentric anomaly ψ is obtained by solving Kepler's equation with the bisection method. Once ψ is known, the position and the velocity of the electron are found from Eqs. (12.123a) and (12.123b). Finally, they are rotated by a random set of Euler angles to complete the microcanonical sampling.

The main program starts by defining the necessary parameters, including R and the charges and masses of the projectile, target nucleus, and electron, which are labeled P, T, and e, respectively, for easy reference.

After setting up the visual scene, including the bodies and their trails, the program enters the main loop for Nmc trials. It first selects the impact parameter b such that b² is uniformly distributed in [0, b²_max], and initializes the collision system, including Ω. The parameter b_max should be adjusted if production calculations are performed. The while loop integrates the system using the N-body transformed leapfrog method, animating the system periodically (every 10 steps in this case). The integration terminates when the z position of the projectile exceeds R, which may also need to be adjusted depending on the projectile and energy. At the end of the trial, the trails are cleared before the next trial run begins.

For production runs, the events must be classified at the end of each run (Table 12.2). The cumulative events can then be analyzed to obtain the cross sections (Project P12.8).
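The classification itself can be as simple as comparing final two-body energies. The sketch below is our own minimal version (the function name and outcome labels are hypothetical; the book's Table 12.2 defines the actual outcome categories):

```python
import numpy as np

# Minimal post-run event classifier (illustrative only): the electron is
# bound to whichever nucleus gives a negative two-body energy; if bound
# to neither, the event counts as ionization.  Atomic units.
def classify(r, v, Z, mass, P=0, T=1, e=2):
    def e2body(i):                      # electron energy relative to nucleus i
        d = np.linalg.norm(r[e] - r[i])
        mu = mass[e]*mass[i]/(mass[e] + mass[i])
        vrel = v[e] - v[i]
        return 0.5*mu*(vrel @ vrel) + Z[i]*Z[e]/d
    if e2body(T) < 0.0: return 'target'     # elastic or excitation
    if e2body(P) < 0.0: return 'capture'
    return 'ionization'

Z = np.array([1.0, 1.0, -1.0])
mass = [1836., 1836., 1.0]
```

For example, an electron left at rest near the target classifies as 'target', one co-moving with a receding projectile as 'capture', and one flying off fast relative to both as 'ionization'.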

Program listing 12.3: N -body Coulomb system (nbody.py)

def nbody(id, r, v, t):                     # N-body Coulomb system, optimized
    if (id == 0): return v                  # velocity
    a = np.zeros((N,3))                     # acceleration
    Omega, Gamma = 0.0, np.zeros((N,3))     # Omega, grad(Omega)
    for i in range(N):
        rij = r[i] - r[i+1:]                # rij for all j>i
        r2 = np.sum(rij*rij, axis=1)        # |rij|^2
        r1 = np.sqrt(r2)
        r3 = r1*r2
        Zij = Z[i]*Z[i+1:]                  # can be pre-computed
        for k in [0,1,2]:
            fij = rij[:,k]/r3
            Gamma[i,k] -= np.sum(fij)       # Eq. (12.69)
            Gamma[i+1:,k] += fij
            fij = Zij*fij                   # Coulomb force
            a[i,k] += np.sum(fij)
            a[i+1:,k] -= fij                # 3rd law
        Omega += np.sum(1./r1)
        a[i] = a[i]/mass[i]
    return a, Omega, Gamma

This function is intended for larger N-body Coulomb systems. It is equivalent to the same function in Program 12.2, but much faster for larger N. Make sure the charges Z are stored in a NumPy array. If speed is paramount, consider using F2Py as in Program 11.3.


Bibliography

[1] M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell. Observation of Bose-Einstein condensation in a dilute atomic vapor. Science, 269:198–201, (1995).

[2] G. B. Arfken and H. J. Weber. Mathematical methods for physicists. (Academic Press, New York), 6th edition, 2005.

[3] M. Belloni and W. Christian. Time development in quantum mechanics using a reduced Hilbert space approach. Am. J. Phys., 76:385–392, (2008).

[4] B. H. Bransden and C. J. Joachain. Physics of atoms and molecules. (Prentice Hall, New York), 2002.

[5] T. A. Brody, J. Flores, J. B. French, P. A. Mello, A. Pandey, and S. S. M. Wong. Random-matrix physics: spectrum and strength fluctuations. Rev. Mod. Phys., 53:385–479, (1981).

[6] J. W. Cooley and J. W. Tukey. An algorithm for the machine calculation of complex Fourier series. Math. Comp., 19:297–301, (1965).

[7] S. N. Coppersmith. A simpler derivation of Feigenbaum's renormalization group equation for the period-doubling bifurcation sequence. Am. J. Phys., 67:52–54, (1999).

[8] A. S. Davydov. Quantum mechanics. (Pergamon, New York), 1976.


[9] J.-L. Deneubourg, S. Aron, S. Goss, and J. M. Pasteels. The self-organizing exploratory pattern of the Argentine ant. J. Insect Behav., 3:159–168, (1990).

[10] P. DeVries and J. Hasbun. A first course in computational physics. (Jones & Bartlett, Sudbury, MA), 2010.

[11] W. L. Fite, A. C. H. Smith, and R. F. Stebbings. Charge transfer in collisions involving symmetric and asymmetric resonance. Proc. R. Soc. Lond., A268:527–536, (1962).

[12] M. Gardner. Mathematical games: The fantastic combinations of John Conway's new solitaire game "life". Sci. Am., 223:120–123, (1970).

[13] H. Goldstein, C. Poole, and J. Safko. Classical mechanics. (Addison Wesley, New York), 2002.

[14] E. F. Goodrich. Numerical determination of short-period Trojan orbits in the restricted three-body system. Astron. J., 71:88–93, (1966).

[15] H. Gould and J. Tobochnik. Statistical and thermal physics with computer applications. (Princeton University Press, Princeton, NJ), 2010.

[16] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comp. Phys., 73:325–348, (1987).

[17] D. J. Griffiths. Introduction to quantum mechanics. (Prentice Hall, Upper Saddle River, NJ), 2005.

[18] M. C. Gutzwiller. Chaos in classical and quantum mechanics. (Springer, New York), 1990.

[19] W. Heitler. The quantum theory of radiation. (Dover, New York), 1984.

[20] E. J. Heller. Bound-state eigenfunctions of classically chaotic Hamiltonian systems: scars of periodic orbits. Phys. Rev. Lett., 53:1515–1518, (1984).

[21] C. J. Joachain. Quantum collision theory. (North-Holland, Amsterdam), 1983.


[22] D. L. Kaufman, I. Kosztin, and K. Schulten. Expansion method for stationary states of quantum billiards. Am. J. Phys., 67:133–141, (1999).

[23] B. S. Kerner. Experimental features of self-organization in traffic flow. Phys. Rev. Lett., 81:3797–3800, (1998).

[24] H. J. Korsch, E. M. Graefe, and H.-J. Jodl. The kicked rotor: Computer-based studies of chaotic dynamics. Am. J. Phys., 76:498–503, (2008).

[25] G. D. Mahan. Quantum mechanics in a nutshell. (Princeton University Press, Princeton, NJ), 2009.

[26] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. J. Chem. Phys., 21:1087–1092, (1953).

[27] M. H. Mittleman. Introduction to the theory of laser-atom interactions. (Plenum, New York), 1993.

[28] R. E. Olson and A. Salop. Charge-transfer and impact-ionization cross sections for fully and partially stripped positive ions colliding with atomic hydrogen. Phys. Rev. A, 16:531–541, (1977).

[29] J. T. Park, J. E. Aldag, J. M. George, and J. L. Peacher. Cross sections for excitation of atomic hydrogen to the n = 2, 3, and 4 states by 15-200-keV protons. Phys. Rev. A, 14:608–614, (1976).

[30] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical recipes: the art of scientific computing. (Cambridge Univ. Press, Cambridge), 1992.

[31] J. J. Sakurai and J. Napolitano. Modern quantum mechanics. (Addison Wesley, New York), 2011.

[32] D. V. Schroeder. An introduction to thermal physics. (Addison Wesley, New York), 1999.

[33] M. B. Shah and H. B. Gilbody. Experimental study of the ionisation of atomic hydrogen by fast H+ and He2+ ions. J. Phys. B, 14:2361–2377, (1981).

Page 278: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

270 BIBLIOGRAPHY

[34] R. V. Sole, E. Bonabeau, J. Delgado, P. Fernandez, andJ. Marın. Pattern formation and optimization in armyant raids. Technical report, Santa Fe Institute, (1999).http://EconPapers.repec.org/RePEc:wop:safiwp:99-10-074.

[35] S. H. Strogatz. Nonlinear dynamics and chaos. (Westview Press, Cam-bridge, MA), 1994.

[36] L. H. Thomas. On the capture of electrons by swiftly moving electrifiedparticles. Proc. R. Soc. Lond., A114:561–576, (1927).

[37] J. Wang and J. D. Champagne. Simulation of quantum systems withthe coupled channel method. Am. J. Phys., 76:493–497, (2008).

[38] J. Wang and R. E. Olson. Dominance of the Thomas mechanism forelectron capture from orientated Rydberg atoms. Phys. Rev. Lett.,72:332–335, (1994).

Page 279: To my Father and Mother, who taught me that knowledge is ... · a3-0.0 4785 3103 2.730 6731 0.00 5498 1613 -0.0 5003 8980 a4 0.00 3355 7735 0.1057 7423 0.000 4245 7974 -0.00 3566

Index

Page numbers with the "A:" prefix are entries from the print edition.

accelerated relaxation, A:231
  Gauss-Seidel method, A:231
air resistance, A:58–A:60
  linear, A:62
  quadratic, A:70
animation
  ant raids, 145
  atomic collision, 228
  baseball, realistic flight, A:75
  bouncing ball, A:27
  dipole radiation field, A:253
  electric field hockey, A:226
  falling tablecloth
    mechanical, A:215
    thermal, A:405
  game of life, 138
  Halley's comet, 15
  laser-driven coherent state, 94
  magnetic field, A:252
  Mandelbrot fractals, A:172
  N-body simulator, A:410
  oscillation of slinky, 52
  plane electromagnetic wave, A:254
  planetary motion
    Earth, A:93
    Mercury, A:98
  precession of Mercury, A:101
  projectile motion, A:57
  quantum revival, 2D, A:293
  quantum scattering from a barrier, 79
  quantum wavepacket
    in free fall, A:285
    in SHO, A:278
  relaxation of electric potential, A:229
  shooting for eigenenergies, A:314
  simple harmonic oscillator, A:184
  soccer, A:80
  spin flip, A:400, A:418
  strange butterfly attractor, A:166
  thermal equilibrium, A:424
  Thomson problem, 67
  three-body motion
    choreography, A:114
    collinear, A:113


  Trojan asteroids, A:122
  wave on a membrane, A:214
ants raiding pattern, 142, 147
atomic form factor, 214
  elastic, 220
atomic reaction, 211–221
  antiproton impact, 247
  capture, 229, 232
    cross section, 230, 233
    Thomas mechanism, 234
  excitation, 162, 214
    cross section, 215, 216, 230
    electron impact, 245
  ionization, 162, 218
    cross section, 218, 230
    electron impact, 248
    free-particle model, 220
    positron impact, 248
atomic structure, 65, see also hydrogenic atom
atomic units, A:273
attractor, A:147, A:160, A:165
  dimension, see fractal
band matrix, A:283
  representation, A:284
  solver, see SciPy
baseball, A:73
  animation, A:75
  curveball, A:76
  drag coefficient, A:61
  lift coefficient, A:73
basis expansion method, A:327
  box basis, A:328
  half-open space, A:331
  SHO basis, A:329
bifurcation, A:150, see also chaos
binomial
  coefficient, A:414
  distribution, A:365
Bohr model, 225, 232, A:334
Boltzmann distribution, A:391, A:399
Bose-Einstein condensation, 170
  chemical potential, 171
  critical temperature, 172
  scattering length, 206
bound states, A:313
  central field potentials, A:332
  double square well, A:318
  gerade and ungerade, A:327
  Morse potential, A:331
  periodic multiple wells, A:320
  square well, A:318
boundary value problem
  Dirichlet boundary condition, A:202, A:228, A:322, A:337
  mixed boundary conditions, 135
  Neumann boundary condition, A:202, A:228, A:256
Brownian motion, A:276, A:364, A:367–A:369, A:376
  simulator, A:380
C/C++, 193, A:11, A:26, A:276
catenary, 47
celestial mechanics, A:92
central field approximation, 165, A:104, A:332
central field motion, A:94
centrifugal
  force, A:117, A:119, A:132
  potential, 127, A:95, A:117, A:119, A:334
chaos, A:153
  bifurcation, A:151, A:162
  kicked rotor, 19, 27


  Lorenz model, A:164
  Lyapunov exponent, A:155
  nonlinear driven oscillator, A:157
  Poincaré map, 134, A:160, A:167
  Poincaré surface of section, A:162
  stadium billiard, 22, 28
  strange attractor, A:165, A:178
  time scale, A:159
  weather system, A:165
chemical potential, 170
circle
  approximation of π, A:375
  midpoint drawing algorithm, 71
  quantum dot, 118
classical scattering, see scattering
classical trajectory Monte Carlo, 221–236
  animation, 228
  microcanonical ensemble, 225, 260
  straightline approximation, 229, 248
coherent state, 88, A:293
  measurement problem, A:293
comet, 15–18
  Halley's, 15
  ISON, 15
commutator, 76
Coriolis effect, A:117, A:120
coupled channel method, 86, 90, 110
Crank-Nicolson method, A:282
Curie's law, A:416
Cython, 110, 193, A:11, A:276, A:427
debugging, A:13
deflection function, see scattering
density of states, 171, 258
differentiation operator, A:248, see also radial basis function
dipole selection rule, 93
Dirac δ atom, A:325, A:348
  δ comb, 129
  δ molecule, 128, A:325, A:345
  cusp, A:325
displacement of a string, A:194
drag force, see also air resistance
  Brownian motion, A:367
  coefficient, A:60
  empirical formula, A:61
  quadratic, A:60
  viscosity, A:59
eccentric anomaly, 260, A:109
eccentricity, A:96, A:456
Ehrenfest theorem, 96, A:280, A:299
eigenvalue problem, A:192, A:211, A:322, A:328
  generalized, A:191, A:324, A:337, A:360
  Jacobi transformation, A:192
Einstein solid, A:382–A:390
  energy distribution, A:384
  entropy, A:387, A:388, A:416
  interacting systems, A:388, A:417
  temperature, A:392
electric field hockey, A:226
electric potentials and fields, A:228
  disk in a box, A:249
  parallel plates, A:229
  unit square, A:244
electromagnetic waves
  dipole radiation, A:253
  plane waves, A:254
electrostatic equilibrium
  on a sphere, 65


  plum-pudding model, 65
electrostatic potential energy, A:369, A:375
energy band, A:321
entropy, A:387, A:422
  Einstein solid, A:387
  heat and temperature, A:398
  Ising model, A:397
  paramagnetic system, A:416
envelope function, 92, 96
equipartition theorem, A:376, A:412
error, see numerical error
Euler method, A:29
Euler rotation, 261
Euler-Cromer method, A:48, A:185, A:222
evolution operator, A:282
  approximate, 77
exoplanets, A:107–A:111
  HD 139357, A:108
  HD 3651, A:110, A:129
  modeling RV datasets, A:109
  radial velocity method, A:107
expectation value, 90, 96, A:279, A:324
F2Py, 110, 191, 193, 247, 266, A:11, A:276, A:427
falling tablecloth
  mechanical, A:215
  thermal, A:405
fast Fourier transform, 30–44, A:170
  aliasing, 40
  iterative FFT, 38
  Nyquist frequency, 44, A:208
  Parseval's relation, A:169
  positive and negative frequencies, 42
  recursive FFT, 34
  two-dimensional, A:295
  wave function, A:287
fast multipole method, 193
finite difference method
  displacement of a string, A:196
  error, A:245
  Laplace equation, A:229
  quantum dot, A:336
  quantum eigenenergies, A:321
  standing waves, A:210
  waves on a membrane, A:212
  waves on a string, A:206
finite element method
  accuracy, A:245
  basis functions, A:199, A:233
  building system matrix, A:239
  data structure, A:241
  Dirac δ atom, A:325, A:348
  Dirac δ molecule, 129
  displacement of a string, A:199
  error, A:340
  FEM library, A:338
  Laplace equation, A:233
  mesh file format, A:362
  mesh generation, A:237, A:241, A:265, A:338
  mixed boundary conditions, 135
  nodes and elements, A:235
  Schrödinger equation, A:322
  stiffness matrix, A:203
  tent functions, A:233, A:325
fitting, A:110
fixed-point number, A:15
floating point, A:16
  bit shifting, A:17
  byte string, A:16
  machine accuracy, A:3, A:17


  mantissa, A:3, A:16
  phantom bit, A:16
  round-off error, A:17
football, A:86
Fortran, 191, A:11, A:194, A:276
Fourier transform, A:169, see also fast Fourier transform
fractals, A:170
  Cantor set, A:175
  correlation dimension, A:172
  Hausdorff dimension, A:171
  Julia set, A:176
  Koch curve, A:171
  Mandelbrot fractals, A:172
  Sierpinski carpet, A:176
free fall, A:29
  animation, A:27
  Euler's method, A:30
  momentum profile, A:289
  quantum mechanical, A:281, A:285
  Runge-Kutta methods, A:35
game of life, 137
Gauss elimination, 55
  Gauss-Jordan method, 56
  Gauss-Seidel method, 56
Gaussian distribution, 140, A:374
golden mean, A:5, A:314
  recursion, A:5
golf, A:77
  drag and lift, A:77
Green's function, 199
Halton sequence, A:249
heat capacity, A:391
Heisenberg's uncertainty principle, 44, A:208, A:273, A:289, A:342
Hilda asteroids, see restricted three-body problem
Hooke's law, A:99, A:188, A:218
hydrogen molecule, vibrational states, A:331
hydrogenic atom, A:332–A:336
  l degeneracy, A:332
  angular probability density, A:335
  Hulthén potential, A:349
  modified potential, 1/r^(1+ε), A:332, A:348
  radial equation, A:332
  radial probability density, A:334
  radial wave function, 106, 256, A:333
  screened Coulomb potential, A:348
  shell structure, A:334
ideal gas law, A:413
importance sampling, A:371
installation, A:17–A:18
integral equation, 198
IPython, A:2, A:17, A:19, A:21
  IVisual, A:12
  Matplotlib inline, A:11
Ising model, A:392–A:399
  1D, A:392
  2D, A:399–A:404, A:419
  3D, 179
  antiferromagnetism, 178, 187, A:392
  critical temperature, A:400, A:403
  energy, A:395, A:400
  entropy, A:397, A:398, A:416, A:418, A:419
    computation, A:418
  exact solution in 2D, A:422
  ferromagnetism, A:392


  heat capacity, A:397, A:402
  hysteresis, 181, A:419
  magnetization, A:395, A:400
    staggered, 178, A:424
  mean field approximation, 177, 180, 181, 184
  partition function, A:416
  phase transition, A:400
  spin domains, A:400
  toward equilibrium, A:395
IVisual, A:12, A:17
Jupiter, see precession of Mercury, see also restricted three-body problem
  pull on the Sun, A:107
Kansa's method for PDEs, A:248, see also radial basis function
Kepler orbit, A:96, A:108
  Kepler's third law, A:98
  planets, A:96
Kepler's equation, 260, A:110
Lagrange points, 233, A:119
Lambert W function, 129, A:67–A:70, A:345
  approximate formulas, 7
  Bose-Einstein condensation, 171
  Dirac δ molecule, A:326
  evaluation, A:68
  projectile motion, linear drag, A:69
laminar flow, A:60
Langevin equation, A:367, A:376
Laplace equation, A:228
  additivity rule, A:258
Laplace operator
  nine-point discretization, A:258
laser-electron interaction, 92
  strong fields, 98
leapfrog method, A:43–A:48, A:52
  area-preserving, A:44
  space discretized, A:274
  time dependent, 261
  time transformation, A:101
  N-body system, 222
least square fitting, A:110, A:137
Lennard-Jones potential, 132, A:407
Levinson theorem, A:441, A:448, A:451, A:454
lift force, A:72, see also Magnus force
linear combination of atomic orbitals (LCAO), 129, A:320, A:327
linear interpolation, A:85, A:347
Lippmann-Schwinger equation, 200
logistic map, A:144–A:153
  bifurcation, A:151
  Feigenbaum number, A:151, A:152
  fixed points, A:147
  Lyapunov exponent, A:156
  period doubling, A:150
  renormalization, 28
Lorenz flow, A:163, see also chaos
Lyapunov exponent, A:155, see also chaos
magnetic field, A:251
  closed loop, A:251
  long wire, A:252
magnetization, A:395, A:400, A:416
Magnus force, A:72
  lift coefficient, A:73
Matlab, A:11


Matplotlib, A:11, A:20
  2D plots, plot, A:8, A:20–A:21, A:31
  3D plots
    Axes3D, plot_surface, A:142, A:263, A:266, A:270
    scatter, 60, A:178
  animation, 148, A:307, A:380, A:424
  aspect ratio, A:142, A:270
  axis
    label, A:8
    limit, A:138, A:182
    off, A:268
    semilog scale, A:179
    width, A:21
  bitmap images, imshow, 150, A:183, A:268, A:311, A:424
  color, A:20
  colorbar, A:268
  configuration records, A:21
  contour
    filled, contourf, 96, A:454
    lines, contour, A:142, A:263, A:266
  error bar, A:138, A:385
  font size, A:21
  frame
    off, A:268
    spacing, A:21
  histogram, 136, A:425
  IPython inline, A:11
  legend, 114, A:198, A:223
    frame off, 136
  line
    style, A:20
    width, A:21, A:142
  marker, A:20
    color, 60
  multiple plots, subplot, A:181, A:182, A:311
  polar plot, A:452
  pylab, A:11
  step plot, step, 117
  text label, A:20
    LaTeX math mode, A:142
  tick marks, A:311, A:461
  triangular mesh plot, tripcolor, A:259, A:266, A:360
  triangular mesh surface, plot_trisurf, A:360
  vector fields, quiver, A:142, A:263, A:269, A:379
Maxwell distributions, A:410
mean free path, 161
  energy dependent, 183
Mercury, see precession of Mercury
meshfree method, A:247, see also radial basis function
MeshPy, A:266, A:338
Metropolis algorithm, A:393–A:394, A:398–A:399
model building, A:60
molecular dynamics, 222, A:406–A:410
  close-neighbor interaction, A:409
  equipartition theorem, A:412
  ideal gas law, A:413
  initial condition, A:410, A:427
  Maxwell distributions, A:412, A:420
  optimization, 191, A:427
  periodic boundary condition, A:408–A:410
  pressure, A:412


  second virial coefficient, A:413, A:421
  units, A:407
Monte Carlo integration, A:307, A:369, A:375
  error, A:371
  hit-or-miss method, A:375
Monte Carlo simulation
  ants, 145
  Einstein solid, A:382
  nuclear decay, A:364
  particle transport, 160
  simulated annealing, A:404
    falling tablecloth, A:405
    hanging chain, 153
    hanging tablecloth, A:419
    traveling salesman problem, 157, 181
Morse potential, 131, A:331, A:407
Navier-Stokes equations, see Lorenz flow
Newton
  second law, A:27
  third law, A:72
nuclear decay, A:363
Numba, 110, 193, A:11, A:183, A:276
numerical differentiation
  first order, A:29
  midpoint method, A:33
  second order, A:196
numerical error, A:3
  global error, A:31
  in energy, A:47
  round-off, A:4
  truncation, A:4
numerical integration, A:304, see also Monte Carlo integration
  Gaussian, 107, A:306
    abscissa and weight, A:312
  multiple integral, A:307
  Simpson's rule, A:305
  trapezoid rule, A:304
Numerov's method, A:320, A:349
  first derivative, A:350
  logarithmic scale, 127
NumPy, A:10, A:21–A:26
  advanced indexing, A:23
    row or column swap, A:25, A:264, A:362
    with boolean array, A:24, A:138, A:142, A:410, A:427
    with integer array, A:23, A:265
  argmin, A:138
  array creation, A:21
  broadcasting, A:22, A:76, A:225, A:410, A:427
  concatenate, 43, A:138, A:266, A:289, A:354, A:361
  conversion to list, A:362
  copy, A:23
  data type, A:21, A:284
  diagonal, A:198, A:223, A:355, A:356
  dot product, A:103, A:266, A:355
  element insertion or deletion, A:22, A:198, A:223, A:265, A:271, A:356, A:360
  element-wise operations, 58, 60, A:22, A:25, A:138, A:209, A:212–A:214, A:225, A:229, A:268, A:277, A:311, A:410, A:420, A:427
  F2Py, A:11
  FFT, 86


    in 2D, A:295
  flatten, A:265
  gradient, A:142, A:263
  histogram, 189, A:420
  linspace, A:137, A:141
  matrix multiplication, A:355
  maximum element, 58
  meshgrid, A:141
  nearest difference, A:138
  outer method, A:138
  outer product, 58, A:311
  random distribution, A:363, A:380
  reshape, A:264–A:266
  row and column convention, A:142, A:263
  row and column insertion or deletion, 130, A:25, A:266
  shape, A:24
  slicing, 60, A:22, A:142, A:209, A:212–A:214, A:225, A:229, A:263, A:265, A:284, A:352
  sorting, A:266
  stacking, 60
    column, A:268, A:271, A:361
    depth, A:224, A:264
  summing arrays, A:224, A:301, A:325, A:410, A:420, A:427
  take, A:266
  transpose, A:263
  truth array, A:23, A:138, A:142, A:264, A:362
  universal functions (ufunc), A:10, A:25, A:26, A:141, A:325
  vector operations, A:75
  vectorizing functions, A:26, A:325, A:352
object-oriented programming, A:9, A:380, A:383
orbiting, 195, see also scattering
ordinary differential equation, A:27, see also Euler, leapfrog, Numerov, and Runge-Kutta methods
  implicit method, A:43
oscillation
  damping, A:185
  resonance, A:187
  RLC circuit, A:185
paramagnetic system, A:390, A:415, A:417
partial differential equation, A:184, see also Laplace, Schrödinger, and wave equations
particle transport, 160
  angular scattering, 182
  energy deposition, 161, 169
  energy-dependent mean free path, 183
  range distribution, 164, 168
partition function, A:391
  harmonic oscillator, A:392
phase transition, A:400
  Ising model, A:400
Ping-Pong, A:78
  spin effects, A:79
planetary motion, A:92–A:99
  open orbits, A:98
  properties, A:94
  simulation, A:93
  units, A:97
Pluto, see restricted three-body problem
Poincaré map, see chaos


Poisson distribution, 122, A:364, A:377
Poisson equation, A:228
power spectrum, A:168, A:189
precession of Mercury, A:99–A:107
  by other planets, A:104
  oscillations, A:104
  relativistic, A:100, A:103
  scaling law, A:105
probability density, A:281
profiling, 109, A:260, A:277
program profiling, 85
programs list, A:463
projectile motion, A:57
  linear drag, A:62
  quadratic drag, A:70
  visualizing, A:57
pseudospectral method, 78
Python, A:7
  2.7x vs. 3.xx compatibility, A:9
  assignment and type, A:8
  complex number, A:182, A:307
  conditional, A:8
  deep copy, 148
  eval function, A:137
  exception, A:360
  file I/O, A:9, A:137
  formatting string, A:136, A:145
  global variables, A:9, A:36, A:318
  IDLE, A:20
  indentation, A:8
  inline if, A:9, A:138, A:145
  input, A:8
  installation, A:17
  lambda function, A:325, A:361
  list, A:8
    append, A:8
    concatenate, A:267, A:384
    count, A:425
    delete element, 148
    nested, A:265
    slicing, A:23
    sorting, A:266
    vs. ndarray, A:23, A:384
  online help, A:10, A:21
  operator overloading, A:384
  pickle file handler, A:360
  profiling, 109
  random integer, A:384
  random number, A:362
  speed boost, A:11, A:183, A:216
  with-as statement, A:137
quantum chaos, 117–125
  chaoticity, 125
  energy level statistics, 121
    nearest neighbor spacing, 121
    histogram, 124
  scars, 125
  spectrum unfolding, 122
  stadium billiard, 117
quantum dot, A:336–A:344
  circle, 128, A:349
  degeneracy, A:342
  energy level distribution, 117
  hexagon, A:341
  isosceles right triangle, A:338
  stadium, 118
  triangle, A:346
  wave function, 119, 125, A:340, A:343
quantum mechanics, see Schrödinger equation
quantum quilt, A:302
quantum revival, A:295, A:297
  revival time, A:297


  semiclassical limit, A:298
quantum scattering, A:437
  T-matrix, 212
  amplitude, 200, 211, A:438, A:442
  atomic form factor, 214
  Born approximation, 201, 206, 209, 213, A:448, A:450
  Buckingham potential, A:455
  cross section, 212, A:442, A:444, A:449
  elastic, 245
  Fermi's golden rule, 201
  Gaussian potential, A:455
  hard sphere, A:443
    shadow effect, A:446
  Hulthén potential, A:455
  inelastic, 211
  optical theorem, 238
  partial wave expansion, A:441–A:443
  phase shift, 206, 208, 209, A:439, A:442, A:447, A:448
  potential barrier, 79
  potential well, 83
  Ramsauer-Townsend effect, 205, A:455
  resonance, 84, A:440
  scattering length, 203, 242
  square spherical barrier, A:447, A:451
  square spherical well, A:451
  WKB approximation, 207, 209
  Yukawa potential, 166, 201, 209, A:446
quantum transitions, 86
  amplitudes, 88
  dipole allowed, 98, 216
  dipole forbidden, 98, 216
  in hydrogen, 105
  in the SHO, 104
  laser driven, 91
  multiphoton transition, 98
  occupation probability, 88, 96, A:290
  Rabi flopping, A:291
  two-photon transition, 98
  two-state system, A:289
Rabi flopping, A:291
  in hydrogen, 106
  Rabi frequency, A:292
  rotating wave approximation, A:292
radial basis function, A:247
  collocation method, 71, A:248
  differentiation operator, A:249, A:250
  Gaussian and multiquadric RBF, A:247
  scattered data interpolation, A:247
  shape parameter, A:247, A:250
radial velocity method, see exoplanets
radial velocity transformation, A:133
random number, A:362
  correlation and moment tests, A:362
  integer, A:384
  nonuniform distribution, A:374, A:377
    Lorentzian, A:374
    rejection method, A:378
    transform method, A:377
  seed, A:362
  uniform range, A:374
random walk, A:364


  binomial distribution, A:365
  in 2D, A:366
recursion
  stability, 238, A:6, A:207
reflection coefficient, 81, 83
restricted three-body problem, A:116–A:121
  Earth-Moon system, A:119
  Hilda asteroids, A:123, A:130
  Lagrange points, A:119
  orbital resonance, A:123
  Pluto libration, 14, A:123
  Pluto's motion, A:123
  Sun-Jupiter system, A:121
  Sun-Neptune system, A:123
  units, A:117
Reynolds number, A:60
root finding, A:64
  bisection, A:64, A:87, A:318
  false position, 6
  Newton's method, A:65, A:89, A:176
  SciPy equation solver, A:64, A:83, A:344
  secant method, 5
rotating frame, A:130
rotation matrix, A:132
round-off error, see numerical error
Runge-Kutta methods, A:32
  characteristic time, A:42
  non-vectorized, A:42
  SciPy wrapper, A:55
  step size control, A:42
Runge-Kutta-Fehlberg method, A:42, A:350
Runge-Lenz vector, A:100, A:103
Rutherford scattering, 66, A:431
  cross section, A:432
Rydberg states, 236, A:334
scattering, A:428, see also quantum scattering
  cross section, 253, A:430
  deflection function, 208, A:428, A:456
    Yukawa potential, A:436
  glory, A:436, A:449
  impact parameter, 197, A:428
  orbiting, 195, 240
  plum potential, A:433, A:460
  rainbow, A:433, A:435, A:436, A:449, A:450, A:452
  Snell's law, A:450, A:453
  square spherical barrier, A:453
  square spherical well, A:452
  Yukawa potential, A:436
scattering length, 203, see also quantum scattering
Schrödinger equation, time dependent, A:272, see also wavepacket
  average position, A:279, A:298
  boundary effects, A:286
  coupled channel method, 86
  direct simulation, A:274
  periodic boundary condition, A:277
  split evolution operator, 78
  split-operator method, A:283
Schrödinger equation, time independent, A:313, see also bound states
  animated eigenstates, A:314
  basis expansion method, A:327
  discrete energies, A:212, A:315
  integral equation, 199
  matching condition, A:316


  pseudo-continuum states, A:322, A:329
  shooting methods, A:315
Schrödinger's cat, A:293
SciPy, A:10
  Airy function and its zeros, A:351, A:356
  band eigenvalue solver, A:349, A:355, A:360
  band matrix solver solve_banded, A:283
  Bessel function
    spherical, A:447
    zeros, 128
  BLAS and LAPACK, A:194
  combination, A:416, A:417
  eigenvalue solver eigh, A:194, A:223, A:355
  elliptic integral, A:423
  gamma function, A:415
  Hermite polynomial, 131, A:356
  integration, 243, A:312
  Lambert W function, A:68
  least square fitting, A:110, A:137
  Legendre polynomial, A:461
  linear system solver solve, A:197, A:198, A:266
  ODE solvers, A:27, A:42, A:55
  orthogonal polynomial, A:312
  root solver fsolve, A:64, A:83, A:344
  sparse eigenvalue solver eigsh, A:322, A:325, A:355–A:356, A:360
  Weave, A:11
self-consistent methods, 48, A:228
  relaxation error, A:232
shooting methods, A:80, A:315, A:318
simple harmonic oscillator
  animation, A:184
  classical, A:45
  quantum mechanical, A:278
Simpson's rule, A:280
Snell's law, see scattering
snub cube, 69
soccer, A:79
space discretized leapfrog method, A:274, A:276
  normalization error, A:281
  stability, A:276
sparse matrix, A:243
special function
  Airy function, 130, A:330, A:346, A:350
    zeros, A:351
  Bessel function, 128, A:342
    modified spherical, A:451
    recurrence, A:457
    spherical, 251, A:441, A:447, A:457
    zeros, 128
  elliptic integral, A:402, A:423
  Hermite polynomial, 131, A:329
  Laguerre polynomial, 127
  Lambert W function, A:326
  Legendre polynomial, 108, A:306
    in plane waves, A:441
spectral staircase, 115
spherical harmonics, 105, A:332
  addition, 251
  in plane waves, 251
  orthogonality, 251
spinning balls, A:72
  spin parameter, A:73
split evolution operator, 75, 78


split-operator method, first order, A:282
  error, A:286
stiff differential equation, A:43
Stirling's approximation, A:415
Stokes' law, A:59
stopping power, 162
symplectic methods, A:47, see also leapfrog method
  first order, A:48
SymPy, A:11, A:17, A:20, A:217
  factorization, A:175
  integrate, A:49, A:450
  Lambert W function, A:68
  physics
    hydrogen atom, 106
    quantum oscillator, A:329
  series, A:126
  solve, A:84, A:344, A:345
table tennis, A:78, see also Ping-Pong
temperature
  Curie, A:402
  Einstein solid, A:390, A:392
  negative, A:390, A:418
thermodynamics, A:382
  second law, A:388
  third law, A:397
Thomson model, 65
three-body problem, A:111–A:116
  choreography, A:114
  dynamics, 228, see also classical trajectory Monte Carlo
  Euler's collinear motion, A:111
  Euler's quintic equation, A:113
  planar motion, A:111
traffic flow, 139
  fundamental diagram, 140
  hybrid model, 146
transmission coefficient, 81, 83, 103
tridiagonal matrix, see band matrix
Trojan asteroids, A:121
truncation error, see numerical error
tunneling, 82, 234
turbulent flow, A:60
unitarity, 76
Verlet, see leapfrog method
vibration, A:188
  normal modes, A:190
  string, A:204
  triatomic molecules, A:188
virial theorem, A:334, A:349
viscosity, A:59–A:60
  air, A:83
visualization, A:2, A:27, A:75, A:253
von Neumann stability, A:207
VPython, A:12
  arrow, A:90, A:136, A:140, A:269
  axis flip, A:271
  box, A:12, A:28, A:90
  camera angle, A:13
  curve, A:90
  faces, A:258
  helix, A:185
  in GlowScript, A:20
  IVisual, A:12
  key detection, A:90, A:127
  label, A:90, A:126, A:136, A:262, A:311
    box, A:271
  light source, A:94


  make_trail, A:94, A:261
  making movies, A:20
  opacity, 67, A:90
  rate requirement, A:12
  retain, A:58
  ring, A:143
  rotate, A:13, A:90
  sphere, A:28
  vector operations, A:75, A:136, A:268, A:270
  VIDLE, A:20
VPython modules (VPM), 52, 61–64, A:9, A:94, A:209, A:224, A:225, A:258, A:308, A:311
wave function, A:272
  laser driven, 96
  momentum space, A:287
  normalization, A:281
    conservation, A:276
  plane wave, A:273
  scarring, 125
  scattering, A:444
wavepacket, A:278, A:285
  broadening, A:285
  in 2D, A:294
  momentum distribution, A:287, A:295
  optical diffraction, A:296
  refocusing, A:278
  scattering from a barrier, 79
  scattering from a well, 85
  self interference, A:286
waves, A:204–A:210
  on a membrane, A:212
  standing, 85, A:206, A:209, A:210
  traveling, A:205, A:209, A:220
  wave equation, A:205, A:212
Weave, 110, 193, A:11, A:276, A:427
Weyl formula, 116
Wigner distribution, 122
WKB approximation, 207, see also quantum scattering
Yukawa potential, 201, A:348, A:436, A:446