
Page 1

Very Large Scale Computing In Accelerator Physics

Robert D. Ryne, Los Alamos National Laboratory

Page 2

…with contributions from members of:
– the Grand Challenge in Computational Accelerator Physics
– the Advanced Computing for 21st Century Accelerator Science and Technology project

Page 3

Outline

– Importance of Accelerators
– Future of Accelerators
– Importance of Accelerator Simulation
– Past Accomplishments: Grand Challenge in Computational Accelerator Physics
  – electromagnetics
  – beam dynamics
  – applications beyond accelerator physics
– Future Plans: Advanced Computing for 21st Century Accelerator S&T

Page 4

Accelerators have enabled some of the greatest discoveries of the 20th century

“Extraordinary tools for extraordinary science”
– high energy physics
– nuclear physics
– materials science
– biological science

Page 5

Accelerator Technology Benefits Science, Technology, and Society

– electron microscopy
– beam lithography
– ion implantation
– accelerator mass spectrometry
– medical isotope production
– medical irradiation therapy

Page 6

Accelerators have been proposed to address issues of international importance

– Accelerator transmutation of waste
– Accelerator production of tritium
– Accelerators for proton radiography
– Accelerator-driven energy production

Accelerators are key tools for solving problems related to energy, national security, and quality of the environment

Page 7

Future of Accelerators: Two Questions

What will be the next major machine beyond the LHC?
– linear collider
– neutrino factory / muon collider
– rare isotope accelerator
– 4th generation light source

Can we develop a new path to the high-energy frontier?
– Plasma/laser systems may hold the key

Page 8

Example: Comparison of the Stanford Linear Collider and the Next Linear Collider

Page 9

Possible Layout of a Neutrino Factory

Page 10

Importance of Accelerator Simulation

Next generation of accelerators will involve:
– higher intensity, higher energy
– greater complexity
– increased collective effects

Large-scale simulations essential for design decisions & feasibility studies:
– evaluate/reduce risk, reduce cost, optimize performance
– accelerator science and technology advancement

Page 11

Cost Impacts

Without large-scale simulation: cost escalation
– SSC: a 1 cm increase in aperture, due to lack of confidence in the design, resulted in a $1B cost increase

With large-scale simulation: cost savings
– NLC: large-scale electromagnetic simulations have led to a $100M cost reduction

Page 12

DOE Grand Challenge In Computational Accelerator Physics (1997-2000)

Goal - “to develop a new generation of accelerator modeling tools on High Performance Computing (HPC) platforms and to apply them to present and future accelerator applications of national importance.”

Beam Dynamics: LANL (S. Habib, J. Qiang, R. Ryne), UCLA (V. Decyk)

Electromagnetics: SLAC (N. Folwell, Z. Li, V. Ivanov, K. Ko, J. Malone, B. McCandless, C.-K. Ng, R. Richardson, G. Schussman, M. Wolf), Stanford/SCCM (T. Afzal, B. Chan, G. Golub, W. Mi, Y. Sun, R. Yu)

Computer Science & Computing Resources - NERSC & ACL

Page 13

New parallel applications codes have been applied to several major accelerator projects

Main deliverables: 4 parallel applications codes

Electromagnetics:
– 3D parallel eigenmode code, Omega3P
– 3D parallel time-domain EM code, Tau3P

Beam Dynamics:
– 3D parallel Poisson/Vlasov code, IMPACT
– 3D parallel Fokker/Planck code, LANGEVIN3D

Applied to SNS, NLC, PEP-II, APT, ALS, CERN/SPL

New capability has enabled simulations 3-4 orders of magnitude greater than previously possible

Page 14

Parallel Electromagnetic Field Solvers: Features

– C++ implementation w/ MPI
– Reuse of existing parallel libraries (ParMetis, AZTEC)
– Unstructured grids for conformal meshes
– New solvers for fast convergence and scalability
– Adaptive refinement to improve accuracy & performance
– Omega3P: 3D finite element w/ linear & quadratic basis functions
– Tau3P: unstructured Yee grid
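To make the “Yee grid” idea concrete, below is a minimal 1D leapfrog (Yee-style) field update in normalized units (c = 1, unit Courant number). This is only the textbook structured-grid scheme, shown for illustration; Tau3P itself works on unstructured 3D meshes, and the grid size, source, and probe point here are arbitrary choices.

```cpp
#include <cstdio>
#include <vector>

// Minimal 1D Yee-style leapfrog: E and H live on staggered grid points and
// are advanced alternately in time (normalized units, dt/dx = 1).
int main() {
    const int nx = 200;
    const int nsteps = 400;
    const double courant = 1.0;  // dt / dx in normalized units
    std::vector<double> ez(nx, 0.0), hy(nx, 0.0);

    for (int n = 0; n < nsteps; ++n) {
        // Advance H using the spatial difference of E.
        for (int i = 0; i < nx - 1; ++i)
            hy[i] += courant * (ez[i + 1] - ez[i]);
        // Advance E using the spatial difference of H.
        for (int i = 1; i < nx; ++i)
            ez[i] += courant * (hy[i] - hy[i - 1]);
        // Simple additive source for the first few steps.
        if (n < 30) ez[nx / 2] += 0.1;
    }
    std::printf("E_z at probe point: %g\n", ez[3 * nx / 4]);
    return 0;
}
```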

Page 15

Why is Large-Scale Modeling Needed? Example: NLC Rounded Damped Detuned Structure (RDDS) Design

– highly three-dimensional structure
– detuning + damping manifold for wakefield suppression
– requires 0.01% accuracy in accelerating frequency to maintain efficiency
– simulation mesh size close to fabrication tolerance (order of microns)
– available 3D codes on desktop computers cannot deliver required accuracy, resolution

Page 16

NLC - RDDS Cell Design (Omega3P)

[Plot: accelerating-mode frequency (GHz) vs. mesh resolution, for meshes of 0.38 M, 1.8 M, and 2.9 M degrees of freedom; the computed frequencies span roughly 11.424 to 11.425 GHz, i.e. about 1 MHz.]

Frequency accuracy to 1 part in 10,000 is achieved.

Page 17

NLC - RDDS 6 Cell Section (Omega3P)

[Figure: computed frequency deviations annotated on the 6-cell section, ranging from -2.96 MHz to +13.39 MHz, with most values under about 1 MHz.]

Page 18

NLC - RDDS Output End (Tau3P)

Page 19

PEP II, SNS, and APT Cavity Design (Omega3P)

Page 20

Omega3P - Mesh Refinement
Peak Wall Loss in PEP-II Waveguide-Damped RF Cavity

refined mesh size:    5 mm            2.5 mm          1.5 mm
# elements:           23390           43555           106699
degrees of freedom:   142914          262162          642759
peak power density:   1.2811 MW/m2    1.3909 MW/m2    1.3959 MW/m2

Page 21

Parallel Beam Dynamics Codes: Features

– split-operator-based 3D parallel particle-in-cell (a minimal deposition sketch appears at the end of this slide)
– canonical variables
– variety of implementations (F90/MPI, C++, POOMA, HPF)
– particle manager, field manager, dynamic load balancing
– 6 types of boundary conditions for field solvers: open/circular/rectangular transverse; open/periodic longitudinal
– reference trajectory + transfer maps computed “on the fly”
– philosophy: do not take tiny steps to push particles; do take tiny steps to compute maps, then push particles w/ maps

LANGEVIN3D: self-consistent damping/diffusion coefficients
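One small ingredient of such a particle-in-cell step (referenced in the first bullet above) is charge deposition onto the grid. Below is a minimal 1D cloud-in-cell (linear-weighting) deposition sketch; positions are in grid units, charges are set to 1, and the function name and layout are illustrative rather than taken from IMPACT, which does this in 3D with domain decomposition and the particle/field managers listed above.

```cpp
#include <vector>

// Deposit unit-charge macroparticles onto a 1D grid with linear
// (cloud-in-cell) weighting.  Positions are in grid units: a particle at
// x = 3.25 puts 75% of its charge on node 3 and 25% on node 4.
std::vector<double> deposit_cic(const std::vector<double>& x, int nnodes) {
    std::vector<double> rho(nnodes, 0.0);
    for (double xi : x) {
        const int    i = static_cast<int>(xi);  // index of the node to the left
        const double w = xi - i;                // fractional distance past that node
        if (i >= 0 && i + 1 < nnodes) {
            rho[i]     += 1.0 - w;
            rho[i + 1] += w;
        }
    }
    return rho;
}
```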

Page 22

Why is Large-Scale Modeling Needed? Example: Modeling Beam Halo in High Intensity Linacs

Future high-intensity machines will have to operate with ultra-low losses

A major source of loss: low density, large amplitude halo

Large scale simulations (~100M particles) needed to predict halo

Maximum beam size does not converge in small-scale PC simulations (up to 1M particles)

[Plot: beam size (cm) vs. energy Ws (MeV) along the SNS CCDTL/CCL, with errors and no mismatch; curves show the RMS size and the maximum extent for runs with 1,000, 10,000, 100,000, and 1,000,000 particles.]

Page 23

Mismatch-Induced Beam Halo

Matched beam: x-y cross-section
Mismatched beam: x-y cross-section

Page 24

Vlasov Code or PIC code?

Direct Vlasov:
– bad: very large memory
– bad: subgrid scale effects
– good: no sampling noise
– good: no collisionality

Particle-based:
– good: low memory
– good: subgrid resolution OK
– bad: statistical fluctuations
– bad: numerical collisionality

Page 25

How to turn any magnetic optics code into a tracking code with space charge

Split-Operator Methods

Full Hamiltonian: H = Hext + Hsc
– Magnetic optics: H = Hext, map M = Mext
– Multi-particle simulation (space charge): H = Hsc, map M = Msc

Second-order splitting:
M(t) = Mext(t/2) Msc(t) Mext(t/2) + O(t^3)

(arbitrary order possible via Yoshida)
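As a concrete reading of the composition above, here is a minimal C++ sketch of one second-order split-operator step. The particle layout, the drift standing in for the external-field map, and the linear kick standing in for the space-charge solve are all illustrative placeholders, not IMPACT's actual data structures or solvers.

```cpp
#include <vector>

// Illustrative transverse phase-space coordinates of one macroparticle.
struct Particle { double x, px, y, py; };

// Stand-in for the external-field map M_ext(h): here just a drift.  In a
// map-based code this map is built with small internal steps, then applied
// to every particle in one shot.
void apply_external_map(std::vector<Particle>& bunch, double h) {
    for (auto& p : bunch) { p.x += h * p.px; p.y += h * p.py; }
}

// Stand-in for the space-charge map M_sc(h): here a linear kick of strength k.
// The real code deposits charge, solves the Poisson equation on a grid, and
// kicks each particle with the resulting field.
void space_charge_kick(std::vector<Particle>& bunch, double h, double k) {
    for (auto& p : bunch) { p.px += h * k * p.x; p.py += h * k * p.y; }
}

// One second-order step:  M(h) = M_ext(h/2) M_sc(h) M_ext(h/2) + O(h^3).
// Arbitrary order is possible by composing such steps (Yoshida).
void split_operator_step(std::vector<Particle>& bunch, double h, double k) {
    apply_external_map(bunch, 0.5 * h);
    space_charge_kick(bunch, h, k);
    apply_external_map(bunch, 0.5 * h);
}

int main() {
    std::vector<Particle> bunch(1000, Particle{1e-3, 0.0, -1e-3, 0.0});
    for (int step = 0; step < 100; ++step)
        split_operator_step(bunch, 0.01, -2.0);  // step size and strength are arbitrary
    return 0;
}
```

Turning space charge on or off then amounts to including or skipping the middle kick, which is the “flip a switch” idea mentioned later for circular machines.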

Page 26

Development of IMPACT has Enabled the Largest, Most Detailed Linac Simulations ever Performed

– Model of SNS linac used 400 accelerating structures
– Simulations run w/ up to 800M particles on a 512³ grid
– Approaching real-world # of particles (900M for SNS)
– 100M particle runs now routine (5-10 hrs on 256 PEs)
– Analogous 1M particle simulation using a legacy 2D code on a PC requires a weekend
– 3 order-of-magnitude increase in simulation capability: 100x larger simulations performed in 1/10 the time

Page 27

Comparison: Old vs. New Capability

– 1980s: 10K particle, 2D serial simulations typical
– Early 1990s: 10K-100K particle, 2D serial simulations typical
– 2000: 100M particle runs routine (5-10 hrs on 256 PEs); more realistic treatment of beamline elements

SNS linac: 500M particles. LEDA halo expt: 100M particles.

Page 28

Intense Beams in Circular Accelerators

Previous work emphasized high intensity linear accelerators

New work treats intense beams in bending magnets

Issue: vast majority of accelerator codes use arc length (“z” or “s”) as the independent variable.

Simulation of intense beams requires solving the Poisson equation ∇²φ = -ρ/ε₀ at fixed time (a minimal solver sketch appears at the end of this slide)

The split-operator approach, treated in both linear and circular systems, will soon make it possible to “flip a switch” to turn space charge on/off in the major accelerator codes

[Figure: x-z plot based on data from an s-code, plotted at 8 different times]
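For the fixed-time Poisson solve referenced above, here is a minimal 2D Jacobi-relaxation sketch with φ = 0 on the boundary and ε₀ = 1. Production space-charge solvers work in 3D with the boundary conditions listed earlier and use much faster FFT-based or multigrid methods, so this only shows the structure of the calculation; the function name and arguments are illustrative.

```cpp
#include <vector>

// Jacobi relaxation for the 2D Poisson equation  laplacian(phi) = -rho
// on an n x n grid with spacing h and phi = 0 on the boundary (eps0 = 1).
std::vector<double> solve_poisson_jacobi(const std::vector<double>& rho,
                                         int n, double h, int iterations) {
    std::vector<double> phi(n * n, 0.0), next(n * n, 0.0);
    for (int it = 0; it < iterations; ++it) {
        for (int j = 1; j < n - 1; ++j) {
            for (int i = 1; i < n - 1; ++i) {
                const int k = j * n + i;
                // Discrete 5-point Laplacian, rearranged for the new phi value.
                next[k] = 0.25 * (phi[k - 1] + phi[k + 1] +
                                  phi[k - n] + phi[k + n] +
                                  h * h * rho[k]);
            }
        }
        phi.swap(next);
    }
    return phi;
}
```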

Page 29

Collaboration/impact beyond accelerator physics

– Modeling collisions in plasmas: new Fokker/Planck code
– Modeling astrophysical systems: starting w/ IMPACT, developing an astrophysical PIC code; also a testbed for testing scripting ideas
– Modeling stochastic dynamical systems: new leap-frog integrator for systems w/ multiplicative noise
– Simulations requiring solution of large eigensystems: new eigensolver developed by SLAC/NMG & Stanford SCCM
– Modeling quantum systems: spectral and DeRaedt-style codes to solve the Schrodinger, density matrix, and Wigner-function equations

Page 30

First-Ever Self-Consistent Fokker/Planck

Self-consistent Langevin-Fokker/Planck simulation requires the analog of thousands of space-charge calculations per time step. “…clearly such calculations are impossible…” NOT! Demonstrated, thanks to modern parallel machines and intelligent algorithms. (A sketch of the basic Langevin particle update follows below.)

[Figures: diffusion coefficients; friction coefficient / velocity]
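To show what the Langevin update looks like at the single-particle level, here is a minimal Euler-Maruyama sketch of one momentum component with given friction and diffusion coefficients. In the self-consistent calculation those coefficients are recomputed from the evolving particle distribution every step (the expensive, space-charge-like part), which is not shown; the function name and arguments are illustrative.

```cpp
#include <cmath>
#include <random>

// One Euler-Maruyama Langevin step for a single momentum component:
//   p_new = p + F_ext*dt - gamma*p*dt + sqrt(2*D*dt) * N(0,1)
// gamma (friction) and D (diffusion) are taken as given here; the
// self-consistent code recomputes them from the particle distribution.
double langevin_step(double p, double f_ext, double gamma, double D,
                     double dt, std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, 1.0);
    return p + f_ext * dt - gamma * p * dt + std::sqrt(2.0 * D * dt) * gauss(rng);
}
```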

Page 31

Schrodinger Solver: Two Approaches

The time-dependent Schrödinger equation:

  iħ ∂ψ/∂t = -(ħ²/2m) ∂²ψ/∂x² + V(x) ψ

Spectral (split-operator):

  ψ(x, Δt) ≈ e^(-iVΔt/2ħ) e^(-i p̂²Δt/2mħ) e^(-iVΔt/2ħ) ψ(x, 0)

– kinetic factor applied in Fourier space: FFTs; global communication

Field Theoretic / Discrete (grid values ψ_j, j = 1,…,N, as the dynamical variables):

  iħ dψ_j/dt = -(ħ²/2m) (ψ_{j+1} - 2ψ_j + ψ_{j-1})/(Δx)² + V_j ψ_j

– Nearest-neighbor communication
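A minimal sketch of the spectral (split-operator) approach in 1D with ħ = m = 1 follows. The naive O(N²) DFT stands in for the parallel FFTs that give this method its global-communication pattern, and the grid, potential, and step size are left to the caller; none of this is the actual production code.

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cplx = std::complex<double>;
static const double PI = 3.14159265358979323846;

// Naive O(N^2) DFT as a stand-in for a parallel FFT; sign = -1 forward, +1 inverse.
std::vector<cplx> dft(const std::vector<cplx>& in, int sign) {
    const int n = static_cast<int>(in.size());
    std::vector<cplx> out(n);
    for (int k = 0; k < n; ++k) {
        cplx sum(0.0, 0.0);
        for (int j = 0; j < n; ++j)
            sum += in[j] * std::exp(cplx(0.0, sign * 2.0 * PI * k * j / n));
        out[k] = (sign > 0) ? sum / static_cast<double>(n) : sum;
    }
    return out;
}

// One split-operator step with hbar = m = 1:
//   psi <- exp(-i V dt/2) F^{-1} exp(-i k^2 dt/2) F exp(-i V dt/2) psi
void split_step(std::vector<cplx>& psi, const std::vector<double>& V,
                double dx, double dt) {
    const int n = static_cast<int>(psi.size());
    // Half kick from the potential.
    for (int j = 0; j < n; ++j) psi[j] *= std::exp(cplx(0.0, -0.5 * V[j] * dt));
    // Kinetic term is diagonal in Fourier space.
    std::vector<cplx> psik = dft(psi, -1);
    for (int k = 0; k < n; ++k) {
        const int    kk = (k <= n / 2) ? k : k - n;   // signed wavenumber index
        const double kx = 2.0 * PI * kk / (n * dx);
        psik[k] *= std::exp(cplx(0.0, -0.5 * kx * kx * dt));
    }
    psi = dft(psik, +1);
    // Second half kick from the potential.
    for (int j = 0; j < n; ++j) psi[j] *= std::exp(cplx(0.0, -0.5 * V[j] * dt));
}
```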

Page 32

Conclusion: “Advanced Computing for 21st Century Accelerator Sci. & Tech.”

– Builds on foundation laid by the Accelerator Grand Challenge
– Larger collaboration: presently LANL, SLAC, FNAL, LBNL, BNL, JLab, Stanford, UCLA
– Project Goal: develop a comprehensive, coherent accelerator simulation environment
– Focus Areas: Beam Systems Simulation, Electromagnetic Systems Simulation, Beam/Electromagnetic Systems Integration
– View toward near-term impact on: NLC, neutrino factory (driver, muon cooling), laser/plasma accelerators

Page 33

Acknowledgement

Work supported by the DOE Office of Science:
– Office of Advanced Scientific Computing Research, Division of Mathematical, Information, and Computational Sciences
– Office of High Energy and Nuclear Physics, Division of High Energy Physics
– Los Alamos Accelerator Code Group