
  • EPAC08 - Genova
    Participants > 1300
    The last EPAC → IPAC (Kyoto IPAC10); next PAC09 in Vancouver
    Three-year cycle: Asia, Europe, North America + PAC North America in odd years (2011: Valencia & NY)
    3 ILC talks:
    Akira Yamamoto: Co-ordinated Global R&D Effort for the ILC Linac Technology
    James Clarke: Design of the Positron Source for the ILC
    Toshiaki Tauchi: The ILC Beam Delivery System Design and R&D Programme

  • Advanced Computing Tools and Models for Accelerator Physics

    We cannot foresee what this kind of creativity in physics will bring.
    Robert D. Ryne, Lawrence Berkeley National Laboratory

    June 26, 2008, Genoa, Italy

  • Overview of High Performance Computing for Accelerator Physics
    SciDAC (2001-06) (DOE program: Scientific Discovery through Advanced Computing)
    AST (Accelerator Science and Technology)
    SciDAC2 (2007-11)
    COMPASS: The Community Petascale Project for Accelerator Science and Simulation ("petascale" being the new buzzword)
    Results shown mainly from the first SciDAC program
    National Energy Research Scientific Computing Center (NERSC), Berkeley, e.g. Franklin, Seaborg (decommissioned Jan 08, 6080 CPUs)
    ATLAS cluster at LLNL (~1000-node Linux cluster)

  • Two weeks ago, petaflop announcement:

    IBM Roadrunner: 100 million times the performance of the computers at the time of the 1971 High Energy Accelerator Conference!
    6480 AMD dual-core Opterons + 1 Cell processor (as in the PlayStation 3) per Opteron core
    Los Alamos National Laboratory
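
    As a rough sanity check of the quoted factor (an editorial note, not from the slide): dividing Roadrunner's roughly petaflop Linpack performance by 10^8 gives

        \[
          \frac{10^{15}\ \mathrm{flop/s}}{10^{8}} \approx 10^{7}\ \mathrm{flop/s} = 10\ \mathrm{Mflop/s},
        \]

    which is indeed the order of magnitude of an early-1970s mainframe such as the CDC 7600.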

  • GPUs gaining popularity
    NVIDIA Tesla C1060 PCIe card (Graphics Processing Unit), presented June 2008
    1 teraflop, 4 GB, 1.4 billion transistors, 240 cores, $1700
    It's called TESLA and runs at 1.3 GHz
    Seaborg
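
    For context (editorial arithmetic, not from the slide): the quoted ~1 teraflop is the single-precision peak, obtained from 240 cores at 1.3 GHz issuing up to 3 floating-point operations per cycle (multiply-add plus an extra multiply):

        \[
          240 \times 1.3\times10^{9}\ \mathrm{Hz} \times 3 \approx 0.94\times10^{12}\ \mathrm{flop/s}.
        \]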

  • What to do with all that computing power?
    Beam dynamics, multiparticle interactions, beams in plasma
    Component design (e.g. cavities)
    Codes, e.g. IMPACT, BeamBeam3D (Tevatron beam-beam), T3P/Omega3P (time- and frequency-domain solvers)
    HPC, parallelisation
    A collaborative effort allows codes to be combined and interfaces to be defined (see the sketch below)
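
    To make the "combine codes and define interfaces" point concrete, here is a minimal sketch (an editorial illustration; the class and function names are hypothetical and not taken from any of the codes named above) of a shared particle-bunch container that lets independently written tracking stages be chained:

        import numpy as np

        class Bunch:
            """Hypothetical common interface: 6D phase-space coordinates
            (x, x', y, y', z, delta) stored as an (N, 6) array."""
            def __init__(self, coords):
                self.coords = np.asarray(coords, dtype=float)

        def drift(bunch, length):
            """Toy 'code' 1: linear drift of given length (metres)."""
            c = bunch.coords
            c[:, 0] += length * c[:, 1]   # x += L * x'
            c[:, 2] += length * c[:, 3]   # y += L * y'
            return bunch

        def thin_quad(bunch, focal_length):
            """Toy 'code' 2: thin quadrupole kick, focusing in x."""
            c = bunch.coords
            c[:, 1] -= c[:, 0] / focal_length
            c[:, 3] += c[:, 2] / focal_length
            return bunch

        # Because both stages accept and return the same Bunch object,
        # they can be composed freely - the point of a defined interface.
        rng = np.random.default_rng(1)
        bunch = Bunch(rng.normal(scale=1e-3, size=(10_000, 6)))
        for stage in (lambda b: drift(b, 2.0),
                      lambda b: thin_quad(b, 5.0),
                      lambda b: drift(b, 2.0)):
            bunch = stage(bunch)
        print("rms x after line:", bunch.coords[:, 0].std())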

  • Evolution of accelerator codes (timeline chart, 1970-2000; partial list only, many codes not shown)
    Categories: single particle optics; 1D, 2D collective; 3D collective self-consistent; multi-physics
    Codes shown include: Transport, MaryLie, Dragt-Finn, MAD, PARMILA, 2D space charge, PARMELA, PARMTEQ, IMPACT-Z, IMPACT-T, ML/I, Synergia, ORBIT, BeamBeam3D, freq maps, MXYZPTLK, COSY-INF, rms eqns, normal forms, symplectic integrators, DA, GCPIC, 3D space charge, WARP, SIMPSONS, IMPACT, MAD-X/PTC
    Parallelization begins
    Examples ->

  • Modeling FERMI@Elettra Linac with IMPACT-Z using 1 billion macroparticles
    100 MeV → 1.2 GeV
    Ji Qiang, LBNL

  • Accurate prediction of uncorrelated energy spread in a linac for a future light source
    Ji Qiang
    Final longitudinal phase space from IMPACT-Z simulation using 10M and 1B particles

  • Final Uncorrelated Energy Spread versus # of Macroparticles: 10M, 100M, 1B, 5B
    Ji Qiang, M. Venturini
    IMPACT-Z results; microbunching instability gain function
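
    Why the macroparticle count matters here (an editorial illustration, not the IMPACT-Z algorithm): with N macroparticles the numerical shot noise in the binned current profile scales like 1/sqrt(N), and that artificial noise can seed the microbunching instability and inflate the predicted energy spread. A minimal sketch of the scaling:

        import numpy as np

        # Shot-noise scaling with macroparticle count: sample a uniform bunch
        # with N macroparticles, bin the longitudinal density, and measure the
        # rms relative fluctuation, which falls off as ~1/sqrt(N per bin).
        rng = np.random.default_rng(0)
        n_bins = 512

        for n in (10**5, 10**6, 10**7):
            z = rng.uniform(0.0, 1.0, size=n)                  # longitudinal positions
            counts, _ = np.histogram(z, bins=n_bins, range=(0.0, 1.0))
            mean = n / n_bins                                  # expected particles per bin
            rel_noise = counts.std() / mean                    # ~ 1/sqrt(mean)
            print(f"N = {n:>9d}:  relative density noise = {rel_noise:.2e}")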

  • BeamBeam3D simulation and visualization of the beam-beam interaction at the Tevatron at 400 times the usual intensity
    Eric Stern et al., FNAL
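
    For scale (a standard textbook relation, not quoted on the slide): the linear beam-beam tune-shift parameter for round Gaussian beams grows linearly with the bunch population,

        \[
          \xi = \frac{N\, r_p\, \beta^{*}}{4\pi\, \gamma\, \sigma^{*2}} = \frac{N\, r_p}{4\pi\, \varepsilon_N},
        \]

    with N the bunch population, r_p the classical proton radius, beta* and sigma* the beta function and rms beam size at the IP, and epsilon_N the normalized emittance; a 400-fold intensity increase therefore pushes the simulation deep into the strongly nonlinear regime.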

  • Cavity simulations: 1.75 M quadratic elements, 10 M DOFs, 47 min per ns on 1024 Seaborg CPUs with 173 GB memory; conjugate gradient (CG) with an incomplete Cholesky preconditioner (see the sketch below)
    Simulations of chains of cavities, full cryomodule (sorry, could not get the movies): 1 hour CPU time, 1024 processors, 300 GB memory at NERSC
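
    A minimal sketch of the preconditioned conjugate-gradient iteration mentioned above (editorial illustration only: a diagonal Jacobi preconditioner stands in for the incomplete Cholesky factorization used on the real finite-element matrices, and a toy 1D Laplacian stands in for a cavity system matrix):

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
            """Solve A x = b for SPD A; M_inv(r) applies the preconditioner."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Toy SPD system (1D Laplacian) as a stand-in for a cavity FEM matrix.
        n = 200
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.random.default_rng(0).normal(size=n)
        jacobi = lambda r: r / np.diag(A)     # diagonal (Jacobi) preconditioner
        x = pcg(A, b, jacobi)
        print("residual:", np.linalg.norm(b - A @ x))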

  • Cavity Coupler Kicks (wakefield and RF): 6 posters
    Studies for ILC (main linac/RTML) and FLASH
    2 numerical calculations of the RF kick: M. Dohlus (MAFIA), V. Yakovlev (HFSS)
    MOPP042, N. Solyak et al. (Andrea Latina, PLACET); TUPP047, D. Kruecker et al. (MERLIN): results ~30% different
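
    How such a coupler kick typically enters a tracking code (an editorial sketch with made-up numbers; the actual PLACET/MERLIN implementations and the Dohlus/Yakovlev coefficients are not reproduced here): the coupler region is characterized by a small complex normalized kick k = V_transverse/V_accelerating, and each cavity applies a phase-dependent transverse deflection:

        import numpy as np

        # Apply an RF coupler kick to a bunch (hypothetical parameters).
        # V_perp = Re(k_x * exp(i*phi)) * V_cav; the deflection angle is the
        # transverse voltage divided by the (ultrarelativistic) beam energy.
        def coupler_kick(xp, e_beam_ev, v_cav_v, k_x, phi_rad):
            v_perp = np.real(k_x * np.exp(1j * phi_rad)) * v_cav_v   # volts
            return xp + v_perp / e_beam_ev                           # radians

        rng = np.random.default_rng(2)
        xp = rng.normal(scale=1e-6, size=100_000)        # initial x' [rad]
        xp = coupler_kick(xp,
                          e_beam_ev=5e9,                 # 5 GeV beam (assumed)
                          v_cav_v=25e6,                  # ~25 MV cavity voltage (assumed)
                          k_x=(6 + 4j) * 1e-6,           # hypothetical normalized kick
                          phi_rad=np.deg2rad(5.0))       # RF phase (assumed)
        print("mean x' kick [rad]:", xp.mean())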

  • Wakefields in periodic structures (vs. number of cavities), M. Dohlus et al., MOPP013
    402 CPUs, 7 days; 1 CPU without error estimate
    The discussions on wakefield kicks started at 20 V/nC; the effect becomes smaller
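
    To put the number in perspective (editorial arithmetic with assumed parameters, not from the slide): a transverse wake kick of 20 V/nC acting on a 1 nC bunch corresponds to a 20 V deflecting voltage, i.e. a kick angle of

        \[
          \Delta x' \approx \frac{q\, w_\perp}{E/e} = \frac{20\ \mathrm{V}}{5\times10^{9}\ \mathrm{V}} = 4\times10^{-9}\ \mathrm{rad}
        \]

    at an assumed 5 GeV beam energy; the slide's point is that newer calculations give even smaller values.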

  • FLASH: simulation vs. measurement, 0.6 nC
    BPM11DBC2, OTR screens
    Coupler wakefield calculations from I. Zagorodnov, M. Dohlus
    E. Prat et al., TUPP018 (ELEGANT)
