
Page 1

Computing and communication at LHC

- Event selection and readout
- Online networks and architectures
- Online event filter
- Technologies and trends

Page 2

The Large Hadron Collider (LHC)

Two superconducting magnet rings in the LEP tunnel.

[Figure: the CERN accelerator complex: PS, SPS and the LEP/LHC ring, with the experiments ALEPH, DELPHI, L3, OPAL (LEP) and ATLAS, CMS, ALICE, LHCb (LHC)]

Experiments at LHC:

- ATLAS: A Toroidal LHC ApparatuS (study of proton-proton collisions)
- CMS: Compact Muon Solenoid (study of proton-proton collisions)
- ALICE: A Large Ion Collider Experiment (study of ion-ion collisions)
- LHCb: study of CP violation in B-meson decays at the LHC collider

Page 3

The LHC challenges: Summary

• N(channels) ≈ O(10^7) → need a huge number of connections
• 20-25 interactions every 25 ns → need an information superhighway
• Calorimeter information should correspond to tracking information → need to synchronize detector elements to 25 ns
• In some cases the detector signal is > 25 ns → integrate more than one bunch crossing's worth of information
• In some cases the time of flight is > 25 ns → need to identify the bunch crossing
• Can store data at ≈ 100 Hz → need to reject most interactions
• It's on-line (cannot go back and recover events) → need to monitor the selection

Page 4

Measurement and event selection

- Event selection (recognize the nature of the physical process): identify particles (electrons, muons, quarks/jets) and their energies
- Event data measurement (x, y, z, E): sensor signal digitizers (pulse height, time delay, bit pattern), i.e. charge, time and pattern
- Synchronize all components with the machine collisions (40 MHz)
- Read and process at 40 MHz (new event data every 25 ns)

Page 5

Event selection: The trigger system

The trigger is a function of the event data and the apparatus, and of the physics channels and parameters of interest:

    T(event data, apparatus; physics channels, parameters) → ACCEPTED or REJECTED

Since the detector data are not all promptly available and the function is highly complex, T(...) is evaluated by successive approximations called TRIGGER LEVELS (possibly with zero dead time).

Mandate: "Look at (almost) all bunch crossings, select the most interesting ones, collect all detector information and store it for off-line analysis." P.S. For a reasonable amount of CHF.
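The successive-approximation idea can be illustrated with a short sketch (hypothetical Python, not experiment code; the event fields and thresholds are invented for illustration): each level applies a cheaper test first, and only the survivors reach the next, more expensive one.

```python
# Hypothetical sketch of a multi-level trigger decision T(event).
# All event fields and threshold values below are illustrative placeholders.

def level1(event):
    # coarse, prompt information only (calorimeter towers, muon segments)
    return event["max_tower_et"] > 20.0 or event["muon_pt"] > 6.0

def level2(event):
    # finer granularity, cleaner particle signatures
    return event["isolated_electron"] or event["dimuon_mass"] > 60.0

def level3(event):
    # selection on fully reconstructed quantities
    return event["reconstructed"] and event["physics_channel"] is not None

def T(event):
    """Return True (ACCEPTED) or False (REJECTED) by successive approximations."""
    for level in (level1, level2, level3):
        if not level(event):
            return False      # rejected as early (and as cheaply) as possible
    return True               # accepted: collect all detector data and store for off-line analysis
```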

Page 6

Trigger levels at LHC (the first second)

Collision rate 10^9 Hz; channel data sampling at 40 MHz (time scale of the figure: 10^-7 s to 10^0 s)

Level-1 selected events: 10^5 Hz. Particle identification (high-pT e, µ, jets, missing ET)
• Local pattern recognition
• Energy evaluation on prompt macro-granular information

Level-2 selected events: 10^3 Hz. Clean particle signature (Z, W, ...)
• Finer granularity, precise measurement
• Kinematics, effective-mass cuts and event topology
• Track reconstruction and detector matching

Level-3 events to tape: 10-100 Hz. Physics process identification
• Event reconstruction and analysis
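The rejection factor of each level follows directly from the rates quoted above; a quick arithmetic check (plain Python, no experiment-specific code):

```python
# Rejection factors implied by the quoted rates (orders of magnitude only).
rates_hz = {"collisions": 1e9, "Level-1": 1e5, "Level-2": 1e3, "to tape": 1e2}

stages = list(rates_hz.items())
for (name_in, r_in), (name_out, r_out) in zip(stages, stages[1:]):
    print(f"{name_in:10s} -> {name_out:7s}: rejection factor {r_in / r_out:.0e}")
# collisions -> Level-1: rejection factor 1e+04
# Level-1    -> Level-2: rejection factor 1e+02
# Level-2    -> to tape: rejection factor 1e+01
```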

Page 7

Particle identification

Use prompt data (calorimetry and muons) to identify: high-pT electrons, muons, jets and missing ET.

[Figure: detector cross-section with the signatures of µ, e, n, p, ν and γ, in φ and η]

• CALORIMETERs: cluster finding and energy deposition evaluation
• MUON system: segment and track finding

New data every 25 ns; decision latency ~ µs

Page 8

CMS Level-1: calorimeters and muons

Pattern recognition is much easier on the calorimeter and muon systems.

Compare with central tracking at L = 10^34 cm^-2 s^-1 (50 ns integration, ≈ 1000 tracks; the detector is ≈ 7 m long, a 12.5 cm slice is shown): algorithm complexity plus a huge amount of data.

Page 9

Level-1 trigger systems

Electromagnetic calorimeter trigger:
- 72 (φ) x 54 (η) x 2 = 7776 trigger towers, each 0.087 (φ) x 0.087 (η)
- E-H towers built from crystals of 0.017 (φ) x 0.017 (η); trigger tower = 5 x 5 crystals
- 1) Trigger Primitive Generator: fine-grain peak finding (3888 logic elements, 116640 crystal data)
- 2) Pixel Processors (3888 logic elements, 34992 pixel data)

Isolated electron algorithm (AND of the following conditions in the Pixel Processor):
- Et cut: ET + max(neighbor) > threshold
- Longitudinal cut: H/E < 0.05
- Neighbors' longitudinal cut: < 2 GeV
- Isolation cut: one of the neighbor sums < 1 GeV
- Fine-grain flag from the Trigger Primitive Generator (max of the strip sums)
→ ISOLATED ELECTRON

Muon trigger (pT = 3.5, 4.0, 4.5, 6.0 GeV): based on tracks in the external muon detectors that point to the interaction region
• Low-pT muon tracks don't point to the vertex (multiple scattering, magnetic deflection)
• Two detector layers: coincidence in a "road"
• Detectors: RPC (pattern recognition), Drift Tubes (DT, track segments), Cathode Strip Chambers (CSC)
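The calorimeter cuts listed above combine into a single AND; the following sketch shows that combination (illustrative only: the ET threshold and the exact neighbor definitions are placeholders, while the 0.05, 2 GeV and 1 GeV values are the ones quoted on the slide).

```python
def isolated_electron(et_central, et_max_neighbor, h_over_e,
                      neighbor_had_et, corner_sums, fine_grain_ok,
                      et_threshold=20.0):     # threshold value is a placeholder
    """AND of the Level-1 isolated-electron conditions (illustrative sketch)."""
    et_cut           = (et_central + et_max_neighbor) > et_threshold  # Et cut
    longitudinal_cut = h_over_e < 0.05                                # H/E < 0.05
    neighbors_cut    = neighbor_had_et < 2.0                          # neighbors' longitudinal cut, GeV
    isolation_cut    = min(corner_sums) < 1.0                         # one quiet neighbor sum < 1 GeV
    return (et_cut and longitudinal_cut and neighbors_cut
            and isolation_cut and fine_grain_ok)
```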

Page 10

Level-1 trigger summary

• Time needed for a decision: tdec ≈ 2-3 µs; bunch-crossing time: tevt ≈ 25 ns
• Need pipelines to hold the data, and fast response (e.g. dedicated detectors)
• Backgrounds are huge; rejection factor ≈ 10,000
• Algorithms run on local, coarse data: only calorimeter & muon information, special-purpose hardware (ASICs, etc.)
• Rates are a steep function of the thresholds; ultimately, this determines the physics
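Since a decision takes ≈ 2-3 µs while new data arrive every 25 ns, each channel needs a pipeline roughly latency/25 ns cells deep (≈ 100-120 cells). A minimal sketch of such a front-end pipeline, assuming a 3 µs latency:

```python
from collections import deque

BX_NS = 25                       # bunch-crossing period
LATENCY_NS = 3000                # assumed Level-1 latency (~3 µs)
DEPTH = LATENCY_NS // BX_NS      # 120 pipeline cells

class FrontEndPipeline:
    """Fixed-depth pipeline: samples wait here until the Level-1 decision arrives."""
    def __init__(self, depth=DEPTH):
        self.cells = deque(maxlen=depth)

    def clock(self, sample, l1_accept):
        """Push one new 25 ns sample; the sample from DEPTH crossings ago emerges."""
        oldest = self.cells[0] if len(self.cells) == self.cells.maxlen else None
        self.cells.append(sample)
        # only Level-1-accepted data are forwarded to the readout buffers
        return oldest if l1_accept else None
```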

Page 11

Bunch Crossing Times: LEP, Tevatron & LHC

[Figure: bunch spacing at different colliders: LEP e+e-, crossing rate 30 kHz (22 µs); Tevatron, 3.5 µs / 396 ns; LHC pp, crossing rate 40 MHz (25 ns)]

Level-1 latency (≈ 3 µs, shown as a space-time diagram between the experiment and the control room): particle time of flight → detector front-end digitizer → data transport to the control room → trigger primitive generation → synchronization delay → regional trigger processors → global trigger processor → Level-1 signal distribution → synchronization delay → Level-1 accept/reject. While the decision travels, the data wait in the detector front-end pipelines; accepted data move to the readout buffers.

Page 12

Trigger and readout structure at LHC

≈ 30 collisions / 25 ns (10^9 events/sec), 10^7 channels (10^16 bit/sec), luminosity = 10^34 cm^-2 sec^-1: multilevel trigger and readout systems.

ATLAS (three levels): detectors → front-end pipelines (40 MHz, 25 ns) → Lvl-1 (10^5 Hz, µsec) → readout buffers → Lvl-2 (10^3 Hz, ms) → switching network → Lvl-3 processor farms (10^2 Hz, sec)

CMS (two stages): detectors → front-end pipelines (40 MHz, 25 ns) → Lvl-1 (10^5 Hz, µsec) → readout buffers → switching network → HLT processor farms (10^2 Hz, sec)

Page 13

CMS front-end readouts

[Figure: four front-end readout architectures, all synchronous at 40 MHz and buffered until the Level-1 decision:
- TRACKER: analog MUX + ADC, Tx/Rx (high occupancy)
- PRESHOWER, CALORIMETERs, RPC: ADC and MUX/ADC readout, Tx/Rx
- PIXELs, CSC, DT: time-tagged N-buffer readout (t1, t2, ..., tn) with hit finder and tag (low occupancy)
Link counts: ~60000 analog fibers, ~1000 digital fibers, ~80000 digital fibers, ~1000 analog/digital fibers.]

Page 14

LHC experiments trigger and DAQ summary

Experiment   Level-1 (kHz)   Event size (MByte)   Storage (MByte/s)
ATLAS        100             1                    100
CMS          100             1                    100
LHCb         400             0.1                  20
ALICE        1               25                   1500

Page 15

Evolution of DAQ technologies and architectures

1970-80: Minicomputers; first standard (CAMAC); custom designs; kByte/s. [Figure: detector read out by a minicomputer]

1980-90: Microprocessors; industry standards; distributed systems; MByte/s. [Figure: detector readout and µP farms attached to a host computer]

2000: Networks; commodities; data and control networks; GByte/s. [Figure: detector front-end, trigger, readout, controls and processing interconnected by networks]

Page 16

LHC trigger and data acquisition systems

COMPUTING SYSTEMS: P = primitive generators, T = trigger processors, E = event flow controls, C = detector controls, R = readout data formatters, F = event filters, S = computing services, Rc = run control

COMMUNICATION NETWORKS: ttc = timing and trigger signals, tdl = trigger data links, dcl = detector control links, drl = detector readout links, rcn = readout control network, bcn = builder control network, bdn = builder data network, dsn = DAQ services and controls, csn = computing services network

[Figure: generic LHC DAQ block diagram (ALICE, ATLAS, LHCb): detector front-end feeding primitive generators (P), trigger processors (T), readout formatters (R), event flow control (E), event filters (F), computing services (S), detector controls (C) and run control (Rc), interconnected by the ttc, tdl, dcl, drl, rcn, bcn, bdn, dsn and csn networks. The LHC DAQ is a computing & communication network.]

A single network cannot satisfy all the LHC requirements at once; therefore, present LHC DAQ designs are implemented as multiple (specialized) networks.

Page 17

CMS trigger and data acquisition

[Figure: CMS DAQ data flow: detector front-end (40 MHz, 100 Tbyte/s) → readout systems → builder networks with event manager → filter systems (10^5 Hz, 100 Gbyte/s) → computing services (10^2 Hz, 100 Mbyte/s); Level-1 trigger and run control]

Level-1 maximum trigger rate: 100 kHz
System efficiency: 98%
Event flow control: ≈ 10^6 Mssg/s
Builder network (512x512 ports): ≥ 500-1000 Gb/s
Event filter computing power: ≈ 5 x 10^6 MIPS

No. of Readout Units: ≈ 512
No. of Builder Units: ≈ 512
No. of Filter Units: ≈ n x 512
No. of (C&D) network ports: ≈ 10000
No. of programmable units: ≈ 10000
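The builder-network figure is consistent with the Level-1 rate and event size; a quick back-of-the-envelope check:

```python
l1_rate_hz  = 100e3      # Level-1 maximum trigger rate
event_mbyte = 1.0        # average event size
ports       = 512        # builder network ports

total_gbit_s = l1_rate_hz * event_mbyte * 8 / 1e3   # 100 kHz x 1 MB = 800 Gb/s aggregate
per_port     = total_gbit_s / ports                  # ~1.6 Gb/s per link
print(f"aggregate ~{total_gbit_s:.0f} Gb/s, ~{per_port:.1f} Gb/s per port")
```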

Page 18

[Figure: physics rates (QED, W/Z, top, Z*, Higgs; from 10^-4 to 10^8 Hz) versus available processing time (from 25 ns to seconds)]

LEVEL-1 trigger, 40 MHz: hardwired processors (ASIC, FPGA), massively parallel pipelined logic systems (≈ 1 µs)

HIGH LEVEL TRIGGERS, 100 kHz: standard processor farms (≈ 0.01 - 1 sec)

Structure option: Two physical stages (CMS)

[Figure: detector front-end (40 MHz) → readout systems → builder networks (1000 Gb/s) with event manager → filter systems → computing services; LV-1 (10^5 Hz, µs) and HLT (10^2 Hz, ms..s); Level-1 trigger and run control]

- Reduces the number of building blocks
- Commercial components: 'state of the art' memories, switches, CPUs
- Upgrades and scales with the machine performance
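The size of the HLT farm follows from the input rate times the per-event processing time (Little's law). A hedged estimate using the numbers above; the 40 ms average is an assumption within the quoted 0.01-1 s range:

```python
l1_output_hz    = 100e3    # events/s entering the filter systems
avg_time_per_ev = 0.040    # s per event (assumed, within the 0.01-1 s range)

busy_cpus = l1_output_hz * avg_time_per_ev   # Little's law: processors kept busy
print(f"~{busy_cpus:.0f} CPUs kept busy")    # ~4000 CPUs
```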

Page 19

Structure option: Three physical stages (ATLAS)

[Figure: physics rates versus available processing time, as on the previous page]

LEVEL-1 trigger, 40 MHz: hardwired processors (ASIC, FPGA), massively parallel pipelined logic systems (≈ 1 µs)

SECOND LEVEL triggers, 100 kHz: specialized processors for feature extraction and global logic (≈ 1 ms)

HIGH LEVEL triggers, 1 kHz: standard processor farms (≈ 0.1 - 1 sec)

[Figure: detector front-end (40 MHz) → LV-1 (10^5 Hz, µs) → readout → LV-2 (10^3 Hz, ms), guided by Regions of Interest (RoI) and served by a 10 Gb/s switch → builder network switch → LV-3 farms (10^2 Hz, sec) → computing services; event manager]
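The Region-of-Interest (RoI) mechanism that characterizes the three-stage design can be sketched as follows (a hypothetical illustration; the buffer and function names are placeholders): Level-2 requests and processes only the data in the regions flagged by Level-1, instead of reading the full event.

```python
# Hypothetical Region-of-Interest (RoI) processing at Level-2.
def level2_decision(rois, readout_buffers, extract_features, global_logic):
    """Run feature extraction only on the detector regions flagged by Level-1."""
    features = []
    for roi in rois:                                  # e.g. (eta, phi) windows from Level-1
        fragment = readout_buffers.read_region(roi)   # pull only that region's data
        features.append(extract_features(fragment))
    return global_logic(features)                     # accept (to Level-3) or reject
```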

Page 20

Data communication and processing at LHC

[Figure: overall data flow: 40 MHz collision rate; 16 million channels (charge, time, pattern); 3 Gigacell front-end buffers; 1 Terabit/s over 50000 data channels into 500 readout memories (200 Gigabyte of buffers); 100 kHz Level-1 trigger; 1 Megabyte event data; event builder networks; 5 TeraIPS event filter; Gigabit/s service LAN; Petabyte archive; computing services]

EVENT BUILDER. A large switching network (512+512 ports) with a total throughput of approximately 500 Gbit/s forms the interconnection between the sources (readout dual-port memories, RDPM) and the destinations (switch-to-farm interfaces). The Event Manager collects the status and requests of the event filters and distributes event-building commands (read/clear) to the RDPMs.

EVENT FILTER. It consists of a set of high-performance commercial processors organized into many farms convenient for on-line and off-line applications. The farm architecture is such that a single CPU processes one event.


Page 21

Data communication at LHC

1 to 1. Trigger primitive readout: 6000 1-Gb/s links
- Digital synchronous readout (monodirectional); transmitter may need to be rad-hard

1 to 1. Front-end analog readout: 60000 links
- Pixel, Silicon and MSGC; synchronous 40 MHz, 256 samples (6 µs)

1 to 1. Front-end digital readout: 100000 1-Gb/s links (may be duplex)
- Preshower, Calorimeter, Muon; transmitter may need to be rad-hard

1 to 1. Front-end control: full-duplex 100-Mb/s links
- All detectors; the inner detector will need rad-hard links

1 to N. Fast signal distribution: 10000 destinations
- Front-end Timing, Trigger and Control (TTC) system
- Readout event-builder message distribution: messages (≈ 100 bits) at a few 100 kHz

N to 1. Signal broadcall (event flow status)
- Centralized message collection for readout status, error collection and event-filter processor status; messages of a few bytes at ≈ 1 kHz per node, with a total of 500 nodes

N to N. Event builder (switch I/O) data links: 2000 1-Gb/s links (standard: Sonet, FCS, ...)
- RDPM (PCI) to switch, and switch to PCI (FI) interfacing

Page 22

Event building

Event fragments: the data fragments of an event are stored in separate physical memory systems.

Full events: the full event data are stored in one physical memory system associated with a processing unit.

Event builder: the physical system interconnecting the data sources with the data destinations. It has to move all the data fragments of each event into the same destination.
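In software terms, event building means collecting, for each event number, one fragment from every source into a single destination. A minimal sketch (the 512 sources match the readout units quoted earlier; everything else is illustrative):

```python
from collections import defaultdict

N_SOURCES = 512                            # readout units acting as data sources

class EventBuilder:
    """Gather one fragment per source; release the event once it is complete."""
    def __init__(self, n_sources=N_SOURCES):
        self.n_sources = n_sources
        self.partial = defaultdict(dict)   # event_id -> {source_id: fragment}

    def add_fragment(self, event_id, source_id, fragment):
        fragments = self.partial[event_id]
        fragments[source_id] = fragment
        if len(fragments) == self.n_sources:      # all sources have reported
            return self.partial.pop(event_id)     # the full event goes to ONE destination
        return None
```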

Page 23

Event builder switch technologies

NETWORKS and TELECOMMUNICATIONS (≈ 10^6 nodes)
- Sonet
- Asynchronous Transfer Mode (ATM): link bandwidths 155, 622, 1244, 2488 Mb/s; packet switching based on small cells (53 bytes)

PERIPHERAL NETWORKS and massive parallel storage (≈ 10^3 nodes)
- Fiber Channel System (FCS): link bandwidths 133, 266, 531, 1062 Mb/s
- Ethernet: link bandwidths 10, 100, 1000, 10000 Mb/s; frame-based data delivery

HIGH PERFORMANCE COMPUTING AND COMMUNICATION (HPCC) (≈ 10^2 - 10^3 nodes)
- Mercury RaceWay (160 Mbyte/s)
- Myrinet (1280, 2560 Mb/s)
- Computer manufacturers' proprietary switches (Accelerated Strategic Computing Initiative, ASCI)

Page 24

Reconstruction at LHC: CMS central tracking

Central tracking event at L = 10^34 cm^-2 s^-1 (50 ns integration, ≈ 1000 tracks, 1 MB of data); the detector is ≈ 7 m long, and a 12.5 cm slice alone contains more data than a LEP event.

Two orders of magnitude more computing power and bandwidth are required at the pp LHC experiments.

[Figure: trigger/crossing rate versus number of channels (10^3 to 10^8) for SPS/LEAR, the pp-bar UA experiments (250 kHz), CDF (330 kHz), LEP (45 kHz), HERA (10 MHz) and the LHC experiments (40 MHz)]

Page 25

Parallel processing by farms

[Figure: detector front-end → Level-1 trigger (≈ 50..100 kHz) → event manager and builder networks, with controls]

Event data are partitioned into about 500 separate memory units; ≈ 200 Hz per builder unit. Two architectures are possible:

Massively parallel system (ONE event, ALL processors)
- Low latency
- Complex I/O
- Parallel programming

Farm of processors (ONE event, ONE processor)
- High latency (larger buffers)
- Simpler I/O
- Sequential programming
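The farm option ("one event, one processor") is essentially a work queue feeding independent, sequential selection jobs; a small illustrative sketch of that scheduling model:

```python
import queue
import threading

event_queue = queue.Queue()                      # complete events from the event builder
accepted = []                                    # stand-in for mass storage

def select(event):
    return event.get("interesting", False)       # placeholder for the HLT selection code

def filter_worker():
    while True:
        event = event_queue.get()                # ONE complete event per processor
        if event is None:                        # sentinel: stop this worker
            break
        if select(event):                        # plain sequential code on the full event
            accepted.append(event)

workers = [threading.Thread(target=filter_worker) for _ in range(8)]  # a real farm: O(1000) CPUs
for w in workers:
    w.start()
for i in range(100):                             # feed a few dummy events
    event_queue.put({"id": i, "interesting": i % 50 == 0})
for _ in workers:
    event_queue.put(None)                        # drain and shut the farm down
for w in workers:
    w.join()
```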

Page 26

CMS trigger levels

Level-1 (40 MHz → up to 100 kHz): specialized processors
- Particle identification: high-pT electrons, muons, jets, missing ET
- Local pattern recognition and energy evaluation on prompt macro-granular information from the calorimeter and muon detectors

High trigger levels (→ ≈ 100 Hz): CPU farms
- Clean particle signature
- Finer granularity, precise measurement
- Kinematics, effective-mass cuts and event topology
- Track reconstruction and detector matching
- Event reconstruction and analysis

[Figure: detector front-end → Level-1 trigger → event manager and builder networks → filter farms → computing services]

Page 27

Online event selection (rejection)

10^9 Ev/s at the input; Level-1 keeps 10^5 Ev/s (0.01% accepted, 99.99% rejected); the high-level triggers keep 10^2 Ev/s (0.1% accepted, 99.9% rejected). Overall: 99.999% rejected, 0.001% accepted.

Page 28

CMS final system: Summary

[Figure: detector front-end readout → readout units (RU) → builder network with event manager (EVM) → filter units (FU) → computing and communication services; Level-1 trigger (LV1) and controls]

Collision rate: 40 MHz
Level-1 maximum trigger rate: 100 kHz (*)
Average event size: ≈ 1 Mbyte
Event flow control: ≈ 10^6 Mssg/s

(*) The TriDAS system is designed to read 1-Mbyte events at up to a 100 kHz Level-1 trigger rate. In the first stage of implementation (Cost Book 9), the DAQ is scaled down (reduced number of RUs and FUs) to handle up to a 75 kHz Level-1 trigger rate.

No. of in-out units (RU & FU, 200-5000 bytes/event): 1000
Readout network (e.g. 512x512 switch) bandwidth: ≈ 1 Terabit/s
Event filter computing power: ≈ 5 x 10^6 MIPS
Data production: ≈ Tbyte/day
No. of readout crates: ≈ 250
No. of electronics boards: ≈ 10000

Page 29

Trigger and data acquisition trends

Page 30

Technology ansatz (Moore's law)

- Processing power increases by a factor 10 every 5 years
- Memory density increases by a factor 4 every two years
- The 90's... is the data communication decade

CMS main requirements (summary):
- Event flow control ≈ 10^6 Mssg/s
- Readout network (512x512 switch) bandwidth ≈ 1 Terabit/s
- Event filter computing power ≈ 5 x 10^6 MIPS
- Data production ≈ Tbyte/day
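Taken literally, the two growth rates above can be projected over the interval between the 1994 Technical Proposal and LHC start-up in 2005 (a back-of-the-envelope sketch):

```python
years = 2005 - 1994                     # CMS Technical Proposal to LHC start-up

cpu_factor    = 10 ** (years / 5)       # x10 every 5 years  -> ~160x processing power
memory_factor = 4 ** (years / 2)        # x4 every 2 years   -> ~2000x memory density
print(f"CPU x{cpu_factor:.0f}, memory density x{memory_factor:.0f}")
```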

Page 31

Technology trends (Moore's law)

[Figure: 1970-2000 trends: transistors per chip from 10^3 to 10^9; feature size from 3 µm down to 0.2 µm; logic clock from 0.12 MHz to 200 MHz (1 GHz ahead); memory chips from 4 Kb to 256 Mb, with access times from ~µs to ~10 ns; Ethernet data links from 1 Mb/s to 10 Gb/s. CMS milestones: 1992 LoI, 1994 TP, 2001 DAQ TDR, 2005 LHC.]

- Processing power increases by a factor 10 every 5 years
- Memory density increases by a factor 4 every two years
- The 90's... is the data communication decade

Page 32

Internet growth

• 100 million new users expected online by 2001
• Internet traffic doubled every 100 days
• 5000 domain names added every day
• 1999: the last year of voice dominance; the "data era" starts around 2000
• Need more bandwidth → Terabit switches at LHC

[Figure: telecom traffic load 1996-2004, voice versus data, with the CMS milestones marked: experiment technical proposal, data acquisition technical proposal, data acquisition commissioning]

Page 33

Accelerated Strategic Computing Initiative (ASCI 97)

Estimated computational resource scaling factors:
- 1 FLOP/s peak compute
- 1 Byte/FLOP/s memory
- 50 Byte/FLOP/s disk
- 0.05 Byte/FLOP/s peak disk parallel I/O
- 0.001 Byte/FLOP/s peak parallel archive I/O
- 10000 Byte/FLOP/s archive

Performance required by an LHC experiment (in 2005) versus high-technology expectations.

Page 34

ASCI (Accelerated Strategic Computing Initiative) home page: http://www.llnl.gov/asci/

The purpose of the Path Forward Project's development and engineering alliances is to ensure the availability of the essential integrating and scaling technologies required to create a well-balanced, reliable and production capable computing environment providing large scale, scientific compute capability from commodity building blocks, at processing levels of 10-to-30 TFLOPS in the late 1999 to 2001 timeframe and 100 TFLOPS in the 2004 timeframe.

ASCI PathForward Program Overview

Page 35

NEC 32 Tera, Compaq HPCC

Page 36

Processor farms: the 90's supercomputer

[Figure: mainframes, mini-computers and vector supercomputers giving way to processor farms]

Page 37

After commodity farms, what next?

Fusion of the global data communication, data processing and data archive resources: the Grid approach?

Page 38

CMS data flow and on(off)-line computing

[Figure: detector front-end → readout systems → builder networks with event manager and run control → filter systems → computing services. Raw data: 1000 Gbit/s into the on-line filter systems (5 TeraIPS); accepted events: 10 Gbit/s to off-line computing (10 TeraIPS); controls: 1 Gbit/s, including remote control rooms; 622 Mbit/s links to the regional centers.]

Page 39

Event selection and computing stages

[Figure: time axis from 10^-9 to 10^6 sec (25 ns, 3 µs, ms, sec, hour, year). ON-line: LEVEL-1 trigger (hardwired ASIC/FPGA processors, pipelined, massively parallel), then HIGH LEVEL triggers (farms of processors). OFF-line: reconstruction & analysis at the TIER-0/1/2 centers. Data volumes from Giga to Tera to Petabit.]

Page 40

- The laboratory computing power available at CERN in 1980 (W and Z discovery) was comparable to that of a modern desktop computer (1995)

- The total number of processors in the LHC event filter equals the number of workstations and personal computers running today at CERN (≈10000 in 2000)

- The 'slow control' data rate of an LHC experiment (temperatures, voltages, status, etc.) is comparable to the event data rate of a current LEP experiment (≈ 100 kByte/s)

- During ONE SECOND of LHC running, the data volume transmitted through the readout network is equivalent to:

- the amount of data moved in ONE DAY by the present CERN network system, or

- the amount of information exchanged by WORLD TELECOM (≈ 100 million phone calls), or

- the data exchanged by the WORLD WIDE WEB in Jan 2000 (however, in Jan 2001 it will be only 1/10, and so on)

Computing and communication perspectives at LHC

Page 41

Short history of computing and new frontiers

The origin: counting; the abacus
1600: numeric techniques, logarithms, calculating engines
1800: punched cards, automatons; Babbage's difference engine driven by a program
1900: punched-card electromechanical Hollerith tabulators; the vacuum tube
1940: the first electronic, digital, stored-program computers
1950-1960: First generation. Commercial computers (UNIVAC); FORTRAN; the first transistor
1960-1965: Second generation. General-purpose computers (IBM, DEC); COBOL, BASIC; integrated circuits
1965-1971: Third generation. Arpanet; the first microprocessor chip; PASCAL, C and UNIX
1971-1999: Fourth generation
  1975: minicomputers and microcomputers; window and mouse; the Cray supercomputer
  1980: personal computers (Apple, Microsoft); vector processors
  1984: parallel computing, farms; OO/C++
  1990: massively parallel computing and massive parallel storage; LANs and WANs; Internet, WEB
  1995: commodities and the network/bandwidth explosion; network computing; High Performance Computing, the ASCI initiative
2000-present: fusion of computing, communication and archiving. The Grid...