Kostas KORDAS
INFN – Frascati
10th Topical Seminar on Innovative Particle & Radiation Detectors (IPRD06)Siena, 1-5 Oct. 2006
The ATLAS Data Acquisition & Trigger:
concept, design & status
IPRD06, 1-5 Oct. 2006, Siena, Italy ATLAS TDAQ concept, design & status - Kostas KORDAS 2
ATLAS Trigger & DAQ: concept

[Diagram: p p collisions at 40 MHz; full info / event ~1.6 MB every 25 ns ≈ 60 TB/s.]
• LVL1: 40 MHz → 100 kHz (160 GB/s out). Hardware based; no dead time.
• LVL2 & EventFilter: 100 kHz → ~3.5 kHz → ~200 Hz (~3+6 GB/s, then ~300 MB/s). Algorithms on PC farms; seeded by the previous level; decide fast; work with minimum data volume.
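The rate and bandwidth cascade of the three trigger levels can be checked with a few lines of arithmetic (an illustrative sketch using only the numbers quoted on this slide; the ~3+6 GB/s figure also counts RoI traffic, which this ignores):

```python
# Rate/bandwidth cascade of the three trigger levels, from the slide's numbers.
EVENT_SIZE_MB = 1.6  # full event size

levels = [
    ("LVL1 input ", 40e6),   # 40 MHz bunch crossings
    ("LVL1 accept", 100e3),  # 100 kHz, hardware trigger
    ("LVL2 accept", 3.5e3),  # ~3.5 kHz, RoI-based selection
    ("EF accept  ", 200.0),  # ~200 Hz written to storage
]

for name, rate_hz in levels:
    gb_per_s = rate_hz * EVENT_SIZE_MB / 1000.0
    print(f"{name}: {rate_hz:>11,.0f} Hz -> {gb_per_s:>8.1f} GB/s")
```

Running it reproduces the slide's figures: 160 GB/s after LVL1, ~5.6 GB/s after LVL2, and ~0.3 GB/s (~300 MB/s) into storage.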
From the detector into the Level-1 Trigger

[Diagram: Calo, Mu TrCh and the other detectors are read out at 40 MHz into front-end (FE) pipelines, which buffer the data during the 2.5 µs Level-1 trigger latency.]
Upon LVL1 accept: buffer data & get RoIs

[Diagram: on a L1 accept (100 kHz), the Read-Out Drivers (RODs) push the detector data over Read-Out Links (S-LINK) into the Read-Out Buffers (ROBs) hosted by the Read-Out Systems (ROSs); 160 GB/s aggregate.]
Region of Interest Builder

[Diagram: as above, with the RoI Builder (ROIB) collecting the LVL1 Region-of-Interest records in parallel with the ROD → ROB readout at 100 kHz.]

On average, LVL1 finds ~2 Regions of Interest (in η−φ) per event.
LVL2: work with “interesting” ROSs/ROBs

A much smaller read-out network … at the cost of higher control traffic.

[Diagram: the LVL2 Supervisor (L2SV) receives RoIs from the ROIB and assigns events to the LVL2 Processing Units (L2Ps), which request RoI data (~2% of the full event, ~3 GB/s) from the ROBs over the LVL2 network; LVL2 latency ~10 ms.]
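The “~2% of the full event” figure follows from the RoI mechanism: LVL2 pulls only the few buffers that overlap each RoI. A toy sketch of the arithmetic (the number of ROBs per RoI is our illustrative assumption, not an ATLAS number):

```python
# Toy model of RoI-driven data collection: instead of building the full
# ~1.6 MB event, LVL2 requests only the ROB fragments inside each RoI.
FULL_EVENT_KB = 1600.0      # ~1.6 MB, spread over ~1600 ROB fragments

def roi_data_kb(n_rois=2, robs_per_roi=16, fragment_kb=1.0):
    # n_rois: LVL1 finds ~2 RoIs per event on average (from the slides);
    # robs_per_roi is a hypothetical window size chosen for illustration.
    return n_rois * robs_per_roi * fragment_kb

moved = roi_data_kb()
fraction = 100.0 * moved / FULL_EVENT_KB
print(f"{moved:.0f} kB moved = {fraction:.0f}% of the event")
```

With these assumed numbers the sketch moves 32 kB per event, i.e. the ~2% quoted on the slide.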
After LVL2: Event Builder makes full events

[Diagram: on a L2 accept (~3.5 kHz), the Dataflow Manager (DFM) assigns the event to a Sub-Farm Input (SFI), which pulls all the fragments from the ROSs over the Event Builder network (EBN); aggregate ~3+6 GB/s.]
Event Filter: deals with full events

[Diagram: built events pass from the SFIs to the Event Filter (EF), a farm of PCs connected by the Event Filter network (EFN); processing takes ~1 s per event, output rate ~200 Hz.]
From Event Filter to local (TDAQ) storage

[Diagram: on an EF accept (~0.2 kHz), the Sub-Farm Outputs (SFOs) write the events to local storage at ~300 MB/s.]
TDAQ, High Level Trigger & DataFlow

[Diagram: the complete architecture, grouped into the High Level Trigger (LVL2 + Event Filter) and the Dataflow (read-out, Event Builder, SFOs).]
High Level Trigger (HLT)

[Diagram: as on the previous slide, highlighting the HLT part (LVL2 + Event Filter).]
• Algorithms developed offline (with the HLT in mind)
• HLT infrastructure (TDAQ job):
– “steers” the order of algorithm execution
– alternates steps of “feature extraction” & “hypothesis testing” → fast rejection (min. CPU)
– reconstruction in Regions of Interest → min. processing time & network resources
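The steering loop described above can be sketched as follows (our own minimal illustration of the idea, not the ATLAS steering framework; all names are invented):

```python
# Alternate "feature extraction" and "hypothesis testing", ordered from
# cheapest to most expensive, and reject as early as possible so that
# most events cost very little CPU.
def run_chain(event, steps):
    """steps: list of (extract, test) pairs; extract() reconstructs
    features (in RoIs only), test() applies the trigger hypothesis."""
    features = {}
    for extract, test in steps:
        features.update(extract(event, features))
        if not test(features):
            return False          # early rejection: stop spending CPU here
    return True                   # event accepted by the full chain

# Toy usage: first check a coarse calorimeter cluster, then (only for
# surviving events) run a more expensive track match.
steps = [
    (lambda ev, f: {"cluster_et": ev["et"]},     lambda f: f["cluster_et"] > 20),
    (lambda ev, f: {"track_match": ev["match"]}, lambda f: f["track_match"]),
]
print(run_chain({"et": 25, "match": True}, steps))   # True: accepted
print(run_chain({"et": 5,  "match": True}, steps))   # False: rejected at step 1
```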
High Level Trigger & DataFlow: PCs (Linux)

[Diagram: the same architecture annotated with farm sizes: ~150 ROS nodes, ~500 LVL2 nodes, ~100 Event Builder nodes, ~1600 Event Filter nodes; plus infrastructure for control, communication and databases.]
TDAQ at the ATLAS site

[Diagram: UX15 (ATLAS detector) → USA15 (underground): dedicated links and the Timing Trigger Control (TTC) feed the first-level trigger and the Read-Out Drivers (RODs); 1600 Read-Out Links carry the data of events accepted by the first-level trigger into ~150 Read-Out Subsystem (ROS) PCs (VME), and the Regions of Interest go to the RoI Builder. SDX1 (surface): Gigabit Ethernet switches connect the ROSs to the second-level trigger (LVL2 farm, ~500 dual-CPU nodes, with the LVL2 Supervisors and a pROS that stores the LVL2 output), the DataFlow Manager, the Event Builder Sub-Farm Inputs (SFIs, ~100), the Event Filter (EF, ~1600) and the Sub-Farm Outputs (SFOs, ~30), which write to local storage and on to the CERN computer centre.
Event data is pushed @ ≤ 100 kHz as 1600 fragments of ~1 kB each; event data is pulled as partial events @ ≤ 100 kHz and full events @ ~3 kHz; the event rate to storage is ~200 Hz.]
TDAQ testbeds

• The “pre-series” DataFlow system is ~10% of the final TDAQ; it is used for realistic measurements, assessment and validation of the TDAQ dataflow & HLT.
• Large-scale system tests (on PC clusters with ~700 nodes) demonstrated the required system performance & scalability for the online infrastructure.
Muon + HAD Cal. cosmics run with LVL1

August 2006: first combined cosmic-ray run
• Muon section at the feet of ATLAS
• Tile (HAD) calorimeter
• Triggered by the muon trigger chambers
LVL1: calorimeter, muon and central trigger logics are in the production and installation phases, for both hardware & software.
ReadOut Systems: all 153 PCs in place

ROS units are PCs housing 12 Read-Out Buffers on 4 custom PCI-X cards (ROBIN).
• All 153 ROSs installed and standalone-commissioned
[Diagram: ROS PC with input from the detector Read-Out Drivers.]
• 44 ROSs connected to detectors and fully commissioned:
– full LAr barrel (EM)
– half of Tile (HAD) and the Central Trigger Processor
– taking data with the final DAQ (event building at the ROS level)
• Commissioning of the other detector read-outs: expect to complete most of it by end 2006
EM + HAD calo cosmics run using installed ROSs
Event Building needs: bandwidth decides

[Diagram: ROSs → network switches → Event Builder (SFIs), orchestrated by the DFM; Gbit links throughout.]
Throughput requirement: LVL2 accept rate of 3.5 kHz into the EB × 1.6 MB event size → 5.6 GB/s total input.
Network-limited (the CPUs are fast enough): event building uses 60–70% of the Gbit network → ~70 MB/s into each Event Building node (SFI). With 5600 MB/s of total input, we need ~100 SFIs for full ATLAS.
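The SFI count follows directly from these numbers (a sizing sketch; the ~100 quoted on the slide includes headroom above the bare quotient):

```python
# Event building is limited by the Gbit link into each SFI, not by CPU.
EB_RATE_HZ = 3.5e3        # LVL2 accept rate into the Event Builder
EVENT_MB = 1.6            # full event size
SFI_INPUT_MB_S = 70.0     # ~60-70% of a Gbit link usable per SFI

total_mb_s = EB_RATE_HZ * EVENT_MB        # 5600 MB/s total input
n_sfis = total_mb_s / SFI_INPUT_MB_S      # = 80 at full load
print(f"total input {total_mb_s:.0f} MB/s -> at least {n_sfis:.0f} SFIs")
```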
For HLT, CPU power is important

At the TDR we assumed:
– 100 kHz LVL1 accept rate
– 500 dual-CPU PCs for LVL2
– each CPU has to handle 100 Hz
– 10 ms average latency per event in each CPU
– 8 GHz per CPU at LVL2
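The TDR numbers above are mutually consistent, as a two-line check shows:

```python
# LVL2 farm sizing from the TDR assumptions on this slide.
L1_ACCEPT_HZ = 100e3      # LVL1 accept rate into LVL2
N_PCS = 500               # dual-CPU PCs
CPUS_PER_PC = 2

rate_per_cpu = L1_ACCEPT_HZ / (N_PCS * CPUS_PER_PC)  # 100 Hz per CPU
latency_budget_ms = 1000.0 / rate_per_cpu            # 10 ms per event
print(rate_per_cpu, latency_budget_ms)               # 100.0 10.0
```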
Test: ROSs preloaded with muon events, muFast run at LVL2, on an AMD dual-core, dual-CPU machine @ 1.8 GHz with 4 GB of memory in total.

[Plot: achieved LVL2 rate (Hz, 0–500) vs number of processes (1–6) on a dual-core, dual-CPU processor.]

8 GHz per CPU will not come (soon), but dual-core dual-CPU PCs show good scaling: we should reach the necessary performance per PC (the longer we wait, the better the machines we’ll get).
DAQ / HLT commissioning

Online infrastructure:
• A useful fraction operational since last year; growing according to need
• Final network almost done
~300 machines on the final network.
First DAQ/HLT-I slice of the final system within weeks:
• 153 ROSs (done)
• 47 Event Building + HLT-infrastructure PCs
• 20 Local File Servers, 24 local switches
• 20 Operations PCs
Might add the pre-series L2 (30 PCs) and EF (12 PCs) racks.
First 4 full racks of HLT machines (~100) early 2007; another ~500–600 machines can be procured within 2007; the rest, not before 2008.
TDAQ will provide significant trigger rates (LVL1, LVL2, EF) in 2007:
– LVL1 rate 40 kHz
– EB rate 1.9 kHz
– physics storage rate up to 85 Hz
– final bandwidth for storage – calibration
Summary

• ATLAS TDAQ design:
– 3-level trigger hierarchy
– LVL2 works with Regions of Interest: small data movement
– feature extraction + hypothesis testing: fast rejection → min. CPU power
• The architecture has been validated via deployment on testbeds
• We are in the installation phase of the system
• Cosmic runs with the central calorimeters + muon system
• An initial but fully functional TDAQ system will be installed, commissioned and integrated with the detectors by the end of 2006
• TDAQ will provide significant trigger rates (LVL1, LVL2, EF) in 2007
Thank you
ATLAS Trigger & DAQ: RoI concept

[Event display: in this example, LVL1 flags 4 Regions of Interest (η−φ addresses): 2 muons, 2 electrons.]
ATLAS total event size = 1.5 MB; total no. of ROLs = 1600

Subsystem            Channels     No. ROLs   Fragment size (kB)
Muon system
  MDT                3.7×10^5     192        0.8
  CSC                6.7×10^4     32         0.2
  RPC                3.5×10^5     32         0.38
  TGC                4.4×10^5     16         0.38
Calorimetry
  LAr                1.8×10^5     764        0.75
  Tile               10^4         64         0.75
Trigger
  LVL1               —            56         1.2
Inner detector
  Pixels             0.8×10^8     120        0.5
  SCT                6.2×10^6     92         1.1
  TRT                3.7×10^5     232        1.2
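The table can be cross-checked in a few lines (illustrative: the ROL counts sum exactly to 1600, while the rounded fragment sizes give ~1.3 MB, somewhat below the quoted 1.5 MB total, the difference being rounding and overheads):

```python
# (name, number of Read-Out Links, fragment size in kB) from the table.
rols = [
    ("MDT", 192, 0.8), ("CSC", 32, 0.2), ("RPC", 32, 0.38), ("TGC", 16, 0.38),
    ("LAr", 764, 0.75), ("Tile", 64, 0.75), ("LVL1", 56, 1.2),
    ("Pixels", 120, 0.5), ("SCT", 92, 1.1), ("TRT", 232, 1.2),
]

n_rols = sum(n for _, n, _ in rols)           # 1600, as quoted
event_kb = sum(n * kb for _, n, kb in rols)   # ~1306 kB from rounded sizes
print(n_rols, round(event_kb))
```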
Scalability of the LVL2 system

• The L2SV gets RoI info from the RoIB, assigns an L2PU to work on the event, and load-balances its L2PU sub-farm
• Can the scheme cope with the LVL1 rate?
• Test: RoI info preloaded into the RoIB, which triggers the TDAQ chain, emulating LVL1; RoIB → 1 or 2 L2SVs, each L2SV → 1–8 L2PUs
[Plot: sustained LVL1 rate (kHz, ~34–36) vs number of L2PUs (1–7) for the configurations 1L2SV-1, 2L2SV-1, 2L2SV-2.]
• The LVL2 system is able to sustain the LVL1 input rate:
– 1 L2SV sustains a LVL1 rate of ~35 kHz
– 2 L2SVs sustain ~70 kHz (50%–50% sharing)
– the rate per L2SV is stable within 1.5%
• ATLAS will have a handful of L2SVs → it can easily manage the 100 kHz LVL1 rate
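The L2SV’s role, as described above, amounts to event assignment plus load balancing over its sub-farm. A minimal illustration (our own sketch; the class and method names are invented, not the ATLAS dataflow API):

```python
from collections import deque

class L2Supervisor:
    """Toy model: assign each LVL1-accepted event to an idle L2PU."""
    def __init__(self, l2pu_ids):
        self.idle = deque(l2pu_ids)   # free processing units
        self.busy = {}                # event_id -> l2pu_id

    def assign(self, event_id):
        if not self.idle:
            return None               # back-pressure: no free L2PU
        l2pu = self.idle.popleft()    # load-balance over the sub-farm
        self.busy[event_id] = l2pu
        return l2pu                   # the L2PU then pulls RoI data

    def done(self, event_id):
        self.idle.append(self.busy.pop(event_id))

sv = L2Supervisor(["l2pu-1", "l2pu-2"])
print(sv.assign(1), sv.assign(2), sv.assign(3))  # l2pu-1 l2pu-2 None
sv.done(1)
print(sv.assign(3))                              # l2pu-1
```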
Tests of LVL2 algorithms & RoI collection

Di-jet, µ and e simulated events were preloaded on the ROSs, with the RoI info on the L2SV.
Testbed: 1 L2SV, 8 L2PUs, 1 emulated ROS, 1 pROS, 1 DFM, plus 1 Online Server and 1 MySQL database server.

Data file   LVL2 latency (ms)   Process time (ms)   RoI coll. time (ms)   RoI coll. size (bytes)   # Req/Evt
µ           3.4                 2.8                 0.6                   287                      1.3
di-jet      3.6                 3.3                 0.3                   2785                     1.2
e           17.2                15.5                1.7                   15820                    7.4

1) The majority of events are rejected fast (the electron sample is pre-selected)
2) Processing takes ~all the latency: the RoI data-collection time is small
3) The RoI data request per event is small
Note: neither the trigger menu nor the data files are a representative mix for ATLAS (that is the aim of a late-2006 milestone).
ATLAS Trigger & DAQ: need

p p collisions at 40 MHz; ~200 Hz and ~300 MB/s written to disk.
• Need high luminosity to observe the (rare) very interesting events
• Need on-line selection, to write to disk mostly the interesting events
Full info / event: ~1.6 MB every 25 ns ≈ 60 TB/s
ATLAS Trigger & DAQ: LVL1 concept

[Diagram: 40 MHz → 100 kHz (160 GB/s); full info / event ~1.6 MB / 25 ns; downstream ~200 Hz, ~300 MB/s.]
LVL1:
• Hardware based
• No dead-time
• Calo & muon info (coarse granularity)
• Identifies Regions of Interest for the next trigger level
ATLAS Trigger & DAQ: LVL2 concept

[Diagram: as before, adding 100 kHz → ~3.5 kHz (~3+6 GB/s).]
LVL2:
• Software (specialized algorithms)
• Uses the LVL1 Regions of Interest
• All sub-detectors: full granularity
• Emphasis on early rejection
ATLAS Trigger & DAQ: Event Filter concept

[Diagram: as before, adding ~3.5 kHz → ~200 Hz (~300 MB/s).]
Event Filter:
• Offline algorithms
• Seeded by the LVL2 result
• Works with the full event
• Full calibration/alignment info
ATLAS Trigger & DAQ: concept summary

Rates / latency: 40 MHz → LVL1 (2.5 µs) → ~100 kHz → LVL2 (~10 ms) → ~3 kHz → EF (~1 s) → ~200 Hz; local storage ~300 MB/s.
[Diagram: Calo, Muon and Inner detectors feed pipeline memories; the Read-Out Subsystems host the Read-Out Buffers (ROBs); RoIs link LVL1 to LVL2; the Event Builder cluster feeds the Event Filter farm.]
• LVL1: hardware based (FPGA, ASIC); calo/muon with coarse granularity
• LVL2 (High Level Trigger): software (specialised algorithms); uses the LVL1 Regions of Interest; all sub-detectors, full granularity; emphasis on early rejection
• Event Filter (High Level Trigger): offline algorithms; seeded by the LVL2 result; works with the full event; full calibration/alignment info
ATLAS Trigger & DAQ: design

[Diagram: the full dataflow, annotated with the rates: 40 MHz → L1 accept 100 kHz (160 GB/s) → L2 accept ~3.5 kHz (~3+6 GB/s) → EF accept ~0.2 kHz (~200 Hz, ~300 MB/s); full info / event ~1.6 MB / 25 ns.]
High Level Trigger & DataFlow: recap

[Diagram: rates / latency recap: 40 MHz → LVL1 (2.5 µs) → ~100 kHz (160 GB/s) → LVL2 (~10 ms) → ~3.5 kHz (~3+6 GB/s) → EF (~1 s) → ~200 Hz (~300 MB/s); the Event Builder sits between LVL2 and the EF.]