Jin Huang (BNL), thanks to discussions with Nils · 02/04/2018

Versatility of EIC event topology calls for a trigger-less DAQ
◦ 0.5 MHz interaction rate at top luminosity

Start using the sPHENIX-EIC full detector simulation to estimate trigger-less DAQ rates
◦ Total data rate on the order of 100 Gbps

Matches well with the sPHENIX TPC/MVTX DAQ throughput
◦ Similar architecture to the ATLAS/LHCb/ALICE DAQ upgrades in 2020+

Leverage streaming readout and EIC-related detector R&D at sPHENIX


I was asked to give a presentation at the EIC streaming readout meeting about the PHENIX/sPHENIX experience

Taking the opportunity to look into a possible streaming readout for an EIC detector based on sPHENIX:
◦ What is the data volume?
◦ Reuse parts of the sPHENIX DAQ?
◦ Event building or streaming readout?

Event building is not easy, since eRHIC has a 10 ns crossing interval and the BBC + calorimeters do not cover the full cross section of interest
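To make that point concrete, here is a toy Python sketch of how offline software could instead associate time-stamped hits from a trigger-less stream into collision candidates. This is entirely illustrative: the function name, hit format, and the two-crossing gap are assumptions, not the sPHENIX design.

```python
# Toy sketch (illustrative only): grouping time-stamped hits from a
# trigger-less stream into collision candidates by time proximity.
# The function name, hit format, and 2-crossing gap are assumptions.

CROSSING_NS = 10.0  # eRHIC bunch-crossing interval quoted on this slide

def associate(hits, gap_ns=2 * CROSSING_NS):
    """Split a time-ordered hit stream into collision candidates.

    hits: iterable of (timestamp_ns, detector, payload), sorted in time.
    A new candidate starts whenever the gap to the previous hit
    exceeds gap_ns.
    """
    events, current, last_t = [], [], None
    for t, det, payload in hits:
        if last_t is not None and t - last_t > gap_ns:
            events.append(current)
            current = []
        current.append((t, det, payload))
        last_t = t
    if current:
        events.append(current)
    return events

# Three hits near t = 0 ns and two near t = 1000 ns -> two candidates.
stream = [(0.0, "TPC", 1), (3.0, "EMCal", 2), (5.0, "MVTX", 3),
          (1000.0, "TPC", 4), (1004.0, "HCal", 5)]
print(len(associate(stream)))  # -> 2
```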


sPHENIX uses a next-generation gate-less TPC for outer central tracking, which requires streaming-readout FEE by design:
◦ SAMPA ASIC: 20 MHz 10-bit ADC, hit wavelet forming on chip
◦ 3.6 Tbps bi-directional fiber link to the counting house
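A back-of-envelope check, using only numbers from these slides, of why the on-chip hit wavelet forming is essential; a minimal Python sketch assuming all 150k channels sample continuously:

```python
# Back-of-envelope check: the raw, un-suppressed SAMPA sampling rate
# dwarfs the 3.6 Tbps fiber budget, which is why hit wavelet forming
# (zero suppression) happens on chip.
adc_rate_hz = 20e6      # SAMPA ADC: 20 MHz
adc_bits = 10           # 10-bit samples
n_channels = 150_000    # TPC channel count (see the DAQ overview below)

raw_tbps = adc_rate_hz * adc_bits * n_channels / 1e12
print(f"raw, un-suppressed: {raw_tbps:.0f} Tbps")  # ~30 Tbps
print("fiber budget to counting house: 3.6 Tbps")  # ~10x smaller
```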


[Diagram: sPHENIX TPC DAQ data flow. WBS scopes: structure/GEM 1.2.1-4, FEE 1.2.5, TPC DAM 1.2.6, DAQ 1.6]
◦ TPC: 150k channels, 24 sectors, 600 FEEs, continuous readout
◦ FEE data stream: 600 bi-directional 6.6+ Gbps fiber links; max continuous 4 Gbps/fiber, average continuous 1.6 Gbps × 600 fibers
◦ Clock/trigger input on optical links: clock = 9.4 MHz, trigger rate = 15 kHz
◦ Output data stream to buffer box: 24 × 10 Gbps Ethernet via fiber, after FPGA-based triggering and clustering
◦ Total continuous output to disk: 80 Gbps; buffer boxes transfer to RCF continuously
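The bandwidth budget in the diagram can be cross-checked with simple arithmetic; a Python sketch using only the figures quoted above:

```python
# Cross-check of the bandwidth budget in the diagram (arithmetic only).
n_fee = 600
link_gbps = 6.6   # per-fiber link capacity
avg_gbps = 1.6    # average continuous traffic per fiber

print(f"aggregate link capacity: {n_fee * link_gbps / 1e3:.1f} Tbps")  # ~4 Tbps
print(f"average FEE traffic:     {n_fee * avg_gbps:.0f} Gbps")         # 960 Gbps
print(f"to buffer box:           {24 * 10} Gbps (24 x 10 GbE)")
print("to disk:                  80 Gbps continuous")
```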


ATLAS DAQ structure for mid-2020s:

• Using a high-performance, large-FPGA optical link card to bridge custom FEE and commodity computing

• Besides ATLAS, a similar architecture is also proposed for the ALICE and LHCb upgrades

• 48 × 10 Gbps bi-directional optical links to FEE
• Kintex UltraScale FPGA
• 100 Gbps PCIe Gen3 x16 link to CPU

Front-End Link eXchange (FELIX): an FPGA-based PCI Express card

FELIX 2.0 card in pre-production

Currently used in sPHENIX test stands

• One can also view it as a 48-link network switch running a specialized network protocol with one FEE per link
• Data reduction on the FPGA, optimized for cost per link
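The funnel nature of the card is easy to quantify; an illustrative Python check of input versus PCIe output capacity (the full-load input is a hypothetical worst case, not a measured figure):

```python
# Illustrative check of the FELIX "funnel": the 48 input links can carry
# far more than the PCIe output, so data reduction must live on the FPGA.
in_gbps = 48 * 10   # 48 bi-directional 10 Gbps optical links
out_gbps = 100      # PCIe Gen3 x16 to the host CPU

print(f"{in_gbps} Gbps in vs. {out_gbps} Gbps out "
      f"-> ~{in_gbps / out_gbps:.1f}x reduction needed at full load")
```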


BNL EIC taskforce studies: https://wiki.bnl.gov/eic/index.php/Detector_Design_Requirements

Considering the case of 20+250 GeV e+p collisions at a luminosity of 10^34 cm^-2 s^-1
◦ 100× the p+p luminosity

Cross section: 54 µb
◦ Based on the EIC task-force simulation: /gpfs/mnt/gpfs04/sphenix/user/nfeege/data/copy_eic_data_PYTHIA_ep/TREES/pythia.ep.20x250.1Mevents.RadCor=0.root
◦ 0.1% of the p+p cross section

Collision rate: 500 kHz
◦ 10% of the p+p collision rate

Multiplicity: dN/dη ≈ 1 @ η = 0
◦ A fraction of the p+p multiplicity

Total particle rate: a few million particles/s
◦ sPHENIX Au+Au records 30M particles/s

The sPHENIX DAQ bandwidth may be able to accommodate the full EIC data without triggering (full streaming readout)
◦ Event forming and reconstruction in either the online or offline computing farm
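The quoted collision rate follows directly from rate = σ × L; a short Python check of the numbers (unit conversion only, nothing detector-specific):

```python
# The quoted collision rate follows from rate = sigma x luminosity.
# Unit conversion only: 1 barn = 1e-24 cm^2, so 54 ub = 5.4e-29 cm^2.
sigma_cm2 = 54e-6 * 1e-24   # 54 microbarn
lumi = 1e34                 # cm^-2 s^-1, top e+p luminosity

rate_hz = sigma_cm2 * lumi
print(f"collision rate: {rate_hz / 1e3:.0f} kHz")  # 540 kHz, i.e. ~500 kHz
```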


DIS collision @ Q^2 ~ 100 (GeV/c)^2


Multiplicity check for all particles
Minimum bias Pythia6 e+p, 20 GeV + 250 GeV, 53 µb cross section

BNL EIC taskforce studies: https://wiki.bnl.gov/eic/index.php/Detector_Design_Requirements

Based on the EIC task-force Pythia set: /gpfs/mnt/gpfs04/sphenix/user/nfeege/data/copy_eic_data_PYTHIA_ep/TREES/pythia.ep.20x250.1Mevents.RadCor=0.root


Raw data: 16 bits / MAPS hit

Raw data: 3×5 10-bit samples / TPC hit + headers (60 bits)

Raw data: 3×5 10-bit samples / GEM hit + headers (60 bits)
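For reference, the per-hit payloads implied by these formats; straight arithmetic in Python:

```python
# Per-hit payloads implied by the formats above (straight arithmetic).
maps_bits = 16                 # MAPS: 16 bits per hit
tpc_bits = 3 * 5 * 10 + 60     # 3x5 10-bit ADC samples + 60-bit headers
gem_bits = tpc_bits            # GEM hits use the same format

print(f"MAPS: {maps_bits} bits, TPC/GEM: {tpc_bits} bits per hit")  # 16 / 210
```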


Raw data: 31 × 14-bit samples / active tower + padding + headers ≈ 512 bits / active tower


Raw data: 31 × 14-bit samples / active tower + padding + headers ≈ 512 bits / active tower
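A quick check of the tower payload arithmetic (the ≈512-bit figure includes padding and headers whose exact split is not spelled out on these slides):

```python
# Tower payload check: 31 samples x 14 bits = 434 bits of ADC data;
# padding and headers (exact split not given here) round this up to
# ~512 bits, i.e. 64 bytes per active tower.
adc_bits = 31 * 14
print(f"{adc_bits} ADC bits -> ~512 bits with padding + headers")
```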


Obviously this is very preliminary, and evaluated at the top luminosity of 10^34 cm^-2 s^-1: tracker + calorimeter FEE output, signal only = 32 Gbps; with headers ~40 Gbps
◦ Minimal accounting to bring data out of the tracker + calorimeter FEE via fiber
◦ No noise, and no PID detectors

FEE output with headers: ~40 Gbps × 1.5 (+PID) × 2 (noise) ~ order of 100 Gbps (arithmetic sketched below)
◦ An earlier estimate by the BNL EIC taskforce for the BeAST detector: recording 2.5M tracks/s → 160 Gbps (T. Ljubicic), in the same ballpark
◦ The sPHENIX TPC accommodates 4 Tbps of FEE → counting-house traffic
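The ~100 Gbps headline is just the 40 Gbps signal-only estimate scaled by the two fudge factors quoted above; both factors are rough allowances, not measurements:

```python
# The ~100 Gbps headline is the 40 Gbps signal-only estimate scaled by
# two rough allowances from this slide (assumptions, not measurements).
base_gbps = 40        # tracker + calorimeter FEE output with headers
pid_factor = 1.5      # allowance for PID detectors
noise_factor = 2.0    # allowance for noise

print(f"~{base_gbps * pid_factor * noise_factor:.0f} Gbps")  # 120, order 100
```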

The sPHENIX TPC/MAPS DAQ is designed to record 100 Gbps @ 30M tracks/s
The sPHENIX DAQ may serve as a foundation for streaming readout at the EIC, recording all detector hits without the need for event building

The EIC detector's data rate for all collisions (~100 Gbps) seems to fit within the sPHENIX DAQ bandwidth for disk recording
◦ The online/offline computing farm performs collision ID, event association, and reconstruction

The 4 Tbps (FEE links) / 100 Gbps (disk) DAQ for the sPHENIX TPC/MAPS can provide a solid foundation that accommodates streaming readout of an EIC detector
◦ On-going: advanced R&D and beam tests
◦ FEE → fiber → PCIe streaming readout for TPC & MAPS FEE, consistent with ATLAS/ALICE/LHCb in 2020+

May become a section in the new LOI


sPHENIX uses a next-generation gate-less TPC for outer central tracking, requiring streaming-readout FEE by design

SAMPA ASIC: 20 MHz 10-bit ADC, hit wavelet forming on chip; 3.6 Tbps bi-directional fiber link to the counting house

Triggered readout for sPHENIX (saves 75% of physics data size); plan to also support full streaming readout via a firmware switch on the FELIX FPGA (for commissioning / testing)


sPHENIX calorimeter: T1044

INTT: T1439 sPHENIX Si strip test

MVTX: T1441 sPHENIX MAPS test

TPC prototype being prepared too
MAPS online monitoring with 4 staves in beam
Preliminary alignment: ~O(100 µm)


100 kHz collision rate with continuous DAQ trigger, in the TPC DAQ simulation

FEE → DAM limit: 6 Gbps × 8b/10b per FEE
Reference design rate: 1.9 Gbps, far lower than the limit
Max rate: 200 kHz + 48 rings → max 7.2 Gbps @ module 1
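For context, the per-FEE payload limit after 8b/10b line coding (8 data bits carried per 10 line bits):

```python
# Per-FEE payload limit after 8b/10b line coding: every 10 line bits
# carry 8 data bits, so a 6 Gbps link moves at most 4.8 Gbps of payload.
line_gbps = 6.0
payload_gbps = line_gbps * 8 / 10
print(f"per-FEE payload limit: {payload_gbps:.1f} Gbps")  # 4.8 Gbps
```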

[Plot legend: ― all collisions, ― triggered collisions]


Collect data from 600 bi-directional 4+ Gbps fiber links to the FEEs at a rate of 940 Gbps

Reduce data via triggering, clustering, and compression to ~80 Gbps

Driven to use large-FPGA-based data acquisition cards with multi-10 Gbps input and PCIe Gen3 output, hosted on commodity servers.

◦ Similar architecture adopted for the ATLAS, LHCb, and ALICE upgrades for 2020+

Default implementation takes advantage of the BNL-developed FELIX PCIe card

◦ 48 bi-directional 10 Gbps optical links; large FPGA: Xilinx UltraScale (XCKU115)

◦ PCIe Gen3 x16 link to the server, demonstrated to 101 Gbps (only ~10 Gbps required here)
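Two quick sanity checks on the DAM stage, using only the figures above: the overall reduction factor and a naive lower bound on the FELIX card count (naive because the real link mapping follows detector sectors rather than pure arithmetic):

```python
# Sanity checks on the DAM stage: overall reduction factor, and a naive
# lower bound on the FELIX card count (naive: real link mapping follows
# detector sectors rather than pure arithmetic).
import math

in_gbps, out_gbps = 940, 80
n_links, links_per_card = 600, 48

print(f"reduction factor: ~{in_gbps / out_gbps:.0f}x")            # ~12x
print(f"cards needed: >= {math.ceil(n_links / links_per_card)}")  # >= 13
```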

[Photos: FELIX v1.5 card, FELIX in server, MTP <-> LC breakout, FEE prototype]


[Diagram: FPHX-based triggered readout chain, PHENIX-style; data cables/bandwidths shown on this slide only]
◦ In the IR: ionizing hit → sensor → FPHX chip → HDI; 17k LVDS links, 3.2 Tb/s
◦ To the DAQ room: 768 fibers, 1.9 Tb/s
◦ 48 fibers to the PHENIX event builder / data storage; ~8 kHz trigger, triggered data to disks
◦ 8 fibers for PHENIX timing / trigger; also online display and standalone data (calibration, etc.)
