
Page 1


Status of Charge Readout Electronics and DAQ

10/11/2018

Page 2

ProtoDUNE-DP accessible cold front-end electronics and uTCA DAQ system (7680 channels)

Cryogenic ASIC amplifiers (CMOS 0.35 um), 16 channels, externally accessible:

• Working at 110 K at the bottom of the signal chimneys

• Cards fixed to a plug accessible from the outside

• Short cables keep the capacitance low; low noise at low temperature

Digital electronics at warm on the tank deck:

• Architecture based on the uTCA standard

• 1 crate per signal chimney, 640 channels per crate

• 12 uTCA crates, 10 AMC cards per crate, 64 channels per card
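
A quick consistency check of the quoted channel counts: 12 crates x 10 AMC cards/crate x 64 channels/card = 7680 channels, i.e. 10 x 64 = 640 channels per crate.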


Full accessibility is provided by the double-phase charge readout at the top of the detector

[Figure: signal chimney cross-section, from the uTCA crate at warm on the tank deck down to the CRP at cold, with the FE cards (16-channel ASICs, CMOS 0.35 um) mounted on insertion blades]

Page 3

Global uTCA DAQ architecture for ProtoDUNE-DP, integrated with the White Rabbit (WR) time and trigger distribution network

White Rabbit slave MCH nodes in the uTCA crates + WR system (time source, Grand Master, trigger system)

[Diagram: WR trigger server and front-end crates feeding the LV1 event builders over 7 + 6 10 GbE links; LV1 event builders feeding the LV2 event builders; online storage and processing farm with 20 GB/s disk bandwidth; all interconnected over a 40 GbE backbone]
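
As a rough bandwidth bookkeeping (assuming all 13 front-end links, 12 for the charge readout + 1 for the light readout, run at the full 10 Gbit/s line rate): 13 x 10 Gbit/s = 130 Gbit/s ≈ 16 GB/s of aggregate input to the LV1 builders, below the 20 GB/s disk bandwidth of the online storage.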

Page 4


Charge readout electronics components (R&D at IPNL since 2006, a long-standing effort aimed at producing low-cost electronics):

Analog cryogenic FE: cryogenic ASIC amplifiers DP-V3, 0.35 um CMOS; production performed at the beginning of 2016

64-channel FE cards with 4 cryogenic ASIC amplifiers: first batch of 20 cards (1280 channels) operational on the 3x1x1 since fall 2016; production of the remaining FE cards for the 6x6x6 launched in 2017: completed, batch of 120 cards for 4 CRPs fully tested

AMC digitization cards: uTCA 64-channel AMC digitization cards (2.5 MHz sampling, 12-bit output, 10 GbE connectivity); 20 cards operational on the 3x1x1 since fall 2016; production of the remaining AMC cards for the 6x6x6 launched in 2017: completed, batch of 120 cards for 4 CRPs fully tested

White Rabbit timing/trigger distribution system: components produced in 2016 for the entire 6x6x6; full system operational on the 3x1x1 since fall 2016

Main components (ASIC amplifiers, ADCs, FPGAs, IDT memories) already procured in 2015-2016; 3x1x1 pre-production batch in 2016

Page 5

uTCA crates and MCH for the 12 chimneys of the 6x6x6 delivered:

• 6 uTCA crates procured with Finnish funds, delivered and tested in April

• Additional 6 uTCA crates needed for the 4-CRP configuration procured with KEK funding, delivered in September

Low-noise power supply system for the analog FE electronics + filtering system procured by IPNL: Wiener MPOD MICRO2 LX LV crate (Mpod Micro, 800 W) + 2 MPV 4008I modules

Additional timing material from IPNL (trigger server + GPS + White Rabbit Grand Master, already used on the 3x1x1) to be installed in the White Rabbit cage

Page 6

Implementation of the NP02 back-end/storage/processing system

The NP02 back-end/storage/processing system consists of:

1. two levels of event-builder machines: two LV1 machines and four LV2 machines

2. the network infrastructure

3. the online storage/processing facility.

The task of the LV1 event builders is to receive the data flow from the front-end system (12 uTCA crates for the charge readout + 1 for the light readout, all connected to the back-end system through a private network based on Ethernet optical links operating at 10 Gbit/s).

Several LV2 event builders working in parallel put together the event halves, cluster them into data files of 3 GB each, and write these files to the high-performance distributed local EOS storage servers (20 GB/s bandwidth).
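
The two-level flow can be pictured with a minimal sketch (illustrative only: the function names and the file-size handling below are hypothetical, not the actual NP02 DAQ code):

TARGET_FILE_SIZE = 3 * 10**9          # LV2 builders close a multi-event file at ~3 GB

def lv1_build(drift_window_fragments):
    """LV1: merge the fragments from the uTCA crates of one detector half
    for the same drift window (one of the two LV1 machines also appends
    the light-readout data)."""
    return b"".join(drift_window_fragments[c] for c in sorted(drift_window_fragments))

def lv2_build(half_a, half_b):
    """LV2: join the two half-events into one complete event."""
    return half_a + half_b

def lv2_write_files(events):
    """LV2: cluster complete events into ~3 GB files destined for the EOS servers."""
    files, current, size = [], [], 0
    for ev in events:
        current.append(ev)
        size += len(ev)
        if size >= TARGET_FILE_SIZE:
            files.append(current)     # in the real system: write the file to EOS
            current, size = [], 0
    if current:
        files.append(current)
    return files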


Page 7


Details of ProtoDUNE dual-phase back-end architecture

Page 8

Network infrastructure designed by IPNL in collaboration with the Neutrino Platform and IT: 40 Gbit/s DAQ switch + 40/10 Gbit/s router procured by CERN; installation completed in January 2018

Back-end: the event builders are interconnected by a dedicated switch based on 40 GbE, with 80 Gbit/s of bandwidth connectivity per event builder

[Diagram: event builders EVB L1A, L1B and EVB L2A, L2B, L2C, L2D connected to the network backbone]

Dataflow switch: Brocade ICX7750-26Q (26 x 40 Gb/s)

NP02 router: Brocade ICX7750-48F (6 x 40 Gb/s + 48 x 10 Gb/s)

The NP02 router connects the back-end to the online storage facility, the online farm and the IT division via a dedicated 40 Gbit/s link

Network infrastructure backbone elements
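
A rough check of the backbone sizing: the six event builders at 80 Gbit/s each can drive at most 6 x 80 Gbit/s = 480 Gbit/s, well within the 26 x 40 Gbit/s = 1040 Gbit/s aggregate capacity of the dataflow switch.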


Page 9


Event builders

Level 1: data are transferred from the network to the RAM of two level-1 machines. The task of each machine is to put together the data from the uTCA crates for the same drift window, corresponding to half of the detector; one of the two machines also adds the light-readout data.

Two DELL R730 machines are used (384 GB RAM, 2 Intel R710 cards with 4 x 10 Gbit/s ports per card, 2 dual-port Mellanox ConnectX-3 40 Gb/s Ethernet QSFP+ cards, CPU: Intel Xeon Gold 5122, 3.6 GHz, 4 cores, 8 threads).

The two LV1 machines were procured with CERN funding.
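
For scale, a rough half-event size estimate using numbers quoted elsewhere in these slides (2.5 MHz sampling, 12-bit samples, a ~4 ms drift window, 3840 channels per detector half): 3840 channels x 10,000 samples x 1.5 bytes ≈ 58 MB per half-event, so the 384 GB of RAM can buffer several thousand drift windows during event building.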

Level 2: the data from the two LV1 event builders are sent via the network to four level-2 machines working in parallel. The task of each machine is to put together the two event halves into a complete event and to assemble multi-event files to be written to disk.

Four DELL R730 machines are used: they have similar specifications to the LV1 machines but need less connectivity (there is no need for the 8 x 10 Gbit/s input links) and less RAM for the event building (192 GB RAM, 2 Mellanox ConnectX-3 cards, CPU: Intel Xeon Gold 5122, 3.6 GHz, 4 cores).

The LV2 event builders were procured with CERN (2 machines) and KEK (2 machines) funding.

[Photo: DELL R730 event builders]

The event builders were delivered in August 2018, and installed and fully configured in September 2018

Page 10


High-bandwidth (20 GB/s) distributed EOS file system for the online storage facility

The system of storage servers recovered from CCIN2P3 was installed at CERN in September 2017. It includes 20 DELL R510 machines + 5 spares (72 TB per machine, up to 1.44 PB of total disk space for 20 machines); each machine has 10 Gbit/s connectivity.

The storage system operates under a distributed file system based on EOS (chosen after a long test campaign); it is capable of handling a data bandwidth of 20 GB/s.
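
These figures are mutually consistent: 20 servers x 72 TB = 1.44 PB of disk space, and 20 servers x 10 Gbit/s = 200 Gbit/s = 25 GB/s of aggregate network connectivity, above the 20 GB/s target bandwidth.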

The data files hosted in the online storage facility are moved to CERN IT over a dedicated 40 Gbit/s link.

9 PowerEdge R610 service units for the DAQ cluster, procured from CCIN2P3, were installed in May 2018.

[Photos: storage facility, 2 LV1 + 4 LV2 event builders, DAQ service machines]

The data flow to CERN IT has been extensively tested during different data challenges, also jointly with SP: a transfer rate of 20 Gb/s can be routinely achieved, with peak data flow reaching 35 Gb/s (April 2018).

Page 11

Online processing farm

~1k cores procured by CERN, installed in June 2017; 12 racks, 10 Gbit/s connectivity per rack

An additional 40 PowerEdge C6220 servers from CCIN2P3 were recently installed, more than doubling the total computing power of the online farm

The farm operates a system of batch queues for the submission of the online processing jobs

[Photos: online computing farm, new C6200 servers]

Page 12

[Photo: NP02 DAQ room racks: storage facility, event builders EVB L1A, L1B and EVB L2A-L2D, 9 DAQ service machines, router and switches]

DAQ service machines: 9 R610 servers provided by CCIN2P3, acting as metadata servers, configuration server, online processing server, batch management server ...

Page 13


FE analog and digital electronics and uTCA crates

High-bandwidth (20 GB/s) distributed EOS file system for the online storage facility

Storage servers recovered from CCIN2P3: 20 machines + 5 spares, installed at CERN in September 2017 (DELL R510, 72 TB per machine): up to 1.44 PB total disk space for 20 machines, 10 Gbit/s connectivity for each storage server

Online computing farm: ~1k cores procured by CERN, installed in June 2017; 12 racks, 10 Gbit/s connectivity per rack. An additional 40 PowerEdge C6200 servers from CCIN2P3 were recently installed, more than doubling the computing power of the online farm

DAQ back-end/online storage and processing facility network architecture:

Network infrastructure designed in collaboration with Neutrino Platform and IT: 40 Gbit/s DAQ switch + 40/10 Gbit/s router procured by CERN: installation completed in January 2018

Data challenge in April 2018: transfer rate to CERN central EOS steady 20 Gbit/s (peak 35 Gbit/s) over the 40 Gbit/s link to IT

9 Poweredge R610 service units of the DAQ cluster procured from CCIN2P3, installed in May 2018

DAQ back-end: 2 LV1 event builders + 2 LV2 event builders procured by CERN + 2 LV2 event builders procured by KEK: Installed in August 2018

[Photos: storage facility, 2 LV1 + 4 LV2 event builders, DAQ service machines, online computing farm, new C6200 servers]

All electronics/DAQ systems in hand for the entire 6x6x6 (4-CRP configuration)

Page 14

Cabling

Optical fiber connections for the DAQ for 13 uTCA crates + trigger server: 20 m connections on the cryostat roof and 40 m connections to the White Rabbit cage, then to the DAQ room (100 m multi-cable connector MPO/MTP + patch panels in the DAQ room and on the cryostat roof)

White Rabbit fiber connections between the cryostat roof and the White Rabbit cage (patch panel and 40 m multi-cable connector MPO/MTP)

LV filtering system and connections to the chimneys: up to 10 m shielded cable connections

VHDCI shielded cables from the chimneys to the uTCA crates (240 cables)

All funded and procured by IPNL for the 4-CRP configuration; the cabling material includes safe margins in length for the installation paths

Discussions two weeks ago with Filippo to start as soon as possible the installation of the optical fiber network for DAQ + WR (multi-cables and patch panels)

Page 15

Fiber patches, patch panels, MTP multi-fiber cables and optical transceivers from Complete Connect (UK)

[Diagram: fiber routing of the data and WR links from the cryostat roof and the WR cage to the EVBs in the DAQ room]

Page 16

Safe margins (20 m for the fibers, 10 m for the LV shielded cables), spares

[Photo: DAQ fiber patch panel]

Page 17

General LV distribution layout

[Diagram: power supply generating V1, V2, V3, V4, V5 → filter and distribution box → front-end units on the chimney warm flanges; multiconductor shielded cables connect V1-V5, GND + sense to the FE units]

• LV crate: Wiener MPOD MICRO2 LX 800 W with 2 MPV 4008I modules, at close distance

• 4 shielded MPV Sub-D 37 cables, 30 cm, for the high currents

• 10 m multiwire shielded cables to the FE units (chimney warm flanges)

Page 18

Example of a filtering/distribution box set up for the 3x1x1 detector prototype at CERN:

• 1 multi-wire shielded cable in input, connected to 3 power supplies generating V1, V2, V3, V4, V5

• 4 multi-wire shielded cables in output, connected to FE units, no sense

Currents per FE unit in the final detector are doubled with respect to the 3x1x1 (10 cards for the 3x1x1, 20 cards in the final detector)

The basic card in the FE unit can be replicated x3 in order to feed 3 groups of 4 chimneys

[Photo: filtering and distribution box, with the input V1-V5 and the output cables to the FE units]

Page 19

[Diagram: DAQ room, WR cage and cryostat roof]

100 m multi-cable connectivity from the DAQ room to the patch panel on the cryostat roof

40 m multi-cable for WR between the roof and the WR cage

WR cage hosting the trigger server + GPS + Grand Master

Page 20

Page 21

Event builders, network, GPS/White Rabbit Grand Master, WR trigger PC

Signal Chimneys and uTCA crates

6x6x6: 12 uTCA crates (120 AMCs, 7680 readout channels)

3x1x1: 4 uTCA crates (20 AMCs, 1280 readout channels)

System operational from November 2016 to March 2018

Page 22

DAQ system of 3x1x1

Page 23

For each beam trigger we can have on average ~70 cosmics overlapping the drift window after the trigger (these cosmics may have interacted with the detector in the 4 ms before the trigger or in the 4 ms after the trigger → chopped tracks, "belt conveyor" effect)
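
As a back-of-the-envelope check, ~70 cosmics in the 8 ms window (4 ms before + 4 ms after the trigger) correspond to a cosmic rate of roughly 70 / 8 ms ≈ 9 kHz crossing the detector at ground surface.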


Example of a cosmics-only event (in one of the views)

Typical event signature for ground-surface liquid Ar TPC operation

Page 24


Electronics for the Light readout (APC-OMEGA-LAPP)

Finalization in progress of the uTCA light-readout digitization boards (APC + LAPP), based on the Bittware S4AM kit (the same card as used in the charge-readout demonstrator made at IPNL in 2015); 9 readout channels per uTCA board, getting the timing information from WR and sending data to the DAQ via the uTCA network

Issue with firmware development: C. Santos (APC) will be leaving in December. Firmware support to be backed up by LAPP and eventually by IPNL (after completion of the CRO and DAQ installation)

Foreseen for the 10 kton: full integration of the trigger mezzanine card on the final uTCA AMC card derived from the charge readout produced for ProtoDUNE-DP/DUNE

36 cryogenic photomultipliers Hamamatsu R5912-02mod at the bottom of the tank, 1 PMT/m2

Assuming similar granularity, 720 PMTs for a 10 kton module

Development (APC-OMEGA) of the trigger card mezzanine based on the PARISROC2 ASIC. Mezzanine cards tested and produced

Page 25

During spills a continuous digitization of the light is needed in the ±4 ms around the trigger time (the light signal is instantaneous and keeps memory of the real arrival time of the cosmics)

Sampling can be as coarse as 400 ns, just to correlate with the charge readout: sum 16 samples at 40 MHz to get an effective 2.5 MHz sampling, as for the charge readout

The LRO card has to know spill / out-of-spill

Out of spill it can define self-triggered light triggers when "n" PMTs are over a certain threshold, and transmit their time-stamp over the WR
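
The two operations can be illustrated with a minimal sketch (the function names, thresholds and dummy data below are hypothetical, not the actual LRO firmware):

def downsample_16(samples_40mhz):
    """Sum non-overlapping groups of 16 samples taken at 40 MHz,
    giving an effective 2.5 MHz sampling (40 MHz / 16 = 2.5 MHz)."""
    n = len(samples_40mhz) // 16
    return [sum(samples_40mhz[16 * i : 16 * (i + 1)]) for i in range(n)]

def self_trigger(pmt_amplitudes, threshold, n_pmts):
    """Out of spill: fire a light trigger when at least n_pmts PMTs are above
    threshold (the trigger time-stamp would then be sent over WR)."""
    return sum(a > threshold for a in pmt_amplitudes) >= n_pmts

# Example usage with dummy numbers:
if __name__ == "__main__":
    waveform = [1] * 160                                      # 4 us of data at 40 MHz
    print(downsample_16(waveform))                            # 10 summed samples at 2.5 MHz
    print(self_trigger([5, 0, 7, 9], threshold=4, n_pmts=3))  # True: 3 PMTs above threshold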