
INFN/AExxx, LAL-xxx, SLAC-R-xxx

SuperB DetectorTechnical Design Report

Abstract

This report describes the technical design of the SuperB detector.


E. Grauges
Universitat De Barcelona, Fac. Fisica, Dept. ECM, Barcelona E-08028, Spain

G. Donvito, V. Spinoso
INFN Bari and Universita di Bari, Dipartimento di Fisica, I-70126 Bari, Italy

M. Manghisoni, V. Re, G. Traversi
INFN Pavia and Universita di Bergamo, Dipartimento di Ingegneria Industriale, I-24129 Bergamo, Italy

G. Eigen, D. Fehlker, L. Helleve
University of Bergen, Institute of Physics, N-5007 Bergen, Norway

A. Carbone, R. Di Sipio, A. Gabrielli, D. Galli, F. Giorgi, U. Marconi, S. Perazzini, C. Sbarra, V. Vagnoni, S. Valentinetti, M. Villa, A. Zoccoli
INFN Bologna and Universita di Bologna, Dipartimento di Fisica, I-40127 Bologna, Italy

C. Cheng, A. Chivukula, D. Doll, B. Echenard, D. Hitlin, P. Ongmongkolkul, F. Porter, A. Rakitin, M. Thomas, R. Zhu
California Institute of Technology, Pasadena, California 91125, USA

G. Tatishvili
Carleton University, Ottawa, Ontario, Canada K1S 5B6

R. Andreassen, C. Fabby, B. Meadows, A. Simpson, M. Sokoloff, K. Tomko
University of Cincinnati, Cincinnati, Ohio 45221, USA

A. Fella
INFN CNAF, I-40127 Bologna, Italy

M. Andreotti, W. Baldini, R. Calabrese, V. Carassiti, G. Cibinetto, A. Cotta Ramusino, A. Gianoli, E. Luppi, M. Munerato, L. Tomassetti
INFN Ferrara and Universita di Ferrara, Dipartimento di Fisica, I-44100 Ferrara, Italy

D. Stoker
University of California, Irvine, Irvine, California 92697, USA

O. Bezshyyko, G. Dolinska
Taras Shevchenko National University of Kyiv, Kyiv 01601, Ukraine

N. Arnaud, C. Beigbeder, F. Bogard, D. Breton, L. Burmistrov, D. Charlet, J. Maalmi, L. Perez Perez, V. Puill, A. Stocchi, V. Tocut, S. Wallon, G. Wormser
Laboratoire de l’Accelerateur Lineaire, IN2P3/CNRS, Universite Paris-Sud 11, F-91898 Orsay, France

D. Brown
Lawrence Berkeley National Laboratory, University of California, Berkeley, California 94720, USA

A. Calcaterra, R. de Sangro, G. Felici, G. Finocchiaro, P. Patteri, I. Peruzzi, M. Piccolo, M. Rama
Laboratori Nazionali di Frascati dell’INFN, I-00044 Frascati, Italy

S. Fantinel, G. Maron
Laboratori Nazionali di Legnaro dell’INFN, I-35020 Legnaro, Italy

E. Ben-Haim, G. Calderini, H. Lebbolo, G. Marchiori
Laboratoire de Physique Nucleaire et de Hautes Energies, IN2P3/CNRS, Universite Pierre et Marie Curie-Paris 6, F-75005 Paris, France

R. Cenci, A. Jawahery, D.A. Roberts
University of Maryland, College Park, Maryland 20742, USA

D. Lindemann, P. Patel, S. Robertson, D. Swersky
McGill University, Montreal, Quebec, Canada H3A 2T8

P. Biassoni, M. Citterio, V. Liberali, F. Palombo, A. Stabile, S. Stracka
INFN Milano and Universita di Milano, Dipartimento di Fisica, I-20133 Milano, Italy

A. Aloisio, G. De Nardo, A. Doria, R. Giordano, A. Ordine, S. Pardi, G. Russo, C. Sciacca
INFN Napoli and Universita di Napoli Federico II, Dipartimento di Scienze Fisiche, I-80126 Napoli, Italy

A.Y. Barniakov, M.Y. Barniakov, V.E. Blinov, V.P. Druzhinin, V.B. Golubev, S.A. Kononov, E. Kravchenko, A.P. Onuchin, S.I. Serednyakov, Y.I. Skovpen, E.P. Solodov
Budker Institute of Nuclear Physics, Novosibirsk 630090, Russia

M. Bellato, M. Benettoni, M. Corvo, A. Crescente, F. Dal Corso, C. Fanin, E. Feltresi, N. Gagliardi, M. Morandin, M. Posocco, M. Rotondo, R. Stroili
INFN Padova and Universita di Padova, Dipartimento di Fisica, I-35131 Padova, Italy

C. Andreoli, L. Gaioni, E. Pozzati, L. Ratti, V. Speziali
INFN Pavia and Universita di Pavia, Dipartimento di Elettronica, I-27100 Pavia, Italy

D. Aisa, M. Bizzarri, C. Cecchi, S. Germani, P. Lubrano, E. Manoni, A. Papi, A. Piluso, A. Rossi
INFN Perugia and Universita di Perugia, Dipartimento di Fisica, I-06123 Perugia, Italy

M. Lebeau
INFN Perugia, I-06123 Perugia, Italy, and California Institute of Technology, Pasadena, California 91125, USA

C. Avanzini, G. Batignani, S. Bettarini, F. Bosi, M. Ceccanti, A. Cervelli, A. Ciampa, F. Crescioli, M. Dell’Orso, D. Fabiani, F. Forti, P. Giannetti, M. Giorgi, S. Gregucci, A. Lusiani, P. Mammini, G. Marchiori, M. Massa, E. Mazzoni, F. Morsani, N. Neri, E. Paoloni, M. Piendibene, A. Profeti, G. Rizzo, L. Sartori, J. Walsh, E. Yurtsev
INFN Pisa, Universita di Pisa, Dipartimento di Fisica, and Scuola Normale Superiore, I-56127 Pisa, Italy

D.M. Asner, J.E. Fast, R.T. Kouzes
Pacific Northwest National Laboratory, Richland, Washington 99352, USA

A. Bevan, F. Gannaway, J. Mistry, C. Walker
Queen Mary, University of London, London E1 4NS, United Kingdom

C.A.J. Brew, R.E. Coath, J.P. Crooks, R.M. Harper, A. Lintern, A. Nichols, M. Stanitzki, R. Turchetta, F.F. Wilson
Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, United Kingdom

V. Bocci, G. Chiodi, R. Faccini, C. Gargiulo, D. Pinci, L. Recchia, D. Ruggieri
INFN Roma and Universita di Roma La Sapienza, Dipartimento di Fisica, I-00185 Roma, Italy

A. Di Simone
INFN Roma Tor Vergata and Universita di Roma Tor Vergata, Dipartimento di Fisica, I-00133 Roma, Italy

P. Branchini, A. Passeri, F. Ruggieri, E. Spiriti
INFN Roma Tre and Universita di Roma Tre, Dipartimento di Fisica, I-00154 Roma, Italy

D. Aston, M. Convery, G. Dubois-Felsmann, W. Dunwoodie, M. Kelsey, P. Kim, M. Kocian, D. Leith, S. Luitz, D. MacFarlane, B. Ratcliff, M. Sullivan, J. Va’vra, W. Wisniewski, W. Yang
SLAC National Accelerator Laboratory, Stanford, California 94309, USA

K. Shougaev, A. Soffer
School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel

F. Bianchi, D. Gamba, G. Giraudo, P. Mereu
INFN Torino and Universita di Torino, Dipartimento di Fisica Sperimentale, I-10125 Torino, Italy

G. Dalla Betta, G. Fontana, G. Soncini
INFN Padova and Universita di Trento, ICT Department, I-38050 Trento, Italy

M. Bomben, L. Bosisio, P. Cristaudo, G. Giacomini, D. Jugovaz, L. Lanceri, I. Rashevskaya, G. Venier, L. Vitale
INFN Trieste and Universita di Trieste, Dipartimento di Fisica, I-34127 Trieste, Italy

R. Henderson
TRIUMF, Vancouver, British Columbia, Canada V6T 2A3

J.-F. Caron, C. Hearty, P. Lu, R. So
University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z1

P. Taras
Universite de Montreal, Physique des Particules, Montreal, Quebec, Canada H3C 3J7

A. Agarwal, J. Franta, J.M. Roney
University of Victoria, Victoria, British Columbia, Canada V8W 3P6


Contents

1 Electronics, Trigger, DAQ and Online
  1.1 Overview of the Architecture
    1.1.1 Trigger Strategy
    1.1.2 Trigger Rates and Event Size Estimation
    1.1.3 Dead Time and Buffer Queue Depth Considerations
  1.2 Trigger and Event Data Chain
    1.2.1 Level-1 Trigger
    1.2.2 Fast Control and Timing System
    1.2.3 Control Links
    1.2.4 Common Front-End Electronics
    1.2.5 Data Links
    1.2.6 Read-out Modules
    1.2.7 Network Event Builder
    1.2.8 High Level Trigger Farm
    1.2.9 Data Logging
  1.3 Support Systems
    1.3.1 Experiment Control System
    1.3.2 Event Data Quality Monitoring and Display
    1.3.3 Run Control System
    1.3.4 Detector Control System
    1.3.5 Other Components
    1.3.6 Software Infrastructure
  1.4 R&D
  1.5 Conclusions
  1.6 Organizational Structure of Electronics, Trigger, DAQ and Online

2 Electronics
  2.1 Electronics overview
  2.2 Common components
    2.2.1 Clock, Control and Data Links
    2.2.2 FCTS Links
    2.2.3 Data Links
    2.2.4 Common Front-End Electronics
    2.2.5 Power supplies (?)
    2.2.6 Grounding and Shielding (?)
    2.2.7 Cable Plant (?)
  2.3 Subsystem-specific Electronics
    2.3.1 SVT Electronics
    2.3.2 DCH Electronics
    2.3.3 PID Electronics
    2.3.4 EMC Electronics
    2.3.5 IFR Electronics
    2.3.6 Level-1 Trigger Electronics

1 Electronics, Trigger, DAQ and Online

Breton/Marconi/Luitz, Pages: 7-10

1.1 Overview of the Architecture

SL + DB + UM Note: To be rewritten in a less tentative and more “absolute” style, i.e. “the design is”, but include lessons learned from BaBar and LHC.

The architecture proposed for the SuperB [1] Electronics, Trigger, Data acquisition and Online systems (ETD) has evolved from the BABAR architecture, informed by the experience gained from running BABAR [2] and building the LHC experiments [3], [4], [5]. The detector side of the system is synchronous and all sub-detector readouts are now triggered, leading to improved reliability and uniformity. In SuperB, standard links like Ethernet are the default; custom hardware links are only used where necessary. The potential for high radiation levels makes it mandatory to design radiation-safe on-detector electronics.

Figure 1.1 shows an overview of the trigger and the event data chain:

A first-level hardware trigger uses dedicated data streams of reduced primitives from the sub-detectors and provides decisions to the fast control and timing system (FCTS), which is the centralized bandmaster of the system. The FCTS distributes the clock and the fast commands to all elements of the architecture and controls the readout of the events in the detector front-end electronics (FEE). The FEE send event fragments to the Readout Modules (ROMs), which perform a first-stage event build and send the partially constructed events through a network event builder to the processing farm running a high-level trigger (HLT). The HLT acts on complete events and reduces the data stream to an acceptable rate for permanent logging.

The ETD includes all the elements in the architecture: the FCTS, sub-detector-specific and common parts (CFEE) of the front-end electronics for data readout and control, the Level 1 hardware trigger, the ROMs, the Experiment Control System (ECS), and the various links that interconnect these components.

The trigger, data acquisition and Online components of the ETD system are described in this chapter; the common and subdetector-specific components of the electronics are described in the next chapter.

1.1.1 Trigger Strategy

PB + UM + SL

The BABAR and Belle [6] experiments both chose to use “open triggers” that preserved nearly 100% of BB events of all topologies, and a very large fraction of τ+τ− and cc events. This choice enabled very broad physics programs at both experiments, albeit at the cost of a large number of events that needed to be logged and reconstructed, since it was so difficult to reliably separate the desired signals from the qq (q = u, d, s) continuum and from higher-mass two-photon physics at trigger level. The physics program envisioned for SuperB requires very high efficiencies for a wide variety of BB, τ+τ−, and cc events, and depends on continuing the same strategy, since few classes of the relevant decays provide the kinds of clear signatures that allow the construction of specific triggers.

All levels of the trigger system should be designed to permit the acquisition of prescaled samples of events that can be used to measure the trigger performance.
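The prescaling idea can be made concrete with a minimal sketch: alongside the normal accept path, keep every Nth event that the trigger would otherwise reject, so the trigger efficiency can be measured on an unbiased sample. The class and its interface below are purely illustrative, not part of the trigger specification:

```python
class Prescaler:
    """Accept every Nth event that the main trigger would reject,
    so trigger efficiency can be measured on an unbiased sample."""

    def __init__(self, factor):
        self.factor = factor  # prescale factor N (illustrative)
        self.count = 0

    def accept(self, main_trigger_decision):
        if main_trigger_decision:
            return True          # normal trigger accept path
        self.count += 1
        if self.count == self.factor:
            self.count = 0
            return True          # prescaled accept of a rejected event
        return False

# With N = 100, one in every 100 rejected events is still recorded.
ps = Prescaler(100)
kept = sum(ps.accept(False) for _ in range(10_000))
assert kept == 100
```

In hardware this would be a simple counter per trigger line; the point is only that the prescale bookkeeping is stateful and must live at each trigger level.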

The trigger system consists of the following components¹:

¹ While at this time we do not foresee a “Level 2” trigger that acts on partial event information in the data path, the data acquisition system architecture would allow the addition of such a trigger stage at a later time, hence the nomenclature.


Figure 1.1: Overview of the Trigger and Data Chain

Level 1 (L1) Trigger: A synchronous, fully pipelined L1 trigger receives continuous data streams from the detector independently of the event readout and delivers readout decisions with a fixed latency. While we have yet to conduct detailed trigger studies, we expect the L1 trigger to be similar to the BABAR L1 trigger, operating on reduced-data streams from the drift chamber and the calorimeter. We will study the possibilities of improving the L1 trigger performance by including SVT information, taking advantage of larger FPGAs, faster drift chamber sampling, the faster forward calorimeter, and improvements to the trigger readout granularity of the EMC.

High Level Triggers (HLT), Level 3 (L3) and Level 4 (L4): The L3 trigger is a software filter that runs on a commodity computer farm and bases its decisions on specialized fast reconstruction of complete events. An additional “Level 4” filter may be implemented to reduce the volume of permanently recorded data if needed. Decisions by L4 would be based on a more complete event reconstruction and analysis. Depending on the worst-case performance guarantees of the reconstruction algorithms, it might become necessary to decouple this filter from the near-realtime requirements of L3, hence its designation as a separate stage.

1.1.2 Trigger Rates and Event Size Estimation

The present L1-accept rate design standard is 150 kHz. It has been increased from the SuperB CDR [1] design of 100 kHz to allow more flexibility and add headroom, both to accommodate the possibility of higher backgrounds than design (e.g. during machine commissioning), and the possibility that the machine might exceed its initial design luminosity of 10³⁶ cm⁻² sec⁻¹.

The event size estimates still have large uncertainties. Raw event sizes (between front-end electronics and ROMs) are understood well enough to determine the number of fibres required. However, neither the algorithms that will be employed in the ROMs for data size reduction (such as zero suppression or feature extraction) nor their specific performance for event size reduction are yet known. Thus, while the 75 kbyte event size extrapolated from BABAR for the CDR remains our best estimate, the event size could be significantly larger due to new detector components such as Layer 0 of the SVT and/or the forward calorimeter. In this document we will use a 150 kHz L1-accept rate and 100 kbytes per event as the baseline.

Note: Consider increasing the event size to 150-200 kbytes to take into account the requirement for permanent storage of EMC barrel waveforms.
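The baseline numbers above fix the aggregate bandwidth into the event builder by simple arithmetic. The following back-of-the-envelope calculation is a sketch only; it ignores protocol overheads and safety factors:

```python
l1_accept_rate_hz = 150e3   # baseline L1-accept rate (150 kHz)
event_size_bytes = 100e3    # baseline event size (100 kbytes)

# Aggregate data rate from the ROMs into the event building network:
aggregate_bps = l1_accept_rate_hz * event_size_bytes
print(f"{aggregate_bps / 1e9:.0f} GB/s")  # 15 GB/s

# At the 150-200 kbyte event size contemplated in the note above,
# the aggregate would grow to 22.5-30 GB/s.
larger_bps = l1_accept_rate_hz * 200e3
print(f"{larger_bps / 1e9:.0f} GB/s")  # 30 GB/s
```

The same product also scales linearly under the luminosity-upgrade scenario discussed below: four times the rate-size product means four times the network and logging bandwidth.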

With the prospect of future luminosity upgrades up to 4 times the initial design luminosity, and the associated increases in event size and rate, we also must define the system upgrade path, including which elements need to be designed upfront to facilitate such an upgrade, which can be deferred until a later time, and, ultimately, what the associated costs would be.

1.1.3 Dead Time and Buffer Queue Depth Considerations

SL + DB + UM

The readout system is designed to handle an average rate of 150 kHz and to absorb the expected instantaneous rates, both without incurring dead time of more than 1% under normal operating conditions at design luminosity. Dead time is generated and managed centrally by the FCTS, which will drop valid L1 trigger requests that would not fit into the readout system's envelope for handling of average or instantaneous L1 trigger rates. The average rate requirement determines the overall system bandwidth; the instantaneous trigger rate requirement affects the FCTS, the data extraction capabilities of the front-end electronics, and the depth of the derandomization buffers. The minimum time interval between bunch crossings at design luminosity is about 2.1 ns, so short in comparison to detector data collection times that we assume “continuous beams” for the purposes of trigger and FCTS design. Therefore, the burst handling capabilities (minimum time between triggers and maximum burst length) required to achieve the dead time goal are dominated by the capability of the L1 trigger to separate events in time and by the ability of the trigger and readout systems to handle events that are partially overlapping in space or time (pile-up, accidentals, etc.). Detailed detector and trigger studies are needed to determine these requirements.
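The dependence of dead time on derandomizer depth can be explored with a toy queueing model: Poisson L1 accepts at the average rate, a fixed per-event readout time, and a FIFO that vetoes triggers when full. All parameters below (the 5 µs readout time, the FIFO depths) are illustrative assumptions, not design values; the detailed studies called for in the text would replace this with measured distributions:

```python
from collections import deque
import random

def dead_time_fraction(rate_hz, readout_time_s, fifo_depth,
                       n_events=200_000, seed=1):
    """Toy M/D/1/K model: Poisson triggers, fixed readout time per event,
    triggers vetoed (dead time) whenever the derandomizing FIFO is full."""
    rng = random.Random(seed)
    t = 0.0
    departures = deque()  # completion times of events still in the FIFO
    lost = 0
    for _ in range(n_events):
        t += rng.expovariate(rate_hz)          # next Poisson trigger
        while departures and departures[0] <= t:
            departures.popleft()               # event fully read out
        if len(departures) >= fifo_depth:
            lost += 1                          # FIFO full: trigger vetoed
        else:
            start = departures[-1] if departures else t
            departures.append(start + readout_time_s)
    return lost / n_events

# Illustrative: 150 kHz average rate, assumed 5 us readout per event.
shallow = dead_time_fraction(150e3, 5e-6, fifo_depth=4)
deep = dead_time_fraction(150e3, 5e-6, fifo_depth=16)
assert deep < shallow  # deeper derandomizers absorb bursts better
```

The qualitative conclusion matches the text: for a given bandwidth, the dead-time goal is met by choosing the derandomizer depth against the burst statistics, not against the average rate alone.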

1.2 Trigger and Event Data Chain

The systems in the trigger and event data chain manage event selection, event filtering, and the end-to-end data flow from the detector to the intermediate buffer.

Note: The previous sentence needs better wording. We try to avoid the traditional architectural distinction between “TDAQ” (or Odf in BaBar) and “Online”. The idea is to design and present the system architecture as an integrated end-to-end “push” data flow chain all the way from the detector to data logging. A number of auxiliary systems support the primary data logging function.

1.2.1 Level-1 Trigger

PB (+ SL?)

The current baseline for the L1 trigger is to reimplement the BABAR L1 trigger with state-of-the-art technology. It would be a synchronous machine running at 59 MHz (or multiples of 59 MHz) that processes primitives produced by dedicated electronics located on the front-end boards or other dedicated boards of the respective sub-detector. The raw L1 decisions are sent to the FCTM boards, which apply a throttle if necessary and then broadcast them to the whole experiment. The standard chosen for the crates and backplanes would most likely be either ATCA for Physics (Advanced Telecommunications Computing Architecture) or a fully custom architecture.

The main elements of the L1 trigger are shown in Fig. 1.3 (see [8] for detailed descriptions of the BABAR trigger components):

Drift chamber trigger (DCT): The DCT consists of a track segment finder (TSF), a binary link tracker (BLT), and a pT discriminator (PTD).

Electromagnetic Calorimeter Trigger (EMT): The EMT processes the trigger output from the calorimeter to find clusters.

Global Trigger (GLT): The GLT processor combines the information from DCT and EMT (and possibly other inputs, such as an SVT trigger or a Bhabha veto) and forms a final trigger decision that is sent to the FCTS.

Note: Update this with the current L1 triggerdesign proposal

We will study the applicability of this baseline design at SuperB luminosities and backgrounds, and will investigate improvements, such as adding a Bhabha veto or using SVT information in the L1 trigger. We will also study faster sampling of the DCH and the new fast forward calorimeter.


Figure 1.2: Overview of Level-1 Trigger, FCTS, FEE and ROMs. (Note: Figure needs to be updated!)

Figure 1.3: Level 1 Trigger Overview

In particular, for the barrel EMC we will need to study how the L1 trigger time resolution can be improved and the trigger jitter reduced compared to BaBar. In general, improving the trigger event time precision should allow a reduction in readout window and raw event size. The L1 trigger may also be improved using larger FPGAs (e.g. by implementing tracking or clustering algorithm improvements, or by exploiting better readout granularity in the EMC).

L1 Trigger Latency: The BABAR L1 trigger had a latency of 12 µs. However, since the size, and cost, of the L1 data buffers in the sub-detectors scale directly with trigger latency, it should be substantially reduced, if possible. L1 trigger latencies of the much larger, more complex ATLAS, CMS and LHCb experiments range between 2 and 4 µs; however, these experiments only use fast detectors for triggering. Taking into consideration that the DCH adds an intrinsic dead time of about 1 µs, and adding some latency reserve for future upgrades, we are currently estimating a total trigger latency of 6 µs (or less). More detailed engineering studies will be required to validate this estimate.
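The scaling of buffer size with latency is direct to estimate: a synchronous front end must retain every sample for the full trigger latency. A quick sketch, assuming samples are held at the 59 MHz system clock (actual per-sub-detector sampling rates will differ):

```python
clock_hz = 59e6          # assumed sampling/system clock (59 MHz)
superb_latency_s = 6e-6  # currently estimated total L1 latency
babar_latency_s = 12e-6  # BABAR L1 latency, for comparison

# Samples each channel must hold while awaiting the L1 decision:
superb_depth = round(clock_hz * superb_latency_s)
babar_depth = round(clock_hz * babar_latency_s)
print(superb_depth, babar_depth)  # 354 708
```

Halving the latency relative to BABAR thus halves the per-channel L1 buffer depth (354 vs. 708 samples under this assumption), which is the cost argument made in the text.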

Monitoring the Trigger: To debug and monitor the trigger, and to provide cluster and track seed information to the higher trigger levels, trigger information supporting the trigger decisions is read out on a per-event basis through the regular readout system. In this respect, the low-level trigger acts like just another sub-detector.

1.2.2 Fast Control and Timing System

DC + CB + DB + a little bit of SL

The Fast Control and Timing System (FCTS, Fig. 1.4) manages all elements linked to clock, trigger, and event readout, and is responsible for partitioning the detector into independent sub-systems for testing and commissioning.

The FCTS will be implemented in a crate where the backplane can be used to distribute all the necessary signals in point-to-point mode. This permits the delivery of very clean synchronous signals to all boards, avoiding the use of external cables. The Fast Control and Timing Module (FCTM, shown in Fig. 1.5) provides the main functions of the FCTS:

Clock and Synchronization: The FCTS synchronizes the experiment with the machine and its bunch pattern, distributes the clock throughout the experiment, buffers the clock, and generates synchronous reset commands.

Note: We still need to discuss “synchronization” in more detail; some potential topics:

• Fixed-latency implementation in FPGA (verify): we can't forget this!

• Timing of the various delays during commissioning, so that the analog signals are placed correctly within the readout windows. Where is the adjustable delay, on the sender side or the receiver side (i.e. the sub-detector)? DC proposes the receiver (it is a per-subdetector issue; one could also potentially adjust the latency buffer depth in the FEE and adjust the phase). What hardware support is needed in the CFEE to measure the offset? Is there a common implementation? Probably not; the FEE will have to deal with this problem.

• The ability to reset all clock dividers and state machines in the (C)FEE at system initialization, to ensure that divided versions of the 59 MHz global clock and the state machines are also “in phase”. This probably needs some sort of a “reset” or “sync” command to be broadcast to all FEE in a partition via the clock/command link.

• A protocol to detect, report and recover from “loss of lock” in the CFEE. Should the return path of the command link (also used for the fast throttle) be used for reporting? Can we recover an individual channel / channel group, or do we need to globally reset / resync the detector?

Trigger Handling: The FCTS receives the raw L1 trigger decisions, throttles them as necessary, and broadcasts them to the sub-detectors.

Note: We should somewhere have a brief discussion of the command word length and content. To consider:

• The number of bits available in the command word (quite limited on a 1 Gbit/s link given the dead time constraints; 2 Gbit/s would be much more comfortable!)

• The content of the command word and related issues: Do we broadcast information derived from the trigger type (probably at least a few bits)? If yes, how many bits? The full trigger information can be included in the event by reading out the L1 trigger processors. How many bits for the HLT node address (10-12 is probably about right)? How many bits for an event tag? How do we reconstitute / construct a globally unique event identifier at the ROM level if we don't have enough bits available in the command word? A separate link? Ethernet?

• Can we “reuse” the HLT node address as part of the event tag (thus saving some bits)? Probably not, since we need to support running with a very small number of HLT nodes.
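The trade-offs in the note above can be framed as a simple bit budget. The field widths below are placeholders of our own (only the 10-12 bit HLT node address is suggested by the text), shown just to make the link-occupancy arithmetic concrete:

```python
# Hypothetical command-word fields, widths in bits (placeholders).
fields = {
    "fast command": 4,
    "trigger type": 4,       # "at least a few bits"
    "HLT node address": 12,  # "10-12 is probably about right"
    "event tag": 12,
}
word_bits = sum(fields.values())

# Payload rate of a 1 Gbit/s link with 8b/10b line encoding:
payload_bps = 1e9 * 8 / 10
word_time_s = word_bits / payload_bps
print(word_bits, f"{word_time_s * 1e9:.0f} ns")  # 32 bits, 40 ns per word
```

Even before framing overhead, a 32-bit word occupies about 40 ns of line time on a 1 Gbit/s link; doubling the link rate halves this, which is why 2 Gbit/s is "more comfortable" against the dead-time constraints.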

Note: We need a discussion of the implementation of the fast throttle. Some issues:

• The FCTS needs one bit per subdetector to do the fast throttle


Figure 1.4: Fast Control and Timing System

• Aggregation of throttles from different front-end boards within one subdetector is not subdetector-specific

• For diagnostic purposes it is mandatory that we record which front-end board throttled when (a “logic analyzer on the throttle lines”)

• We need to define the throttle latency (which depends on the derandomizer depth); it should be as short as possible in order to utilize the derandomizer as efficiently as possible (i.e. where do we set the “almost full” level?)
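The “almost full” level has to leave room for every trigger that can still arrive while the throttle propagates. A minimal sizing rule, with all numeric inputs as placeholders rather than design values:

```python
import math

def almost_full_threshold(fifo_depth, throttle_latency, min_trigger_spacing):
    """FIFO fill level at which the fast throttle must assert, leaving
    headroom for triggers already in flight during the throttle latency.
    Both times must be given in the same (arbitrary) unit."""
    in_flight = math.ceil(throttle_latency / min_trigger_spacing)
    return fifo_depth - in_flight

# Placeholder numbers: a 16-deep derandomizer, 2 us throttle latency,
# triggers no closer than 0.5 us: assert "almost full" at 12 entries,
# reserving room for up to 4 in-flight triggers.
print(almost_full_threshold(16, 2.0, 0.5))  # 12
```

The rule makes the bullet's point explicit: a slower throttle forces a lower threshold and therefore wastes derandomizer depth, so the throttle path should be as fast as possible.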

Calibration and Commissioning: The FCTS can trigger the generation of calibration pulses and flexibly programmable local triggers for calibration and commissioning.

Note: How do we handle the EMC source calibration, where by definition a meaningful global calibration trigger cannot be constructed?

Event Handling: The FCTS generates event identifiers, manages the event routing, and distributes event routing information to the ROMs. It also keeps a trace of all of its activity, including an accounting of triggers lost due to dead time or other sources of throttling, and event-linked data that needs to be included with the readout data.

The FCTS crate includes as many FCTM boards as required to cover all partitions. One FCTM will be dedicated to the unused sub-systems in order to provide them with the clock and the minimum necessary commands.

Two dedicated switches are required in order to be able to partition the system into independent sub-systems or groups of sub-systems. One switch distributes the clock and commands to the front-end boards; the other collects throttling requests from the readout electronics or the ECS. These switches can be implemented on dedicated boards, connected with the FCTMs, and need to receive the clock. To reduce the number of connections between ROM crates and the global throttle switch board, throttle commands could be combined at the ROM crate level before sending them to the global switch.


ECS Interface

Trigger

Generator

Trigger rate

controller

Trigger Type

Command

Broadcaster

Link encoderEvent-linked data

Throttles ECS L1 Clock

To ROM To FEE and ROM

IP destination

Broadcaster

Ethernet

FCTM

Figure 1.5: Fast Control and Timing Module

the global throttle switch board, throttle com-mands could be combined at the ROM cratelevel before sending them to the global switch.

Note: Describe fast throttle (from FEE to FCTS) here.

The FCTM also manages the distribution of events to the HLT farm for event building, deciding the destination farm node for every event. There are many possible implementations of the event building network protocol and the routing of events based on availability of HLT farm machines, so at this point we can provide only a high-level description. We strongly prefer to use the FCTS to distribute event routing information to the ROMs because it is simple and provides natural synchronization. Management of event destinations and functions such as bandwidth management for the event building network or protocols to manage the event distribution based on the availability of farm servers can then be implemented in FCTM firmware and/or software.
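A minimal sketch of such event routing, assuming a simple round-robin policy over currently available farm nodes (all names and the policy itself are hypothetical, one of the many possible implementations mentioned above):

```python
from collections import deque

class DestinationBroadcaster:
    """Sketch of FCTM event-routing logic: assign each L1-accepted
    event to an available HLT farm node, round-robin."""

    def __init__(self, nodes):
        self.available = deque(nodes)   # nodes currently able to accept events

    def assign(self, event_id):
        # Rotate through the available nodes so the load spreads evenly.
        node = self.available[0]
        self.available.rotate(-1)
        return (event_id, node)         # routing info broadcast to all ROMs

    def mark_busy(self, node):
        # A node reporting "almost full" is removed until it recovers.
        self.available.remove(node)

    def mark_free(self, node):
        self.available.append(node)

router = DestinationBroadcaster(["hlt01", "hlt02", "hlt03"])
assignments = [router.assign(eid) for eid in range(4)]
# Every ROM sends the fragments of event eid to the same node: this is
# the natural synchronization that FCTS-distributed routing provides.
```

Because the assignment is made centrally and broadcast to all ROMs, no inter-ROM agreement protocol is needed, which is the simplicity argument made in the text.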

“Continuation events” to deal with pile-up could be reconstituted either in the ROMs or in the high-level trigger farm, but we strongly prefer to reconstitute them in the ROMs. Doing this in the trigger farm would complicate the event builder and require the FCTS to maintain event state and adjust the event routing to send all parts of a continuation event to the same HLT farm node.

1.2.3 Control Links

AA

Note: per agreement at Fall 2011 CERN workshop

1.2.4 Common Front-End Electronics

DB + ?

Figure 1.6: Common Front-End Electronics

Common Front-End Electronics (CFEE) designs and components allow us to exploit the commonalities between the sub-detector electronics and avoid separate design and implementation of common functions for each sub-detector.

In our opinion, the separate functions required to drive the FEE should be implemented in dedicated independent elements. These elements will be mezzanines or circuits directly mounted on the front-end modules (which act as carrier boards) and will be standardized across the sub-systems as much as possible. For instance, as shown in Fig. 1.6, one mezzanine can be used for FCTS signal and command decoding, and one for ECS management. To reduce the number of links, it may be possible to decode the FCTS and ECS signals on one mezzanine and then distribute them to the neighbouring boards.

Common dedicated control circuitry inside a radiation-tolerant FPGA may also drive the L1 buffers. It would handle the L1 accept commands and provide the signals necessary to manage the data transfers between latency buffers, derandomizer buffers, and the fast multiplexers feeding the optical link serializers. If required by the system design, it would also provide logic for special treatment of pile-up events and/or for extending the readout window back in time after a Bhabha event has been rejected.

The latency buffers can be implemented either in the same FPGA or directly on the carrier boards. One such circuit can drive numerous data links in parallel, thus reducing the amount of electronics on the front-end.

One intriguing possible advantage of this approach is that analog L1 buffers might be implemented in an ASIC, though the analog output of the ASIC must then be able to drive an internal or external ADC that samples the signal.

Serializers and optical link drivers will also reside on carrier boards, mainly for mechanical and thermal reasons. Fig. 1.6 shows a possible implementation of the L1 buffers, their control electronics (in a dedicated FPGA), and their outputs to the optical readout links.

Note: The serializer clock is not necessarily identical to the front-end clock. The derandomizer buffer input is synchronized with the latency buffer, while its output runs at the link clock speed. This allows a flexible choice and upgrade of data link technology. The derandomizer could be located on the link carrier board.

All (rad-tolerant) FPGAs in the FEE have to be reprogrammable without dismounting a board. This could be done through dedicated front panel connectors, which might be linked to numerous FPGAs, but it would be preferable if the reprogramming could be done through the ECS without any manual intervention on the detector side.

Sampling of the analog signals in the FEEs is done with the global clock or a clock signal derived from the global clock (typically by dividing it down). To maintain the timing required by the fixed latency design, the latency buffers in the FEEs must be read with the same sampling frequency as they are written. In addition, when initializing the FEE boards, care must be taken that all dividers are reset synchronously with those of the first level trigger (by a global signal) in order to maintain a constant phase between them.
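The fixed-latency behaviour described above can be modelled in a few lines (an illustrative software model, not firmware): a circular buffer written and read with the same clock, with the read pointer trailing the write pointer by a constant number of samples.

```python
class FixedLatencyBuffer:
    """Model of a latency buffer read with the same clock it is written
    with: the read pointer trails the write pointer by a constant number
    of samples, so the latency is fixed by construction."""

    def __init__(self, depth, latency):
        assert latency < depth
        self.mem = [None] * depth
        self.depth = depth
        self.latency = latency
        self.t = 0  # sample counter, advanced by the (divided) global clock

    def tick(self, sample):
        # One write and one read per clock tick: equal rates in and out.
        self.mem[self.t % self.depth] = sample
        out = None
        if self.t >= self.latency:
            out = self.mem[(self.t - self.latency) % self.depth]
        self.t += 1
        return out

buf = FixedLatencyBuffer(depth=8, latency=3)
outputs = [buf.tick(s) for s in range(6)]
# outputs == [None, None, None, 0, 1, 2]: each sample reappears exactly
# `latency` ticks later, independent of any trigger activity.
```

If two such buffers ran on dividers reset at different times, their `t` counters would differ by a constant offset, which is precisely the phase error the synchronous divider reset described above is meant to prevent.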

1.2.5 Data Links

AA

Note: per agreement at Fall 2011 CERN workshop

1.2.6 Read-out Modules

MB + UM

Figure 1.7: Readout Module

The Readout Modules (ROM, Fig. 1.7) receive event fragments from the sub-detectors’ front-end electronics, tag them with front-end identifiers and absolute time-stamps, buffer them in de-randomizing memories, perform processing (still to be defined) on the fragment data, and eventually inject the formatted fragment buffers into the event builder and HLT farm. Connected to the front-end electronics via optical fibres, they will be located in an easily accessible, low radiation area.

A modular approach will maximize standardization across the system to simplify development and keep costs low—different sub-detector requirements can then be accommodated by using sub-detector-specific “personality modules”.

On the ROM boards, signals from optical receivers mounted on mezzanine cards will be routed to the de-serializers (in commercial FPGAs) where data processing can take place. Special requirements from the sub-detector systems will be accommodated by custom-built mezzanines mounted on common carriers. One of the mezzanine sites on the carrier will host an interface to the FCTS to receive global timing and trigger information. The carrier itself will host memory buffers and 1 Gbit/s or 10 Gbits/s links to the event building network.

A baseline of 8 optical fibres per card currently seems like a good compromise between keeping the number of ROM boards low and adding to their complexity. This density is sufficient so that there needs to be only one ROM crate per sub-detector, and corresponds nicely to the envisaged FCTS partitioning.

1.2.7 Network Event Builder

SL + GM

Note: To be rewritten — add more detail about the proposed node management and load balancing protocol — update the numbers.

Assuming an L1 trigger rate of 150 kHz and an event size of 75 kbytes, the bandwidth in the network event builder is about 12 Gbytes/s, which corresponds to about 120 Gbits/s with network overhead included. It seems prudent to retain an additional safety factor of ~2, given the event size uncertainty and the immaturity of the overall system design. Thus, we will take 250 Gbits/s as the baseline for the Online system input bandwidth.
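A back-of-envelope check of these numbers (rates and factors from the text; the overhead factor of 1.3 is an illustrative assumption consistent with the quoted 120 Gbits/s):

```python
# Event-builder bandwidth estimate from the figures quoted in the text.
l1_rate_hz   = 150e3          # L1 trigger rate
event_bytes  = 75e3           # event size
payload_gbps = l1_rate_hz * event_bytes * 8 / 1e9

overhead = 1.3                # assumed network/protocol overhead factor
safety   = 2.0                # safety factor of ~2 from the text

raw_gbps    = payload_gbps * overhead
design_gbps = raw_gbps * safety
print(round(payload_gbps), round(raw_gbps), round(design_gbps))
# ≈ 90 Gbits/s payload, ≈ 120 Gbits/s with overhead, ≈ 230-250 Gbits/s
# with the safety factor — consistent with the 250 Gbits/s baseline.
```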

The ROMs read out event fragments in parallel from sub-detector front-end electronics, buffering the fragments in deep de-randomizing memories. The event-related information is then transferred into the ROM memories and sent over a network to an event buffer in one of the machines of the HLT farm. This collection task, called event building, can be performed in parallel for multiple events, thanks to the depth of the ROM memories and the bandwidth of the event building network switch (preferably non-blocking). Because of this inherent parallelism, the building rate can be scaled up as needed (up to the bandwidth limit of the event building network). We expect to use 10 Gbits/s Ethernet as the basic technology of the event builder network.

1.2.8 High Level Trigger Farm

SL

The HLT farm needs to provide sufficient aggregate network bandwidth and CPU resources to handle the full Level 1 trigger rate on its input side. The Level 3 trigger algorithms should operate and log data entirely free of event time ordering constraints and be able to take full advantage of modern multi-core CPUs. Extrapolating from BABAR, we expect 10 ms core time per event to be more than adequate to implement a software L3 filter, using specialized fast reconstruction algorithms. With such a filter, an output cross-section of 25 nb should be achievable.
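The two figures above imply a rough farm size (a sketch; the cores-per-node figure is an illustrative assumption, not a design choice from the text):

```python
# Rough HLT farm sizing from the figures quoted in the text:
# 150 kHz L1 input rate and ~10 ms of core time per event.
l1_rate_hz  = 150e3
core_time_s = 10e-3
cores_needed = l1_rate_hz * core_time_s   # (events/s) x (core-s/event)
print(cores_needed)                        # ~1500 fully-occupied cores
# With, say, 16 cores per farm node (illustrative), ~100 nodes would be
# needed for L3 alone, before headroom for L4 and rate fluctuations.
```

Because the L3 algorithms are free of event time ordering constraints, this capacity can be spread over nodes and cores arbitrarily, which is why the multi-core argument in the text matters.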

To further reduce the amount of permanently stored data, an additional filter stage (L4) could be added that acts only on events accepted by the L3 filter. This L4 stage could be an equivalent (or extension) of the BABAR offline physics filter—rejecting events based either on partial or full event reconstruction. If the worst-case behavior of the L4 reconstruction code can be well controlled, it could be run in near real-time as part of, or directly after, the L3 stage. Otherwise, it may be necessary to use deep buffering to decouple the L4 filter from the near real-time performance requirements imposed at the L3 stage. The discussion in the SuperB CDR [1] about risks and benefits of an L4 filter still applies.

1.2.9 Data Logging

SL + ?

The output of the HLT is logged to disk storage. We assume at least a few Tbytes of usable space per farm node, implemented either as directly attached low-cost disks in a redundant (RAID) configuration, or as a storage system connected through a network or SAN. We do not expect to aggregate data from multiple farm nodes into larger files. Instead, the individual files from the farm nodes will be maintained in the downstream system, and the bookkeeping system and data handling procedures will have to deal with missing run contribution files. A switched Gigabit Ethernet network separate from the event builder network is used to transfer data asynchronously to archival storage and/or near-online farms for further processing. It is not yet decided where such facilities will be located, but network connectivity with adequate bandwidth and reliability will need to be provided. Enough local storage must be available to the HLT farm to allow data buffering for the expected periods of link down-time.

While the format for the raw data has yet to be determined, many of the basic requirements are clear, such as efficient sequential writing, compact representation of the data, portability, long-term accessibility, and the freedom to tune file sizes to optimize storage system performance.

1.3 Support Systems

1.3.1 Experiment Control System

Author to be assigned

The complete SuperB experiment (power supplies, front-end, DAQ, etc.) must be controlled by an Experiment Control System (ECS). As shown in Fig. 1.1, the ECS is responsible both for controlling the experiment and for monitoring its functioning.

Configuring the Front-ends: Many front-end parameters must be initialized before the system can work correctly. The number of parameters per channel can range from only a few to large per-channel lookup tables. The ECS may also need to read back parameters from registers in the front-end hardware to check the status or verify that the contents have not changed. For a fast detector configuration and recovery turnaround in factory mode, it is critical to not have bottlenecks either in the ECS itself, or in the ECS’ access to the front-end hardware. If technically feasible and affordable, front-end electronics on or near the detector should be shielded or engineered to avoid frequent parameter reloads due to radiation-induced single event upsets—reconfiguring through the ECS should only be considered as a last resort.
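The configure-and-verify cycle described above can be sketched as follows (all interfaces hypothetical; `bus` stands in for whatever field-bus access the ECS ultimately provides):

```python
def configure_frontend(bus, board, params, retries=3):
    """Sketch of the ECS configure-and-verify cycle: write each
    register, read it back, and retry on mismatch before giving up."""
    for addr, value in params.items():
        for _attempt in range(retries):
            bus.write(board, addr, value)
            if bus.read(board, addr) == value:   # read back to verify
                break
        else:
            raise RuntimeError(f"board {board}: register {addr:#x} failed verify")

class FakeBus:
    """Trivial in-memory stand-in used to exercise the sketch."""
    def __init__(self):
        self.regs = {}
    def write(self, board, addr, value):
        self.regs[(board, addr)] = value
    def read(self, board, addr):
        return self.regs.get((board, addr))

bus = FakeBus()
configure_frontend(bus, board=7, params={0x10: 0xCAFE, 0x14: 0x01})
```

Periodically re-running only the read-back half of this loop gives the SEU check mentioned above without rewriting every register.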

Calibration: Calibration runs require extended functionality of the ECS. In a typical calibration run, after loading calibration parameters, event data collected with these parameters must be sent through the DAQ system and analyzed. Then the ECS must load the parameters for the next calibration cycle into the front-ends and repeat.
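The load-measure-analyze-repeat cycle above amounts to a scan loop; a sketch with stubbed interfaces (all class and parameter names are hypothetical):

```python
class StubECS:
    def load_parameters(self, settings):
        self.current = settings

class StubDAQ:
    def __init__(self, ecs):
        self.ecs = ecs
    def take_calibration_data(self):
        # Stub: pretend each event's amplitude equals the injected setting.
        return [self.ecs.current["dac"]] * 100

def analyze(events, settings):
    # One calibration point: (setting, mean measured response).
    return (settings["dac"], sum(events) / len(events))

def calibration_run(ecs, daq, scan):
    """Sketch of the ECS calibration cycle: load parameters, take data
    through the normal DAQ path, analyze, then load the next point."""
    results = []
    for settings in scan:
        ecs.load_parameters(settings)          # configure the front-ends
        events = daq.take_calibration_data()   # pulser data via the DAQ
        results.append(analyze(events, settings))
    return results

ecs = StubECS()
curve = calibration_run(ecs, StubDAQ(ecs), [{"dac": d} for d in (10, 20, 30)])
# curve is one response point per scan setting, i.e. a calibration curve.
```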

Testing the FEE: The ECS may also be used to remotely test all FEE electronics modules using dedicated software. This obviates the need for independent self-test capability for all modules.

Monitoring the Experiment: The ECS continuously monitors the entire experiment to ensure that it functions properly. Some examples include (1) independent spying on event data to verify data quality, (2) monitoring the power supplies (voltage, current limits, etc.), and (3) monitoring the temperature of power supplies, crates, and modules. Support for monitoring the FEE modules themselves must be built into the FEE hardware so that the ECS can be informed about FEE failures. The ECS also acts as a first line of defense in protecting the experiment from a variety of hazards. In addition, an independent, hardware-based detector safety system (part of the Detector Control System, see 1.3.4) must protect the experiment against equipment damage in case the software-based ECS is not operating correctly.

The specific requirements that each of the sub-systems makes on ECS bandwidth and functionality must be determined (or at least estimated) as early as possible so that the ECS can be designed to incorporate them. Development of calibration, test, and monitoring routines must be considered an integral part of sub-system development, as it requires detailed knowledge about sub-system internals.

Possible ECS Implementation: The field bus used for the ECS has to be radiation tolerant on the detector side and provide very high reliability. Such a bus has been designed for the LHCb experiment: it is called SPECS (Serial Protocol for Experiment Control System) [7]. It is a bidirectional 10 Mbits/s bus that runs over standard Ethernet Cat5+ cable and provides all possible facilities for ECS (like JTAG (Joint Test Action Group) and I2C (Inter IC)) on a small mezzanine. It could be easily adapted to the SuperB requirements. Though SPECS was initially based on PCI boards, it is currently being translated to an Ethernet-based system, as part of an LHCb upgrade, also integrating all the functionalities for the out-of-detector elements.

For the electronics located far from the detector, Ethernet will be used for ECS communication.

1.3.2 Event Data Quality Monitoringand Display

SL

Event data quality monitoring is based on quantities calculated by the L3 (and possibly L4) trigger, as well as quantities calculated by a more detailed analysis on a subset of the data. A distributed histogramming system collects the monitoring output histograms from all sources and makes them available to automatic monitoring processes and operator GUIs.
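The core of such a distributed histogramming system is a bin-by-bin merge of per-node histograms; a minimal sketch (histograms reduced to plain bin-to-count maps, names illustrative):

```python
from collections import Counter

def aggregate(histograms):
    """Sum per-node monitoring histograms bin by bin into a single
    view for operator GUIs and automatic checks."""
    total = Counter()
    for h in histograms:
        total.update(h)   # Counter.update adds counts per key
    return total

# Two farm nodes reporting the same histogram, accumulated separately:
node_a = {"hits_0_10": 120, "hits_10_20": 80}
node_b = {"hits_0_10": 95,  "hits_10_20": 101}
combined = aggregate([node_a, node_b])
# combined["hits_0_10"] == 215: the merged view an operator would see.
```

Since addition is associative, partial merges can happen at any level (per node, per rack, globally), which is what makes the collection naturally distributed.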

1.3.3 Run Control System

GM

The experiment is controlled and monitored by the Run Control System (RCS), which provides a single point of entry to operate and monitor the entire experiment. It is a collection of software and hardware modules that handle the two main functions of this component: controlling, configuring, and monitoring the whole Online system, and providing its user interface. The RCS interacts both with the Experiment Control System (ECS) and with the Detector Control System (DCS). We expect the RCS to utilize modern web technologies.

1.3.4 Detector Control System

SL

The Detector Control System (DCS) is responsible for ensuring detector safety, controlling the detector and detector support system, and monitoring and recording detector and environmental conditions.

Efficient detector operations in factory mode require high levels of automation and automatic recovery from problems. The DCS plays a key role in maintaining high operational efficiency, and tight integration with the Run Control System is highly desirable.

Low-level components and interlocks responsible for detector safety (the Detector Safety System, DSS) will typically be implemented as simple circuits or with programmable logic controllers (PLCs).

The software component will be built on top of a toolkit that provides the interface to whatever industrial buses, sensors, and actuators may be used. It must provide a graphical user interface for the operator, have facilities to generate alerts automatically, and have an archiving system to record the relevant detector information. It must also provide software interfaces for programmatic control of the detector.

We expect to be able to use existing commercial products and controls frameworks developed by the CERN LHC experiments.

1.3.5 Other Components

SL

Electronic Logbook: A web-based logbook, integrated with all major Online components, allows operators to keep an ongoing log of the experiment’s status, activities, and changes.

Databases: Online databases such as configuration, conditions, and ambient databases are needed to track, respectively, the intended detector configuration, calibrations, and the actual state and time-series information from the DCS.

Configuration Management: The configuration management system defines all hardware and software configuration parameters, and records them in a configuration database.

Performance Monitoring: The performance monitoring system monitors all components of the Online.

Software Release Management: Strict software release management is required, as is a tracking system that records the software version (including any patches) that was running at a given time in any part of the ETD/Online system. Release management must cover FPGAs and other firmware as well as software.

Computing Infrastructure Reliability: The Online computing infrastructure (including the specialized and general-purpose networks, file, database and application servers, operator consoles, and other workstations) must be designed to provide high availability, while being self-contained (sufficiently isolated and provided with firewalls) to minimize external dependencies and downtime.

1.3.6 Software Infrastructure

GM + SL

The data acquisition and online system is basically a distributed system built with commodity hardware components. Substantial manpower will be needed to design the software components—taking a homogeneous approach in both the design and implementation phases. An Online software infrastructure framework will help organize this major undertaking. It should provide basic memory management, communication services, and the executive processes to execute the Online applications. Specific Online applications will make use of these general services to simplify the performance of their functions. Middleware designed specifically for data acquisition exists, and may provide a simple, consistent, and integrated distributed programming environment.

1.4 R&D

For the overall ETD/Online system, substantial R&D is needed to better understand the global system requirements, develop solutions, and probe the possible upgrade paths to handle luminosities of up to 4 × 10^36 cm−2 s−1 during the lifetime of the experiment.

Data Links: The data links for SuperB require R&D in the following areas: (1) studying jitter-related issues and filtering by means of jitter cleaners; (2) coding patterns for effective error detection and correction; (3) radiation qualification of link components; and (4) performance studies of the serializers/de-serializers embedded in the new generation of FPGAs (e.g., Xilinx Virtex-6).

Readout Module: Readout Module R&D includes investigation of 10 Gbits/s Ethernet technology, and detailed studies of the I/O sub-system on the ROM boards. The possibility of implementing the ROM functions in COTS computers by developing suitable PCIe boards (such as optical link boards for FCTS and FEE links, or personality cards to implement sub-detector-specific functions) should also be investigated.

Trigger: For the L1 trigger, the achievable minimum latency and physics performance will need to be studied. The studies will need to address many factors, including (1) improved time resolution and trigger-level granularity of the EMC, and a faster DCH than in BABAR; (2) potential inclusion of SVT information at L1; (3) the possibility of an L1 Bhabha veto; (4) possibilities for handling pile-up and overlapping (spatially and temporally) events at L1; and (5) opportunities created by modern FPGAs to improve the trigger algorithms.

For the HLT, studies of achievable physics performance and rejection rates need to be conducted, including the risks and benefits of a possible L4 option.

ETD Performance and Dead Time: The design parameters for the ETD system are driven by trigger rates and dead time constraints, and will need to be studied in detail to determine the requirements for (1) trigger distribution through the FCTS, (2) the FEE/CFEE buffer sizes, and (3) handling pile-up and overlapping events. Input from the L1 trigger R&D and from background simulation studies will be required.

Event Builder and HLT Farm: The main R&D topics for the Event Builder and HLT Farm are (1) the applicability of existing tools and frameworks for constructing the event builder; (2) the HLT farm framework; and (3) event building protocols and how they map onto network hardware.

Software Infrastructure: To provide the most efficient use of resources, it is important to investigate how much of the software infrastructure, frameworks, and code implementation can be shared with Offline computing. This requires us to determine the level of reliability engineering required in such a shared approach. We also must develop frameworks to take advantage of multi-core CPUs.

1.5 Conclusions

The architecture of the ETD system for SuperB is optimized for simplicity and reliability at the lowest possible cost. It builds on substantial in-depth experience with the BABAR experiment, as well as more recent developments derived from building and commissioning the LHC experiments. The proposed system is simple and safe. Trigger and data readout are fully synchronous—allowing them to be easily understood and commissioned. Safety margins are specifically included in all designs to deal with uncertainties in backgrounds and radiation levels. Event readout and event building are centrally supervised by an FCTS system which continuously collects all the information necessary to optimize the trigger rate. The hardware trigger design philosophy is similar to that of BABAR, but with better efficiency and smaller latency. The event size remains modest.

The Online design philosophy is similar—leveraging existing experience, technology, and toolkits developed by BABAR, the LHC experiments, and commercial off-the-shelf computing and networking components—leading to a simple and operationally efficient system to serve the needs of SuperB factory-mode data taking.

1.6 Organizational Structure of Electronics, Trigger, DAQ and Online

DB + UM + SL


Bibliography

[1] M. Bona et al., SuperB: A High-Luminosity Heavy Flavour Factory. Conceptual Design Report, arXiv:0709.0451v2 [hep-ex], INFN/AE-07/2, SLAC-R-856, LAL 07-15, also available at http://www.pi.infn.it/SuperB/CDR.

[2] B. Aubert et al. (BABAR Collaboration), The BABAR Detector, Nucl. Instrum. Methods Phys. Res., Sect. A 479, 1 (2002) [arXiv:hep-ex/0105044].

[3] The ATLAS Collaboration, ATLAS Detector and Physics Performance Technical Design Report, http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/TDR/access.html.

[4] The CMS Collaboration, CMS Detector Technical Design Report, http://cmsdoc.cern.ch/cms/cpt/tdr/.

[5] The LHCb Collaboration, LHCb Technical Design Reports, http://cmsdoc.cern.ch/cms/cpt/tdr/.

[6] The Belle Collaboration, The Belle Detector, Nucl. Instrum. Methods Phys. Res., Sect. A 479, 117 (2002).

[7] The SPECS Web Page, https://lhcb.lal.in2p3.fr/Specs/.

[8] The BABAR Trigger Web Pages, http://www.slac.stanford.edu/BFROOT/www/Detector/Trigger/index.html.


2 Electronics

Breton/Marconi/Luitz Pages ? 12-15 pages

2.1 Electronics overview

The general design approach is to standardize components across the system as much as possible, to use mezzanine boards to isolate sub-system-specific functions differing from the standard design, and to use commercially available common off-the-shelf (COTS) components where viable.

2.2 Common components

2.2.1 Clock, Control and Data Links

- To be updated to CERN workshop 11/2011 outcome

Designing and validating the different serial links required for SuperB (for data transmission, timing, and control command distribution and read-out) will require substantial effort during the TDR phase. Because of fixed latency and low jitter constraints, simple solutions relying on off-the-shelf electronics components must be thoroughly tested to validate them for use in clock and control links. Moreover, because radiation levels on the detector side are expected to be high, R&D will be necessary to qualify the selected chip-sets for radiation robustness. Since requirements for the various link types differ, technical solutions for different link types may also differ.

The links are used to distribute the frequency-divided machine clock (running at 59 MHz) and fast control signals such as trigger pulses, bunch crossing, and event IDs or other qualifiers to all components of the ETD system. Copper wires are used for short haul data transmission (< 1 m), while optical fibres are used for medium and long haul. To preserve timing information, suitable commercial components will be chosen so that the latency of transmitted data and the phase of the clock recovered from the serial stream do not change with power cycles, resets, and loss-of-locks. Encoding and/or scrambling techniques will be used to minimize the jitter on the recovered clock. The same link architecture is also suitable for transmitting regular data instead of fast controls, or a combination of both.

Link types can be divided into two classes:

A-Type: The A-type links are homogeneous links with both ends off-detector. Given the absence of radiation, they might be implemented with Serializer-De-serializers (SerDes) embedded in FPGAs (Field Programmable Gate Arrays). Logic in the FPGA fabric will be used to implement fixed latency links and to encode/decode fast control signals. A-Type links are used to connect the FCTS system to the DAQ crate control and to the Global Level 1 Trigger. A-Type links run at approximately 2.2 Gbits/s.

B-Type: The B-type hybrid links have one end on-detector and the other end off-detector. The on-detector side might be implemented with off-the-shelf radiation-tolerant components—the off-detector end might still be implemented with FPGA-embedded SerDes. B-Type links connect the FCTS crate to the FEE and the FEE to ROMs. The B-Type link speed might be limited by the off-the-shelf SerDes performance, but is expected to be at least 1 Gbit/s for the FCTS to FEE link and about 2 Gbits/s for the FEE to ROM link.

All links can be implemented as plug-in boards or mezzanines, (1) decoupling the development of the user logic from the high-speed link design, (2) simplifying the user board layout, and (3) allowing an easy link upgrade without affecting the host boards. Mezzanine specifications and form-factors will likely be different for A-Type and B-Type links, but they will be designed to share a common interface to the host board to the maximum possible extent.

2.2.2 FCTS Links

2.2.3 Data Links

2.2.4 Common Front-End Electronics

2.2.5 Power supplies (?)

2.2.6 Grounding and Shielding (?)

2.2.7 Cable Plant (?)

2.3 Subsystem-specificElectronics

7-10 pages

2.3.1 SVT Electronics

The SVT electronics shown in Fig. 2.1 is designed to take advantage, where possible, of the data-push characteristics of the front-end chips. The time resolution of the detector is dominated by the minimal time resolution of the FSSR2 chip, which is 132 ns. Events are built from packets of three minimal time slices (a 396 ns event time window). The readout chain in layer 0 starts from a half-module holding two sets of pixel chips (2 readout sections, ROS). Data are transferred on copper wires to boards located a few meters away from the interaction region, where local buffers will store the read hits. As discussed in the SVT chapter, for layer 0 the data rate is dominated by the background. The bandwidth needed is about 16 Gbits/s/ROS. This large bandwidth is the main reason to store hits close to the detector and transfer only hits from triggered events.

Figure 2.1: SVT Electronics

For events accepted by the L1 trigger, the bandwidth requirement is only 0.85 Gbits/s, and data from each ROS can be transferred on optical links (1 Gbit/s) to the front-end boards (FEB) and then to ROMs through the standard 2 Gbits/s optical readout links. Layers 1-5 are read out continuously, with the hits being sent to the front-end boards on 1 Gbit/s optical links. On the FEBs, hits are sorted in time and formatted to reduce event size (timestamp stripping). Hits of triggered events are then selected and forwarded to the ROMs on 2 Gbits/s standard links.
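A sketch of the time-sorting and timestamp-stripping step described above, under the assumption (illustrative, not from the text) that the stripped timestamp is simply the hit's slice offset within the 3-slice event window:

```python
SLICE_NS = 132          # minimal FSSR2 time slice, from the text
WINDOW_SLICES = 3       # event window = 3 slices = 396 ns

def select_triggered_hits(hits, trigger_slice):
    """Keep only hits whose time slice falls inside the 3-slice event
    window starting at the trigger, sorted in time, with the full
    timestamp stripped down to the offset within the window."""
    window = range(trigger_slice, trigger_slice + WINDOW_SLICES)
    return [(chan, ts - trigger_slice)          # timestamp stripping
            for chan, ts in sorted(hits, key=lambda h: h[1])
            if ts in window]

hits = [(3, 100), (7, 101), (1, 99), (5, 103)]  # (channel, time-slice)
print(select_triggered_hits(hits, trigger_slice=100))
# → [(3, 0), (7, 1)]: hits in slices 100-102 survive; the 2-bit offset
# replaces the full timestamp, which is the event-size reduction.
```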

Occupancies and rates on layers 3-5 should be low enough to make them suitable for fast track searching, so that SVT information could be used in the L1 trigger. The SVT could provide the number of tracks found, the number of tracks not originating from the interaction region, and the presence of back-to-back events in the φ coordinate. A possible option for SVT participation in the L1 trigger would require two L1 trigger processing boards, each one linked to the FEBs of layers 3-5 with synchronous optical links.

In total, the SVT electronics requires 58 FEBs and 58 ROMs, 58 optical links at 2 Gbits/s, 308 links at 1 Gbit/s (radiation hard) and, optionally, two L1 trigger processing boards and about 40 links at 1.25 Gbits/s for L1 trigger processing.

2.3.2 DCH Electronics

The design is still at a very early stage, so we only provide a baseline description of the drift chamber front-end electronics. It does not include additional front-end features currently under study (such as a cluster counting capability).

SuperB Detector Progress Report

Figure 2.2: DCH Electronics. (a) Data Readout Path; (b) Trigger Readout Path. (ADB = Analog to Digital Board, ROIB = ReadOut Interface Board, DCS = Detector Control System, DIOM = Data I/O Module, DCSIO = DCS I/O Module, TIOM = Trigger I/O Module)

The DCH provides charged particle tracking, dE/dx, and trigger information. The front-end electronics measures the drift time of the first electron and the total charge collected on the sense wires, and generates the information to be sent to the L1 trigger.

The DCH front-end chain can be divided into three different blocks:

Very Front End Boards (VFEB): The VFEBs contain HV distribution and blocking capacitors, protection networks, and preamplifiers. They could also host discriminators. The VFEBs are located on the (backward) chamber end-plate to maximize the S/N ratio.

Data Conversion and Trigger Pattern Extraction: Data conversion incorporates both TDCs (1 ns resolution, 10 bits dynamic range) and continuous-sampling ADCs (6 bits dynamic range). Trigger data contain the status of the discriminated channels, sampled at about 7 MHz (compared to 3.7 MHz in BABAR). This section of the chain can be located either on the end-plate (where power dissipation, radiation environment, and material budget are issues) or in external crates (where either micro-coax or twisted-pair cables must be used to bring out the preamplifier signals).

Readout Modules: The ROMs collect the data from the DCH FEE and send zero-suppressed data to DAQ and trigger.

The number of links required for data transfer to the DAQ system can be estimated based on the following assumptions: 150 kHz L1 trigger rate, 10k channels, 15% chamber occupancy in a 1 µs time window, and 32 bytes per channel. At a data transfer speed of 2 Gbits/s per link, about 40 links are needed. Fifty-six synchronous 1.25 Gbits/s links are required to transmit the trigger data sampled at 7 MHz. The topology of the electronics suggests that the number of ECS and FCTS links should be the same as the number of readout links.
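The link estimate above can be checked with a few lines of arithmetic. At the raw line rate the numbers give about 29 links; reaching the quoted "about 40" requires a usable payload fraction of roughly 70% per link (our assumption, accounting for encoding and framing overhead, not stated in the text).

```python
import math

# Back-of-envelope check of the DCH readout link count, using the
# assumptions stated in the text.
l1_rate = 150e3        # Hz, L1 trigger rate
channels = 10_000
occupancy = 0.15       # in a 1 us time window
bytes_per_ch = 32
link_speed = 2e9       # bits/s per optical link

event_size_bits = channels * occupancy * bytes_per_ch * 8  # 384,000 bits
throughput = l1_rate * event_size_bits                     # 57.6 Gbit/s
print(math.ceil(throughput / link_speed))   # 29 links at raw line rate

# Assuming ~70% usable payload per link (8b/10b encoding plus protocol
# framing -- an assumption, not from the text) gives the quoted scale:
print(math.ceil(throughput / (0.7 * link_speed)))   # 42
```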

2.3.3 PID Electronics

Forward PID Option: There are currently two detector options being considered for the forward PID.

Figure 2.3: PID Electronics (one MAPMT footprint corresponds to 64 channels; one sector comprises 48 × 64 = 3072 channels, so the full 12-sector detector has about 36k channels)

SuperB Detector Progress Report

Page 27: SuperB Detector Technical Design Report...Technical Design Report Abstract This report describes the technical design detector for SuperB. E. Grauges, Universitat De Barcelona, Fac

20 2 Electronics

The first option is to measure the time of flight (TOF) of particles from the interaction point to the PID detector. Two implementations are under consideration: a pixel detector, which would lead to a large number of readout channels (≈7200), or a DIRC-like detector with fused silica bars (plates), which would require a much smaller channel count (≈192). Both implementations make use of fast Micro Channel Plate PMTs (MCPPMT) and have to provide a measurement of the hit time with a precision of ~10 ps. The readout would probably use fast analog memories, which, as of today, are the most plausible solution for a picosecond time measurement in this environment. To achieve this time resolution, the clock distribution will have to be very carefully designed and will likely require direct use of the machine clock at the beam crossing frequency.

A second option is a Focusing Aerogel Cherenkov detector. Though the timing requirements are less severe, its ~115,000 channels would also have to come from MCPPMTs, since standard multi-anode PMTs cannot be used in the high magnetic field where it resides. Since the time precision needed is similar to that of the barrel, the same type of electronics could be used. At least 50 links would be necessary for the data readout, while the ECS and FCTS would require at most about 50 additional links.

Barrel PID: The barrel PID electronics must provide the measurement of the arrival time of the photons produced in the fused silica bars with a precision of about 100 ps rms. The SuperB detector baseline is a focusing DIRC, using multi-anode photomultipliers. This optical design (smaller camera volume and materials) reduces the background sensitivity by at least one order of magnitude compared to BABAR, thus reducing the rate requirements for the front-end electronics.

The baseline design is implemented with 16-channel TDC ASICs offering the required precision of 100 ps rms. A 12-bit ADC can provide an amplitude measurement, at least for calibration, monitoring, and survey, which is transmitted with the hit time. A 16-channel front-end analog ASIC must be designed to sample and discriminate the analog signal. Both ASICs would be connected to a radiation-tolerant FPGA which would handle the hit readout sequence and push data into the L1 trigger latency buffers.

This front-end electronics must all sit on the MAPMT base, where space is very limited and cooling is difficult. However, crates concentrating front-end data and driving the fast optical links can be located outside the detector in a more convenient place where space is available. They would be connected to the front-end through standard commercial cables (such as Cat 5 Ethernet cables). The readout mezzanines would be implemented there, as well as the FCTS and ECS mezzanines, from where signals would be forwarded to the front-end electronics through the same cables.

The system would be naturally divided into 12 sectors. Using the baseline camera with 36,864 channels, a 150 kHz trigger rate, a 100 kHz/channel hit rate, 32 data bits/hit, and a 2 Gbits/s link rate, the readout link occupancy should be only ~15%, thus offering a comfortable safety margin. A camera using another model of PMTs with one-half the number of channels is also being studied.
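The ~15% occupancy figure can be reproduced from the quoted numbers only once a per-trigger readout time window is assumed; the text does not state one, so the 200 ns used below is purely our assumption, chosen to show the scale of the calculation.

```python
# Rough check of the quoted ~15% readout-link occupancy for one barrel
# PID sector (one 2 Gbit/s link per sector assumed). The readout time
# window is NOT given in the text; 200 ns is an assumption.
channels_per_sector = 36_864 // 12      # 3072 channels
hit_rate = 100e3                        # Hz per channel
trigger_rate = 150e3                    # Hz
bits_per_hit = 32
link_speed = 2e9                        # bits/s
window = 200e-9                         # s (assumed readout window)

hits_per_trigger = channels_per_sector * hit_rate * window   # ~61 hits
occupancy = trigger_rate * hits_per_trigger * bits_per_hit / link_speed
print(f"{occupancy:.0%}")   # 15%
```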

An alternative readout option would be to use analog memories instead of TDCs to perform both time and amplitude measurements. This option retains more information on the hit signals but would likely be more expensive. Its advantages and disadvantages are still under study.

2.3.4 EMC Electronics

Two options have been considered for the EMC system design: a BABAR-like push architecture, where all calorimeter data are sent over synchronous optical 1 Gbit/s links to L1 latency buffers residing in the trigger system, or a "triggered" pull architecture, where the trigger system receives only sums of crystals (via synchronous 1 Gbit/s links), and only events accepted by the trigger are sent to the ROMs through standard 2 Gbits/s optical links.


Figure 2.4: EMC Electronics

The triggered option, shown in Fig. 2.4, requires a much smaller number of links and has been chosen as the baseline implementation. The reasons for this choice and the implications are discussed in more detail below.

To support the activated liquid-source calibration, where no central trigger can be provided, both the barrel and the end-cap readout systems need to support a free-running "self-triggered" mode where only samples with an actual pulse are sent to the ROM. Pulse detection may require digital signal processing to suppress noisy channels.
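A minimal sketch of such a self-triggered pulse finder is shown below. The pedestal estimate and fixed threshold are assumptions for illustration only; the actual firmware may use more elaborate digital filtering to suppress noisy channels.

```python
# Minimal sketch of a "self-triggered" pulse finder for source
# calibration. The crude pedestal estimate (mean of the first samples)
# and the simple threshold-plus-local-maximum logic are assumptions,
# not the actual firmware algorithm.
def find_pulses(samples, threshold):
    """Return (index, amplitude) of local maxima above pedestal + threshold."""
    pedestal = sum(samples[:8]) / 8.0   # pedestal from the first 8 samples
    pulses = []
    for i in range(1, len(samples) - 1):
        s = samples[i]
        if (s - pedestal > threshold
                and s >= samples[i - 1] and s > samples[i + 1]):
            pulses.append((i, s - pedestal))
    return pulses

waveform = [10, 11, 10, 9, 10, 10, 11, 10, 12, 40, 95, 60, 25, 11, 10, 9]
print(find_pulses(waveform, threshold=20))  # [(10, 84.875)]
```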

Forward Calorimeter: The 4500 crystals are read out with PIN or APD photodiodes. A charge preamplifier translates the charge into a voltage, and the shaper uses a 100 ns shaping time to provide a pulse with a FWHM of 240 ns.

The shaped signal is amplified with two gains (×1 and ×64). At the end of the analog chain, an auto-range circuit decides which gain will be digitized by a 12-bit pipeline ADC running at 14 MHz. The 12 bits of the ADC plus one bit for the range thus cover the full scale from 10 MeV to 10 GeV with a resolution better than 1%. A gain is set during calibration using a programmable gain amplifier, in order to optimize the scale used during calibration with a neutron-activated liquid-source system providing gamma photons of around 6 MeV.
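One way to sanity-check the dynamic-range claim, under the simple two-gain model described above:

```python
# Check of the forward EMC dynamic-range claim: a 12-bit ADC plus a
# two-gain (x1 / x64) auto-range stage covering 10 MeV to 10 GeV.
full_scale_mev = 10_000.0     # 10 GeV on the x1 range
adc_counts = 2 ** 12          # 12-bit pipeline ADC

lsb_x1 = full_scale_mev / adc_counts   # ~2.44 MeV per count on x1
lsb_x64 = lsb_x1 / 64                  # ~38 keV per count on x64

# Quantization step relative to the 10 MeV lower end of the scale:
print(f"{lsb_x64 / 10.0:.2%}")   # 0.38%, consistent with "better than 1%"
```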

Following the BABAR detector design, a push architecture with a full-granularity readout scheme was first explored. In this approach, the information from 4 channels is grouped, using copper serial links, reaching an aggregate rate of 0.832 Gbits/s per link to use up most of the synchronous optical link's 1 Gbit/s bandwidth. A total of 1125 links are required. The main advantage of this architecture is the flexibility of the trigger algorithm, which can be implemented off-detector using state-of-the-art FPGAs without constraining their radiation resistance. The main drawback is the large cost due to the huge number of links.

The number of links can be reduced by summing channels together on the detector side, and only sending the sums to the trigger. The natural granularity of the forward detector is a module, which is composed of 25 crystals. In this case, the data coming from 25 crystals are summed together, forming a word of 16 bits. The sums coming from 4 modules are then aggregated to produce a payload of 0.896 Gbits/s, so that the number of synchronous links toward the trigger is only 45. The same number of links would be sufficient to send the full detector data within a 500 ns trigger window. This architecture limits the trigger granularity and implies more complex electronics on the detector side, but reduces the number of links by a large factor (from 1125 down to 90). However, it cannot be excluded that a faster chipset will appear on the market, which could significantly reduce this benefit.
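The link counts and the per-link payload for the two forward architectures follow directly from the numbers in the text:

```python
# Link-count arithmetic for the two forward EMC readout architectures,
# using the figures given in the text (4500 crystals, 14 MHz sampling,
# 16-bit module sums).
crystals = 4500
sample_rate = 14e6            # Hz
bits_per_sum = 16

# Push architecture: 4 channels per synchronous 1 Gbit/s link.
push_links = crystals // 4
print(push_links)             # 1125

# Triggered architecture: 25-crystal module sums, 4 modules per link.
modules = crystals // 25      # 180 modules
sum_links = modules // 4
print(sum_links)              # 45

payload = 4 * bits_per_sum * sample_rate
print(payload / 1e9)          # 0.896 Gbit/s per link, as quoted
```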

Barrel Calorimeter: The EMC barrel reuses the 5760 crystals and PIN diodes from BABAR, with, however, the shaping time reduced from 1 µs to 500 ns and the sampling rate doubled from 3.5 MHz to 7 MHz. The same considerations about serial links discussed above for the forward EMC apply to the barrel EMC. If full-granularity data were pushed synchronously to the trigger, about 520 optical links would be necessary.
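The ~520-link figure can be reproduced assuming 13 bits per sample (12-bit ADC plus a range bit, by analogy with the forward EMC; the text does not state the barrel word width, so this is an assumption), and the same arithmetic yields the 80-link figure for the summed readout described below:

```python
# Barrel EMC link arithmetic. The 13 bits/sample is our assumption
# (12-bit ADC plus one range bit, as in the forward EMC).
crystals = 5760
sample_rate = 7e6             # Hz, after doubling from 3.5 MHz
bits_per_sample = 13          # assumed

total_rate = crystals * bits_per_sample * sample_rate
print(round(total_rate / 1e9))   # 524 links at 1 Gbit/s, i.e. ~520

# Summed readout: 4x3 cell sums, 6 sums per synchronous link.
sums = crystals // (4 * 3)    # 480 sums
print(sums // 6)              # 80 links
```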

The number of synchronous trigger links can be drastically reduced by performing sums of 4 × 3 cells on the detector side, so that 6 such energy sums can be continuously transmitted through a single optical serial link. This permits a reduction in the number of trigger links so as to match the topology of the calorimeter electronics boxes, which are split into 40 φ sectors on both sides of the detector. Therefore, the total number of links would be 80, both for the trigger and for the data readout toward the ROMs, including a substantial safety margin (> 1.5).

Figure 2.5: IFR Electronics

2.3.5 IFR Electronics

The IFR is equipped with plastic scintillators coupled to wavelength-shifting fibres. Although different options have been explored, it is currently assumed that single-photon counting devices (SiPMs) will be located "inside" the iron, as close as possible to the scintillating assemblies. Each SiPM will be biased and read out through a single coaxial cable.

A schematic diagram of the IFR readout electronics is shown in Fig. 2.5. The first stage of the readout chain is based on the IFR ABC boards, which provide (for 32 channels each):

• Amplification, presently based upon commercial off-the-shelf (COTS) components.

• Individually programmable bias voltages for the SiPMs.

• Comparators with individually programmable thresholds, presently based on COTS.

To minimize the length of the coaxial cables from the SiPMs to the IFR ABC boards, these boards need to be placed as close to the iron yoke as possible. The digital outputs of the IFR ABC boards will then be processed in different ways for the IFR barrel and end-caps.

IFR Barrel: The barrel scintillation elements are mounted parallel to the beam axis. The time of arrival of pulses from both ends of the scintillating elements must be recorded so that the z-position of particle hits can be determined during reconstruction. The signals are read out with IFR TDC 64-channel timing digitizer boards.

The total TDC channel count estimate for the barrel is 14,400, which comes from the 3600 scintillating assemblies in the barrel being read out at both ends, with 2 comparators (with different thresholds) per end to improve timing (and position) resolution.

IFR End-caps: The signals from the scintillators in the IFR end-caps (which are positioned vertically and horizontally) are read out with IFR BiRO 128-channel "Binary Readout" boards, which sample the status of the input lines and update a circular memory buffer from which data are extracted upon trigger request.

The total channel count estimate for the end-caps is 9,600 BiRO channels, coming from the two end-caps, each with 2,400 scintillating assemblies in X and 2,400 scintillating assemblies in Y, read out into a single comparator per channel.

The IFR TDC and IFR BiRO digitizers should be located as close as possible to the IFR ABC boards to minimize the cost of the interconnecting cables, preferably in an area of low radiation flux. In this case, commercial TDC ASICs could be used in the design. Alternatively, radiation-tolerant TDCs could be used closer to the detector. The FPGAs used in the digitizers should be protected against radiation effects by architecture and by firmware design.

The output streams from the IFR TDC and IFR BiRO boards go through custom "data concentrators" that merge the data coming from a number of digitizers and send the resulting output data to the ROMs via the standard optical readout links.

In total, 225 IFR TDC boards (12 crates) and 75 IFR BiRO boards (4 crates) are needed. The total number of links to the ROMs is presently estimated to be 24 for the barrel (2 links per digitizer crate) and 16 for the end-caps (4 links per digitizer crate).
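The IFR channel, board, and link totals all follow from the figures above:

```python
# IFR channel, board, and link bookkeeping from the numbers in the text.
# Barrel: 3600 scintillating assemblies, read out at both ends,
# 2 comparators per end.
barrel_tdc_channels = 3600 * 2 * 2
print(barrel_tdc_channels)            # 14400
print(barrel_tdc_channels // 64)      # 225 IFR TDC boards (64 ch each)

# End-caps: 2 end-caps x (2400 assemblies in X + 2400 in Y),
# 1 comparator per channel.
endcap_biro_channels = 2 * (2400 + 2400)
print(endcap_biro_channels)           # 9600
print(endcap_biro_channels // 128)    # 75 IFR BiRO boards (128 ch each)

# Readout links to the ROMs:
print(12 * 2)   # 24 barrel links (2 per digitizer crate, 12 crates)
print(4 * 4)    # 16 end-cap links (4 per digitizer crate, 4 crates)
```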

To optimize the electronics topology, the number of ECS and FCTS links should match the number of readout links.

2.3.6 Level-1 Trigger Electronics
