Summary Track 1 Online Computing Pierre VANDE VYVRE – CERN/PH

Page 1

Summary Track 1 Online Computing

Pierre VANDE VYVRE – CERN/PH

Page 2

01/10/2004 2

Online Computing Track (oral):
61 - An Embedded Linux System Based on PowerPC (Miss. YE, Mei)
62 - The ALICE Experiment Control System (Mr. CARENA, Francesco)
74 - LHCb Configuration Database project (Miss. ABADIE, Lana)
95 - The introduction to BES computing environment (Dr. CHEN, Gang)
119 - The DAQ system for the Fluorescence Detectors of the Pierre Auger Observatory (Dr. MATHES, Hermann-Josef)
157 - Jefferson Lab Data Acquisition Run Control System (GYURJYAN, Vardan)
209 - Experience with Real Time Event Reconstruction Farm for Belle Experiment (Prof. ITOH, Ryosuke)
217 - The performance of the ATLAS DAQ DataFlow system (Dr. UNEL, Gokhan)
223 - A Hardware Based Cluster Control And Management System (Mr. PANSE, Ralf)
252 - New experiences with the Alice High Level Trigger Data Transport Framework (Dr. STEINBECK, Timm M.)
254 - A Control Software for the Alice High Level Trigger (Dr. STEINBECK, Timm M.)
266 - Testbed Management for the ATLAS TDAQ (Mr. ZUREK, Marian)
281 - Migrating PHENIX databases from object to relational model (SOURIKOVA, Irina)
302 - Control in the ATLAS DAQ System (LIKO, Dietrich)
329 - The PHENIX Event Builder (Dr. WINTER, David)
331 - Integration of ATLAS Software in the Combined Beam Test (Dr. GADOMSKI, Szymon)
411 - The High Level Filter of the H1 Experiment at HERA (Dr. CAMPBELL, Alan)
422 - The Architecture of the ZEUS second level Global Tracking Trigger (Dr. SUTTON, Mark)
432 - Fault Tolerance and Fault Adaption for High Performance Large Scale Embedded Systems (Dr. SHELDON, Paul)
434 - The ALICE High Level Trigger (Mr. RICHTER, Matthias)
437 - Simulations and Prototyping of the LHCb Level1 and High Level Triggers (Dr. SHEARS, Tara)
449 - DZERO Data Acquisition Monitoring and History Gathering (Prof. WATTS, Gordon)
477 - The DZERO Run II Level 3 Trigger and Data Acquisition System (CHAPIN, D)
482 - A Level-2 trigger algorithm for the identification of muons in the Atlas Muon Spectrometer (Dr. DI MATTIA, Alessandro)

Page 3

01/10/2004 3

Preamble

By necessity, this talk will focus on some areas,

ignore many and do justice to none.

(Amber Boehnlein in plenary talk on Monday)

Page 4

01/10/2004 4

Trigger and Data-flow

Upgrades
- Accelerator and/or detector upgrade
- Improved trigger or larger statistics
- Technology obsolescence or not delivering

Running experiments
- Pierre Auger Observatory
- D0

Page 5

01/10/2004 5

BEPC

BEPC: Beijing Electron Positron Collider

Started in 1989, 2~5 GeV/c
Being upgraded to dual-ring, (3~10)×10^32 cm^-2 s^-1

Restart in spring 2007

Page 6

CHEP’04, 27th. Sep. – 1st. Oct., 2004

6

Overview of the Development Environment

Diagram (development environment): TimeSys analysis tools on Windows and the TimeSys SDK & IDE environment on an x86 host (RH 7.3 ~ 9.0) used for debugging; the online software runs on Linux on the readout PC (RH 7.3 ~ 9.0); the TimeSys Linux target is a PowerPC MVME5500 connected to the FEEs; links via Gigabit Ethernet and 100 Mbit Ethernet.

Page 7

CHEP’04, 27th. Sep. – 1st. Oct., 2004

7

System Latency

Item | TimeSys (MVME5500 -> 2431, RT Linux) | VxWorks (MVME5100 -> 2431, RT kernel)
One VME read | 1942 ns | 1330 ns
One VME write | 407 ns | 440 ns
DMA (A32/D32/BLT), size = 4096 bytes | 18.6 MB/s | 18.9 MB/s
DMA (A32/D64/MBLT), size = 4096 bytes | 35.9 MB/s | 37.9 MB/s
DMA overhead (K) | 7.1/17.1 µs | 8.6 µs
Interrupt latency (H.S.) | 9.9/14.4 µs | 10.2 µs
Network speed (1024) | 94.26 Mbps | 90.4 Mbps
Network CPU rate | 15% | 52%
Network idle | 85% | 48%
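As a rough cross-check of the DMA numbers (not on the slide): at 18.6 MB/s a 4096-byte block takes about 220 µs to move, so the quoted 7-17 µs of DMA setup overhead costs well under 10% per block of this size; the overhead matters mainly for smaller transfers.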

Page 8

01/10/2004 8

BESIII Storage Requirement

Data type | Data volume (TB) | Storage media
Raw | 240 | Tape
Rec. | 1440 | Tape
DST | 120 | Disk
M.C. Rec. | 1440 | Tape
M.C. DST | 120 | Disk

>3000 TB of data will be accumulated in 5 years

Page 9

01/10/2004 9

H1 at HERA

H1 HLT
- HERA luminosity upgrade
- Transition from VME SBCs to commodity hw
- CORBA as transport layer for control and data

Page 10

01/10/2004 10

Page 11

01/10/2004 11

ZEUS at HERA

ZEUS TRG L2
- HERA luminosity upgrade
- ZEUS detector upgrade: combined trigger of CTD (Central Tracking) with MVD (Micro Vertex) and STT (Straw Tube Tracker)
- Transition from Transputers to commodity hw

Page 12

01/10/2004 12

Page 13

01/10/2004 13

Data Collection: The Challenge

Blue and Yellow beams cross at ~10 MHz; collisions occur at ~10 kHz, producing hadrons, photons and leptons.

• High rates
• Large event sizes (Run-4: >200 kB/event)
• Interest in rare physics processes
=> Big headache
• How do we address these challenges?
– Level-1 triggering
– Buffering & pipelining: “deadtime-less” DAQ
– High bandwidth (Run-4: up to 400 MB/s archiving)
– Fast processing (Level-2 triggering, for example)
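A rough scale check on the slide's numbers: at ~10 kHz of collisions and >200 kB per event, keeping everything would mean on the order of 2 GB/s, against up to 400 MB/s of Run-4 archiving bandwidth, so triggering and selection have to cut the recorded volume by roughly a factor of five.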

Page 14

01/10/2004 14

Event Builder Overview

Diagram: the data stream from each granule enters an SEB; the SEBs feed the ATPs through a Gbit switch, and assembled events go to short-term storage; the EBC coordinates the assignment and control is provided by Run Control.

• New design: three functionally distinct programs derived from the same basic object
– SubEvent Buffer (SEB): collects data for a single subsystem – a “subevent”
– Event Builder Controller (EBC): receives event notification, assigns events, flushes the system
– Assembly Trigger Processor (ATP): assembles events by requesting data from each SEB, writes assembled events to short-term storage, can also provide a Level-2 trigger environment
• New platform: Linux instead of Windows
– Synchronous I/O superior to even Win32’s overlapped I/O
– OS overheads much lower
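To make the division of labour concrete, here is a small self-contained sketch (not the PHENIX implementation; the classes, the in-memory "network" and the round-robin assignment are all illustrative):

```cpp
// Conceptual sketch only (not the PHENIX code): the three roles of the new
// event builder, modelled with in-memory data instead of the real network.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

using SubEvent = std::string;

// SEB: buffers the subevents of one granule, keyed by event number.
struct SEB {
   std::map<int, SubEvent> subevents;
};

// EBC: receives event notifications and assigns each event to an ATP.
struct EBC {
   int next = 0;
   int assign(int nAtps) { return next++ % nAtps; }   // trivial round-robin
};

// ATP: pulls the subevent for a given event from every SEB and assembles the
// full event, which would then be written to short-term storage.
struct ATP {
   std::vector<SubEvent> build(int event, std::vector<SEB>& sebs) {
      std::vector<SubEvent> full;
      for (auto& seb : sebs) full.push_back(seb.subevents[event]);
      return full;
   }
};

int main() {
   std::vector<SEB> sebs(3);                            // three granules
   for (int ev = 0; ev < 4; ++ev)
      for (std::size_t s = 0; s < sebs.size(); ++s)
         sebs[s].subevents[ev] = "ev" + std::to_string(ev) + "-granule" + std::to_string(s);

   EBC ebc;
   std::vector<ATP> atps(2);
   for (int ev = 0; ev < 4; ++ev) {
      int a = ebc.assign(static_cast<int>(atps.size()));
      std::vector<SubEvent> full = atps[a].build(ev, sebs);
      std::printf("event %d assembled by ATP %d from %zu subevents\n", ev, a, full.size());
   }
   return 0;
}
```

In the real system the SEB-to-ATP transfers cross the Gbit switch, and the ATP can run a Level-2 selection before writing to short-term storage.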

Page 15

01/10/2004 15

DB upgrade

PHENIX
- Change the underlying storage technology
- Move from proprietary OODB (Objectivity) to an open-source RDB
- Preserve the existing API by providing a new implementation
- Storage of metadata and calibration data

Approach:
- Calibration metadata as simple types
- Calibration banks in BLOBs (Binary Large Objects)
- ROOT I/O can be used to serialize banks into BLOBs
- RDBC (ROOT DataBase Connectivity) to send BLOBs to the database
- Allows fast index-based calibration retrieval
- PostgreSQL chosen as RDBMS
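The BLOB approach can be illustrated with a minimal sketch (not the PHENIX code; storeBlob() is a hypothetical stand-in for what RDBC/libodbc++ do with a prepared statement and a BLOB column in PostgreSQL):

```cpp
// Sketch: serialize a calibration bank with ROOT I/O and hand the byte
// buffer to the database layer. TH1F is only a stand-in for a real bank class.
#include <cstdio>
#include "TBufferFile.h"   // ROOT's in-memory serialization buffer
#include "TH1F.h"

void storeBlob(const char* table, int run, const char* bytes, int length)
{
   // Hypothetical: the real code would bind 'bytes' to a BLOB column via RDBC.
   std::printf("table %s, run %d: would store a %d-byte BLOB\n", table, run, length);
}

void writeCalibrationBank(int run)
{
   TH1F bank("gainBank", "example calibration bank", 100, 0., 100.);
   TBufferFile buf(TBuffer::kWrite);
   buf.WriteObject(&bank);                               // ROOT streamers -> flat bytes
   storeBlob("calibrations", run, buf.Buffer(), buf.Length());
}
```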

Page 16

01/10/2004 CHEP’04 Interlaken, CH, Irina Sourikova 16

Software layers

A couple of months were spent installing and testing new software. After fixing a few bugs, the following stack was adopted:
- RDBC: talks to RDBs from ROOT
- libodbc++: C++ library for accessing RDBs, runs on top of ODBC, simplifies the code
- unixODBC: free ODBC interface
- psqlodbc: official PostgreSQL ODBC driver

Stack diagram (top to bottom): user application → PHENIX DB API → RDBC → libodbc++ → unixODBC → psqlodbc → DB

Page 17

17 R. Itoh, CHEP 2004 (Online Computing), 9/29/04

● Requires more statistics for further studies
● Only 25% of raw data used for physics analysis
● Real-time full event reconstruction (DST production)

Latest results (ICHEP04):
sin2φ1(J/ψ Ks) = 0.666 ± 0.046
sin2φ1(φ K0) = 0.06 ± 0.33 ± 0.09
CPV in B0
X(3872)

Page 18

18 R. Itoh, CHEP 2004 (Online Computing), 9/29/04

3. Architecture

Diagram: the Belle event builder and reconstruction farm, spread over the E-hut, the computer centre and the control room. All PCs run Linux (RH 7.3). Components include the Belle event builder, DTF, input distributor, output collector, control node and disk server/DNS/NIS; networking over 1000Base-SX and 100Base-TX via 3Com 4400 and Planex FMG-226SX switches, with a Dell CX600 disk system. Compute nodes: 85 dual-Athlon 1.6 GHz nodes, dual Xeon 3.0 GHz and 2.8 GHz machines, and dual PIII 1.3 GHz machines (170 CPUs).

Page 19

01/10/2004 CHEP 2004, H.-J. Mathes et al. 19

The Pierre Auger Observatory

Southern Observatory under construction
- 6 × 2 telescopes since 07/2004
- 6 more telescopes expected in early 2005

Argentina, province Mendoza (Malargüe)

Page 20

01/10/2004 CHEP 2004, H.-J. Mathes et al. 20

Telescope level DAQ

Telescope-level hardware
- 440 photomultipliers
- VME-like bus system coupled via FireWire (IEEE-1394)
- 1st and 2nd level trigger in FPGA
- Diskless Linux client

Page 21

01/10/2004 21

D0 L3 & DAQ

Serving D0 needs since May 2002

Page 22

01/10/2004 22

Trigger and Data-flow

Future experiments
- LHC: ALICE, ATLAS, CMS, LHCb
- BTeV

Page 23

01/10/2004 23

ALICE HLT

• Commodity PCs
• Dedicated firmware for local pattern recognition
• Performance of track reconstruction close to offline

Page 24

01/10/2004 24

Diagram: approximated muon trajectory through the muon spectrometer.

After L1 emulation, one hit from each trigger station is required to start the pattern recognition on MDT data.

Global pattern recognition: seeded by the trigger chamber data.

The L1 simulation code is used to select the RPC trigger pattern: valid coincidence in the low-pT CMA.

Page 25

25 Gökhan Ünel / CHEP 2004 - Interlaken

ATLAS

EB-only setups

Plot: event-building rate (Hz, up to ~10000) vs number of SFIs (1-17), for 3, 12, 18 and 24 ROS at 2 GHz (solid lines) and 5 ROS at 3 GHz (dashed line).

- 8.55 kHz × 12.4 kB = 106 MB/s: ROS CPU limit
- 9.66 kHz × 12.4 kB = 120 MB/s: ROS NIC limit
- 110 MB/s per SFI: NIC limit
- Small and large systems have the same max EB rate: no penalty as event size grows
- Can run a 24 ROS vs 16 SFI EB system stably
- Faster ROS does a better job (we hit the I/O limit)
- ROS: 12 emulated input channels, 1 kB/channel
- SFI: no output to EF
- More ROS = bigger events!

Page 26

01/10/2004 26

LHCb

Data-flow diagram:
- The Level-0 trigger accepts at 1 MHz into the front-end electronics (FE)
- Multiplexing layer and readout network (Gb Ethernet): Level-1 traffic on 126 links (44 kHz, 5.5 GB/s), HLT traffic on 323 links (4 kHz, 1.6 GB/s) through 29 switches, mixed traffic on 94 links (7.1 GB/s)
- 94 SFCs, each with its own switch feeding the L1/HLT farm (~1800 CPUs, 62 switches); further link counts on the diagram: 32 links, 64 links at 88 kHz
- The TFC system, TRM and sorter handle the L1 decision; accepted events go to the storage system (big disk)
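As a rough cross-check of these figures, 7.1 GB/s shared by 94 SFC links is about 75 MB/s per link, and the 5.5 GB/s of Level-1 traffic on 126 links averages about 44 MB/s per link, both comfortably within Gigabit Ethernet capacity.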

Page 27

01/10/2004 27

Custom simulation results

L1 latency for events in subfarm:

• TDR network configuration modelled

• 25 events in a MEP

• Simulated processing times cut off at 50 ms

If the processing time is cut at 30 ms, < 0.5% of events are lost.

Page 28

01/10/2004 28

BTeV Trigger DAQ System

• Select good physics from the start → Level-1 trigger
• Sophisticated and massive data processing → large processing farm
• Mission critical → special attention to system design reliability
• Integrated design approach combining hw, sw and modelling

Page 29

01/10/2004 29

Page 30

01/10/2004 30

Control

Huge needs: complexity of the overall system

Lots of tools and technologies:
- State machines
- Dedicated languages
- XML

Page 31

01/10/2004 31

COOL- Control Oriented Ontology Language

- Based on RDFS (Resource Description Framework Schema)
- Control domain knowledge creation and reuse
- Knowledge base sharing
- Domain knowledge and operational knowledge separation
- Domain knowledge analyses
- Extensible via XML namespaces and RDF modularization

“Essentially, ontology is the study of what actually is. For most people, for most purposes, ontology ultimately comes down to physics.” (John R. Gregg, cited by Ken Peach in the plenary on Wednesday)

Note of the rapporteur

Page 32

01/10/2004 32

Run-control GUI

Page 33

F. Carena, CERN/ALICE, 29 September 2004, CHEP 2004, Interlaken, slide 33

ALICE Experiment Control
- The ECS is a layer of software on top of the ‘online systems’, controlling the Activity Domains
- The integration of the four ‘online systems’ with the ECS is based on domain-dependent interfaces
- Interfaces based on FSMs (SMI++) and distributed object communication (DIM)

Diagram: the ECS sits on top of the four online systems (HLT, DAQ, TRG, DCS).

Page 34

F. Carena, CERN/ALICE, 29 September 2004, CHEP 2004, Interlaken, slide 34

PCA operations (3): exclude detectors from the partition or re-include them in the partition

Page 35

01/10/2004 35

ALICE HLT Control

Page 36

01/10/2004 CHEP04 - Interlaken

Control of the ATLAS TDAQ system 36

ATLAS DAQ Control in combined Testbeam 2004

Stable operation from the start – Advantage of the component model

• CLIPS: standard open-source expert system
• CORBA for the communication

Page 37

01/10/2004 37

Information from the schema:
- List of devices
- Connectivity between devices
- Timing & Fast Control (TFC) dataflow

Diagram: the LHCb configuration database feeds the Experiment Control System (ECS), based on PVSS (SCADA), which controls the experimental equipment.

Page 38

01/10/2004 38

Page 39

01/10/2004 39

Integration

Future experiments getting prepared for startup:
- Test of prototypes in test beams
- Large integration tests

Special emphasis on integration:
- Frameworks
- Cluster control and monitoring

Page 40

01/10/2004 40

ATLAS Trigger and DAQ (dataflow diagram):
- Detectors (Calo, MuTrCh, other detectors) feed the front-end pipelines at 40 MHz; LVL1 decides within 2.5 µs; LVL1 accept = 75 kHz
- Event data flow through the Read-Out Drivers (ROD), Read-Out Links and Read-Out Buffers (ROB) into the Read-Out Sub-systems (ROS)
- The RoI Builder (ROIB) feeds Regions of Interest to the LVL2 farm (L2 Supervisor, L2 network, L2 Processing Units), which requests RoI data = 1-2% of the event; LVL2 accept = ~2 kHz
- Event Builder (EB): Dataflow Manager (DFM), Event Building network (EBN) and Sub-Farm Inputs (SFI), ~4 GB/s
- Event Filter: processors (EFP) on the EF network (EFN), ~seconds per event; EF accept = ~0.2 kHz; output to the Sub-Farm Output (SFO)
- Rates down the chain: 40 MHz → 75 kHz → ~2 kHz → ~200 Hz; bandwidths: 120 GB/s → ~2+4 GB/s → ~300 MB/s

All subsystems will be used in the CTB!
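Read as rejection factors (a quick check on the quoted numbers): LVL1 reduces 40 MHz to 75 kHz (roughly a factor 500), LVL2 brings 75 kHz to ~2 kHz (a factor ~40), and the Event Filter ~2 kHz to ~200 Hz (a factor ~10), about 2×10^5 overall.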

Page 41

01/10/2004 41

TDAQ in Test Beam

Diagram: combined test-beam TDAQ layout. Tracker, calorimeter and muon detectors (Pixel, SCT, TRT, LAr, Tile, MDT, RPC, TGC, CSC) are read out through their ROSs, together with LVL1, LVL1 calo and LVL1 mu; a GbE data network connects them to the DFM, SFIs, pROS, the LVL2 farm, monitoring and run control; the EF farm, gateway and mass storage sit in the computer centre (C.C.), with an SFO and remote farms.

Not all elements are always used simultaneously.Maximum integration is desired and aimed for.

Page 42

01/10/2004 42

Frameworks

ALICE HLT software framework: transparent parallel processing distributed on a large farm

ATLAS HLT software: reuse of offline components, common to Level-2 and EF

Diagram: the HLT Selection Software (HLTSSW) package contains Steering, Monitoring Service, MetaData Service, ROB Data Collector, Data Manager and the HLT Algorithms; it is used by the L2PU application and the Event Filter, and imports the Event Data Model, reconstruction algorithms and StoreGate from the offline core software (Athena/Gaudi) and offline reconstruction.

HLT Selection Software framework: ATHENA/GAUDI; reuse of offline components; common to Level-2 and EF
Offline algorithms used in EF
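Since the framework is Athena/Gaudi, the unit shared between offline and HLT is typically a Gaudi algorithm; the sketch below is purely illustrative (class, property and message names are invented) and only shows the initialize/execute/finalize structure that can run unchanged offline and in the Event Filter:

```cpp
// Illustrative sketch only (not ATLAS code): the shape of an Athena/Gaudi
// algorithm shared between offline reconstruction and the HLT.
#include "GaudiKernel/Algorithm.h"
#include "GaudiKernel/MsgStream.h"

class ExampleSelectionAlg : public Algorithm {
public:
  ExampleSelectionAlg(const std::string& name, ISvcLocator* svcLoc)
    : Algorithm(name, svcLoc) {
    // Configurable from job options, whether the algorithm runs offline,
    // in the Level-2 processing unit or in the Event Filter.
    declareProperty("MaxChi2", m_maxChi2 = 25.0);
  }

  StatusCode initialize() override {   // once, before the event loop
    MsgStream log(msgSvc(), name());
    log << MSG::INFO << "configured with MaxChi2 = " << m_maxChi2 << endmsg;
    return StatusCode::SUCCESS;
  }

  StatusCode execute() override {      // per event (or per Region of Interest in the HLT)
    // Offline: read the full event; online: the same code sees RoI data only.
    return StatusCode::SUCCESS;
  }

  StatusCode finalize() override { return StatusCode::SUCCESS; }

private:
  double m_maxChi2;
};
```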

Page 43

01/10/2004 27.9.-01.10. CHEP 04, Interlaken

R.Panse, KIP Heidelberg 43

ALICE HLT farm (KIP)

Diagram: the computing cluster is equipped with CIA cards connected to a dedicated service network (TCP/IP) used by the administrator, separate from the cluster network used by the users.

Dedicated hw for the remote administration, control and monitoring of a large cluster

Page 44

01/10/2004 44

Presentation

A lot of emphasis on making the TRG/DAQ system less opaque
- Belle: DQM using L3 results; parameters of hadronic events (visible energy, event shape, ...)
- H1: online event display
- D0: monitoring and history gathering

Page 45

45

The DØ L3 Trigger/DAQ System: a typical collider multilevel trigger system (Level 1, Level 2, Level 3). DAQ readout goes from the read-out crates through a switch to the farm nodes and on to the tape archive, steered by the Routing Master / Supervisor; a monitor server drives the displays. See #477 (Chapin) for details.

This project would not be possible without all the work of the DØ DAQ Group!

Page 46

01/10/2004 46

Processors

- Farm node hardware breaks often (hard drive, fan)
- Correlation with age and component quality
- Software must assume nodes will crash / be unavailable
- Very strict vendor requirements
- P2P file-sharing system for software distribution

Page 47

01/10/2004 47

Processors (2)

Examples from ALICE, D0
How much will we get from Moore’s law in the future?
New hw platforms:
- 64 bits
- Multi-core CPUs

Page 48

01/10/2004 48

Operating Systems

- Online computing used to be a zoo of operating systems
- Special OSes now tend to disappear from the landscape
- The present generation of experiments is still using real-time kernels, but even there Linux is becoming a credible competitor: VxWorks vs TimeSys RT-Linux for VME/PowerPC (BES)
- Transition out of Windows (PHENIX at RHIC)
- New developments (even extremely demanding ones such as BTeV or the LHCb L1) plan to use a standard OS (Linux)

Page 49

01/10/2004 49

Networking

Switched Ethernet has not won... it has put (almost) all the others KO
- Myrinet: used by STAR and baseline for CMS
- Ethernet: the rest of the HEP world

Page 50

01/10/2004 50

Databases

Databases in an online environment? Handle with care!
- Belle: constants DB replicated on each node of the HLT farm to improve efficiency
- D0: tested with Oracle; finally adopted RDB + ROOT

Page 51

01/10/2004 51

Epilogue

Online computing track: very direct report of findings

Major shift of technology: pervasive adoption of commodity computing and networking in ALL areas of trigger and DAQ, even the most challenging ones

Growing importance of control software and simulation

See you at CHEP ’06 to reach new heights!