P. Capiluppi - CSN1 - Roma, 31 Gennaio 2005

CMS Computing Model

Comment: built expressly for the definition of resources, for the LHCC review of January 2005. It is now a "planning document"!?
LHCC Review of Computing Resources for the LHC experiments
David Stickland, Jan 2005
Page 2
Baseline and "Average"

In the Computing Model we discuss an initial baseline:
- Best understanding of what we expect to be possible
- We will adjust to take account of any faster-than-expected developments in, for example, grid middleware functionality
- Like all such battle plans, it may not survive unscathed the first engagement with the enemy...

We calculate specifications for "Average" centers.
- Tier-1 centers will certainly come in a range of actual capacities (available to CMS)
  - Sharing with other experiments...
  - Overall T1 capacity is not a strong function of NTier1
- Tier-2 centers will also cover a range of perhaps 0.5-1.5 times these average values
  - And will probably be focused on some particular activities (calibration, Heavy-Ion, ...) that will also break this symmetry in reality
Definitions in the CMS Computing Model (CM)

- The Tier-n are "nominal", or better, "average/canonical" Tiers
- 7 Tier-1s (including a special one at CERN)
- 25 Tier-2s (of which one special at CERN: ~2-3 canonical Tier-2s)
- The first reference year is 2008 (even if this is a bit confused in the Computing Model paper)
- Resources must be in place in the year "reference - 1" (cost evaluation): "We expect 2007 requirements to be covered by ramp-up needed for 2008"
- Assumed scenario:
CM - Event Data Format (summary)

- Event size: 1.5 MByte
- Event rate: 150 Hz
- Second RAW data copy at Tier-1s
- RECO objects: 2+1 reprocessings/year
- AOD: primary basis of analyses
- Events "catalog"
- A total of ~9.5 PByte per year
Event Format table: Content / Purpose / Event size (MByte) / Events per year / Data volume (PByte)

- DAQ-RAW. Content: detector data in FED format and the L1 trigger result. Purpose: primary record of the physics event; input to the online HLT. Size: 1-1.5 MB. Events/year: 1.5 × 10^9 (= 10^7 seconds × 150 Hz). Volume: –
- RAW. Content: detector data after on-line formatting, the L1 trigger result, the result of the HLT selections ("HLT trigger bits"), potentially some of the higher-level quantities calculated during HLT processing. Purpose: input to Tier-0 reconstruction; primary archive of events at CERN. Size: 1.5 MB. Events/year: 3.3 × 10^9 (= 1.5 × 10^9 DAQ events × 1.1 dataset overlaps × 2 copies). Volume: 5.0 PB
- RECO. Content: reconstructed objects (tracks, vertices, jets, electrons, muons, etc., including reconstructed hits/clusters). Purpose: output of Tier-0 reconstruction and subsequent re-reconstruction passes; supports re-fitting of tracks, etc. Size: 0.25 MB. Events/year: 8.3 × 10^9 (= 1.5 × 10^9 DAQ events × 1.1 dataset overlaps × [2 copies of 1st pass + 3 reprocessings/year]). Volume: 2.1 PB
- AOD. Content: reconstructed objects (tracks, vertices, jets, electrons, muons, etc.); possibly small quantities of very localized hit information. Purpose: physics analysis. Size: 0.05 MB. Events/year: 53 × 10^9 (= 1.5 × 10^9 DAQ events × 1.1 dataset overlaps × 4 versions/year × 8 copies per Tier-1). Volume: 2.6 PB
- TAG. Content: run/event number and high-level physics objects, e.g. used to index events. Purpose: rapid identification of events for further study (event directory). Size: 0.01 MB. Events/year: –. Volume: –
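As a sanity check on the table above (my own arithmetic, not from the slides), each per-format data volume follows directly from event size times events per year; a minimal sketch:

```python
# Recompute the annual data volumes in the event-format table from
# event size x events/year (decimal units: 1 PB = 1e9 MB).
# The event counts encode the multiplicities quoted in the table.
formats = {
    # name: (event size in MB, events per year)
    "RAW":  (1.5,  1.5e9 * 1.1 * 2),        # overlaps x 2 copies
    "RECO": (0.25, 1.5e9 * 1.1 * (2 + 3)),  # 2 first-pass copies + 3 reprocessings
    "AOD":  (0.05, 1.5e9 * 1.1 * 4 * 8),    # 4 versions x 8 copies per Tier-1
}

def volume_pb(size_mb, n_events):
    """Annual data volume in PB."""
    return size_mb * n_events / 1e9

for name, (size, n) in formats.items():
    print(f"{name}: {volume_pb(size, n):.2f} PB/year")
```

The results (4.95, 2.06 and 2.64 PB) reproduce the quoted 5.0, 2.1 and 2.6 PB within the rounding of the table.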
Event Sizes and Rates

Raw data size is estimated at 1.5 MB for the 2 × 10^33 first full physics run
- Real initial event size more like 1.5 MB
  - Expect to be in the range from 1 to 2 MB
  - Use 1.5 MB as the central value
- Hard to deduce when the event size will fall and how that will be compensated by increasing luminosity

Event rate is estimated at 150 Hz for the 2 × 10^33 first full physics run
- Minimum rate for discovery physics and calibration: 105 Hz (DAQ TDR)
- Standard Model (jets, hadronic, top, ...): +50 Hz
- An LHCC study in 2002 showed that ATLAS/CMS have ~the same rates for the same thresholds and physics reach
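The central values above fix the raw data rate into the Tier-0; a quick back-of-the-envelope sketch (my arithmetic, not from the slides):

```python
# Implied raw data rate at the central values of 1.5 MB and 150 Hz.
event_size_mb = 1.5          # MB per event (expected range 1-2 MB)
rate_hz = 150                # Hz at L = 2e33

rate_mb_s = event_size_mb * rate_hz          # 225 MB/s into the Tier-0
# Size of a 20-day input buffer at this rate (decimal TB):
buffer_tb = rate_mb_s * 86_400 * 20 / 1e6
print(rate_mb_s, buffer_tb)
```

At 225 MB/s, twenty days of raw data come to roughly 389 TB, the same ballpark as the 407 TB of Tier-0 disk quoted later in the deck.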
CM - Data Model specifications (or location)

- RAW, RECO and AOD are "split" into O(50) "Primary Datasets", defined by the "trigger matrix" (L1 Trigger + High Level Trigger), with at most 10% overlap
- A second copy of the RAW data is distributed across the Tier-1s; only the CERN Tier-0 has the full RAW sample
- The RECO (read: DST) are distributed among the Tier-1s; no Tier-1 has them all, but each holds the share corresponding to its resident RAW data
  - 2+1 reprocessings per year: 2 at the Tier-1s and 1 at CERN (LHC downtime), including the reprocessing of simulated data
- The AOD are all present at every Tier-1: 4 versions per year, resident on disk
- AOD and RECO can be distributed to each Tier-2: half of the "current" AOD and/or the RECO of at most 5 primary datasets
- The "non-event" data (calibrations, alignments, etc.) reside at the Tier-1(2)s that analyse them and at the CERN Tier-0/Tier-1/Tier-2/Online farm
CM - Data Flow

[Figure: data flow between CNAF and the Italian sites Bo, Ba, LNL, Pd, Pi, Rm1; MC data and reprocessed data]
CM - Analysis Model (description, 1/2)

- It will be defined in the C-TDR, and is still evolving; the P-TDR activity will dictate its initial characteristics
- It will evolve over time in any case; Grid may bring a "significant change"
- But we start in a traditional way; nevertheless, analysts are expected to have access to a User Interface at the Tier-2s and/or Tier-3s
  - Only for some users will jobs be submitted directly to the Tier-1s (or even the Tier-2s); the majority will access them via Grid tools
- Data are navigable within a "primary dataset": AOD / RECO / RAW (vertical streaming)
- Navigation is "protected": it is not sensible to navigate from the AOD to the RECO, only from the RECO to the RAW; indeed, asking the AOD for objects that exist only in the RECO must raise an exception
- The Event Data Model (framework) is being redefined, with ideas also from CDF/BaBar
CM - Analysis Model (description, 2/2)

- RAW and RECO analysis: a significant amount of expert analysis
  - Trigger and detector studies (including calibrations, alignments, backgrounds)
  - Basis for re-reconstruction (new-RECO) and for creating sub-samples (new-AOD, TAGs, event directories, etc.)
  - Dominant at the start of the experiment (2007 and 2008?)
  - Mainly at the Tier-1s (but also at some specialized Tier-2s)
- RECO and AOD analysis: significant physics analysis
  - 90% of all physics analysis can be carried out from AOD data samples; less than 10% of analyses should have to refer to RECO
  - Mainly at the Tier-2s (but also at the Tier-3s?)
- Event Directories & TAGs analysis
  - Part of the "user skims" (or Derived Physics Data), even if they are produced "officially" when the AOD are created
  - They reside at the Tier-2s and Tier-3s
CM - Data and Tier-0 activities

- Online streams (RAW) arrive in a 20-day input buffer
- Archived on tape at the Tier-0
- First RECO reconstruction, also archived on tape at the Tier-0
- RAW + RECO distributed to the Tier-1s: 1/NTier1 to each Tier-1
- AOD distributed to all Tier-1s
- Re-reconstruction at the Tier-0 (LHC downtime)
  - RECO and AOD distributed as above; time taken ~4 months
  - The remaining 2 months for the first complete reconstruction of the HI data, possibly with the contribution of "some" Tier-2s
CMS Tier-0 Resources (with efficiency factors):
- CPU (scheduled): 4588 kSI2K (eff. 85.00%)
- Disk: 407 TBytes (eff. 70.00%)
- Active tape: 3775 TBytes (eff. 100.00%)
- Tape I/O: 600 MB/s (eff. 50%)
- CMS WAN at CERN: ~2 × 10 Gbps
11P. Capiluppi - CSN1 - Roma 31 Gennaio 2005
Tier-0: resource details

Total T0 tapes: 3775 TB
- Raw = 2250 TB; HI Raw = 350 TB; Calib = 225 TB; 1st Reco = 375 TB; 2nd Reco = 375 TB; HI Reco = 50 TB; 1st AOD = 75 TB; 2nd AOD = 75 TB

2007 performance estimates
- PerfCPU (performance per CPU) = 4 kSI2K; NCPU (CPUs per box) = 2; PerfDisk = 900 GB per disk

Total T0 CPU: 4588 kSI2K (EffSchCPU = 85%)
- Raw 1st Reco = 3750 kSI2K; Calib = 150 kSI2K; Raw 2nd Reco = included above
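The tape breakdown above can be cross-checked against the quoted total; a small sketch (my own check, not from the slides):

```python
# Verify that the Tier-0 tape breakdown sums to the quoted 3775 TB total.
t0_tape_tb = {
    "Raw": 2250, "HI Raw": 350, "Calib": 225,
    "1st Reco": 375, "2nd Reco": 375, "HI Reco": 50,
    "1st AOD": 75, "2nd AOD": 75,
}
total_tb = sum(t0_tape_tb.values())
print(total_tb)  # 3775
```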
CM - Data and Tier-1 activities

- Receives its share of RAW + RECO (custodial data) + all the AOD from the Tier-0; custodial data also archived on tape
- Receives the simulated RAW + RECO + AOD from its Tier-2s, also archived on tape; distributes the simulated AOD to all Tier-1s
- Sends the agreed RECO + AOD to its Tier-2s
- Runs the agreed re-reconstruction on its RAW/RECO (real + simulated)
  - Sends the new AOD to the other Tier-1s and to the Tier-2s, and receives them from the other Tier-1s
- Participates in the "calibrations"
- Runs the "large-scale Physics Stream skims" of the ~10 "Physics Groups" that use the resident data
  - The result is sent to the relevant Tier-2s for analysis
  - A full pass over the RECO (data + MC) takes ~2 days, for each group every ~3 weeks
- Supports "limited" (interactive and batch) user access
CMS Tier-1 Resources (with efficiency factors):
- CPU (scheduled): 1199 kSI2K (eff. 85.00%)
- CPU (analysis): 929 kSI2K (eff. 75.00%)
- Disk: 1121 TBytes (eff. 70.00%)
- Active tape: 1837 TBytes (eff. 100.00%)
- Data serving I/O rate: 800 MB/s
- CMS WAN at each Tier-1: ~10 Gbps
13P. Capiluppi - CSN1 - Roma 31 Gennaio 2005
Tier-1: resource details

Total T1 disks: 1121 TB (EffDisk = 70%)
- Raw data = 375 TB; 1st Reco (current version) = 63 TB; 2nd Reco (old version, 10% on disk) = 13 TB; 2nd Reco Sim (old version, 10%) = 17 TB; Simul Raw (10% on disk) = 43 TB; Simul Reco (10% on disk) = 9 TB; 1st AOD (data & sim, current version) = 150 TB; 2nd AOD (old version, 10% on disk) = 30 TB; Calib data = 38 TB; HI Reco (10% on disk) = 6 TB; analysis group space = 43 TB

Total T1 CPU: 2128 kSI2K [CPU scheduled + CPU analysis] (EffSchCPU = 85%, EffAnalCPU = 75%)
- Re-Reco data = 510 kSI2K; Re-Reco sim = 510 kSI2K; Calib = 25 kSI2K; analysis skims = 672 kSI2K (two days per group per Tier-1)

Data I/O rate ≈ 800 MB/s ≈ full local (sim + data) RECO sample size (tapes) / two days
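The disk total above includes the 70% efficiency factor: summing the usable breakdown and dividing by EffDisk approximately recovers the provisioned figure. A sketch of that arithmetic (my own check; the ~3 TB gap to the quoted total is presumably rounding in the individual entries):

```python
# Usable Tier-1 disk breakdown (TB) from the slide.
t1_disk_tb = {
    "Raw data": 375, "1st Reco (curr)": 63, "2nd Reco (old)": 13,
    "2nd Reco Sim (old)": 17, "Simul Raw": 43, "Simul Reco": 9,
    "1st AOD (data & sim)": 150, "2nd AOD (old)": 30,
    "Calib": 38, "HI Reco": 6, "Analysis group space": 43,
}
eff_disk = 0.70
usable_tb = sum(t1_disk_tb.values())      # 787 TB actually usable
provisioned_tb = usable_tb / eff_disk     # ~1124 TB, vs the quoted 1121 TB
print(usable_tb, round(provisioned_tb))
```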
CM - Data and Tier-2 activities

- Serves the analysis of the "local groups": 20-50 users, 1-3 groups? ("local" is not necessarily "geographical")
- Provides the storage support for the "local groups" and for private simulation
- Every 2 days each user analyses 1/10 of the resident AOD and 1/10 of the resident RECO
- Local software development
- User access to the system (User Interface)
- Imports the datasets (RECO + AOD + skims) from the Tier-1s, once every 3 weeks
- Produces and exports the simulation data (sim-RAW + RECO + AOD); not a local responsibility: executed centrally via GRID
- Can analyse and produce the "calibrations", of interest/responsibility of the community accessing the Tier-2
- Can participate in the reconstruction and analysis of the HI data
CMS Tier-2 Resources (with efficiency factors):
- CPU (scheduled): 250 kSI2K (eff. 85.00%)
- CPU (analysis): 579 kSI2K (eff. 75.00%)
- Disk: 218 TBytes (eff. 70.00%)
- CMS WAN at each Tier-2: ~1 Gbps
CM - Computing Summary

Annual expenditures: 20 MCHF in each of 2008, 2009 and 2010

CMS Italia: Tier-1 (2.9 M€) + 6 Tier-2 (0.6 × 6 = 3.6 M€) = ~6.5 M€
Annual expenditures, CMS Italia: ~1.9 M€/year

CERN investment: Tier-0 + one Tier-1 + one Tier-2
Costs evaluated with "CERN criteria" and projected to 2007
CM - Open Issues

- Software & framework: not included in the Computing Model, will have to be in the C-TDR
- Tools and services at the various Tiers: not included in the CM, will have to be in the C-TDR (and LCG-TDR)
- Software (and middleware) development: not included in the CM, will have to be in the C-TDR (and LCG-TDR)
- Location and implementation of needed services: not included in the CM, will have to be in the C-TDR (and LCG-TDR) + MoUs
- Level of service agreements at the Tiers: not included in the CM, will have to be in the C-TDR (and LCG-TDR) + MoUs
- Personnel: not included in the CM, will have to be in the C-TDR (and LCG-TDR) + MoUs
- Role and size of the "Physics Groups": not in the proposed model, at least not explicitly (C-TDR?)
- Data flow "between" Tier-1s and "between" Tier-2s: not in the proposed model, will have to be in the C-TDR
- The "non-event data" are barely treated ...
- The data distribution could be different at the start (2007/8)
- Etc.
Comments and perspectives

Current Tier-1 candidates for CMS:
- {CERN}, USA (FNAL), Italy (CNAF), France (Lyon), Germany (FZK), UK (RAL), Spain (PIC), Taipei, [Russia?]
- Expected contribution percentages: {CERN 8%}, USA 36%, Italy 20%, France 7%, Germany 6%, UK 5%, Spain 3%, Taipei 1%, [Russia? 14%]

Current Tier-2 candidates:
- USA: ~7 universities + LHC Physics Center at FNAL; INFN: 6 sections (±1); IN2P3: none?; DDF: ?; UK: 3-4 sites?; Es: 2-3 sites?; others?

The CM long proposed by CMS Italia is not dissimilar from this one: somewhat more weight on the Tier-2/3s and on Grid, through locally interested human resources and investment in hardware/infrastructure/organization.

Testing and verifying the CM: not only through the P-TDR!
- Distributed analysis (via LCG) for the P-TDR starting now
- LCG service challenge within 2005
- Software development activities and long-term commitments
Timeline

- February, CMS: DST production from ex-DC04 completed
- April, RRB: approval of the MoUs: LCG (phase 2) and experiments
- June, CMS: submission of the C-TDR (and LCG-TDR) to the LHCC
- June?, CMS: working prototype of DST analysis
- Autumn?, CSN1: discussion of the C-TDRs
- October, RRB: more MoUs?
- December, CMS: submission of the P-TDR

In the meantime the hardware and software infrastructure for production and analysis must be implemented (also through work-around solutions)
Schedule (Monday CMS meetings / C-TDR status / C-TDR reviews and approval)

2004
- 6-Dec: Draft 0: basic Computing Model; CMS approval of the CM document for LHCC review
- 13-Dec: Draft 0 sent to LHCC
- 20-Dec, 27-Dec

2005
- 3-Jan, 10-Jan
- 17-Jan: TCM; Draft 1: complete outline / authors; LHCC review of basic Computing Model / resources
- 24-Jan: SC, FB (1)
- 31-Jan: Referees, Tracker
- 7-Feb: C-TDR mini-Workshop #1
- 14-Feb: TCM
- 21-Feb: SC
- 28-Feb: TCM; Draft 2: first complete (rough) draft
- 7-Mar: Referees
- 14-Mar: C-TDR mini-Workshop #2; CMS C-TDR review, part 1: Physics Model (requirements)
- 21-Mar, 28-Mar
- 4-Apr: Draft 3: complete but not polished; CMS approval of M&O manpower for RRB
- 11-Apr: TCM
- 18-Apr: RRB (18), ECAL; CMS C-TDR review, part 2: Computing and Software
- 25-Apr: SC, Tracker; CMS management option: keep to submission schedule or delay?
- 2-May: TCM; CMS C-TDR review, part 3: costs, management plan, milestones...
- 9-May: Referees, Elec week, TriDAS
- 16-May
- 23-May: SC, FB (24); Draft 4: final version; CMS management option: C-TDR ready to request approval?
- 30-May, 6-Jun
- 13-Jun: CMS approval of CMS (& LCG) TDRs; submission to LHCC
- 20-Jun
- 27-Jun: Referees, Run Meet (1)

Other calendar entries: Run Meet (11), Run Meet (3), CPT week, CMS Week, MB/FB (RRB), CMS Phys. Week (FNAL), editorial work with technical and cost updates, CMS Annual Review, CMS Week
Back-up slides
Cost Evolution

This plan is for the "2008" run (first major run)
- Systems must be ramped up in 2006 and 2007
- Established centers (CERN, FNAL, Lyon, RAL) could ramp latish
- New centers have to ramp up manpower as well, and must not leave it too late
- Some capacity required in 2007

Subsequent years
- Operations costs (mostly tape)
- Upgrade/maintenance: replace 25% of "units" each year; 3-4 years maximum lifetime of most components
- Moore's law gives a steady upgrade
- During the next year, last year's data becomes (over the year) mostly staged in rather than on disk
- Luminosity upgrades need more CPU and more disk
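The rolling-replacement argument above can be made concrete with a toy model: replace a fixed fraction of units each year, with newly bought units tracking a Moore's-law price/performance curve. Everything here is illustrative; the doubling time and starting capacity are hypothetical values, not from the slides:

```python
# Toy rolling-upgrade model: each year a fraction of the farm is replaced
# by new units whose performance per unit cost has grown exponentially.
def capacity_after(years, doubling_time=1.5, fraction_replaced=0.25):
    """Relative farm capacity after `years` of constant annual spend."""
    cap = 1.0  # starting capacity, arbitrary units
    for y in range(1, years + 1):
        perf_per_cost = 2 ** (y / doubling_time)  # Moore's-law growth
        cap = cap * (1 - fraction_replaced) + fraction_replaced * perf_per_cost
    return cap
```

With a 25% replacement rate, no unit is older than 4 years, matching the 3-4 year lifetime quoted above, and capacity grows steadily at flat cost.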
Tier-2: resource details

Total T2 disks: 218 TB (EffDisk = 70%)
- 1st Reco (current version, ~5 primary datasets) = 19 TB; 1st Reco Sim (current version, ~5 primary datasets) = 19 TB; 1st AOD (data & sim, current version) = 15 TB; analysis group space = 40 TB; local private simulation data = 60 TB

Total T2 CPU: 829 kSI2K [CPU scheduled + CPU analysis] (EffSchCPU = 85%, EffAnalCPU = 75%)
- Simul = 128 kSI2K; Reco Sim = 71 kSI2K; HI Reco = 38 kSI2K; AOD analyses = 217 kSI2K (each group in twenty days); Reco analyses = 217 kSI2K (all local data in twenty days)
C-TDR WORKING GROUPS (DRAFT)

1. Physics input: Analysis, Data, Event Models
   - Event/data model, streams, data flow, processing, calibration, ...
2. Computing Model: Key Features and Top-Level Architecture
   - Analysis model, groups, users ...
   - Role of LCG and Grid components
3. Core Applications Software and Environment
   - Architecture of software, software principles and development process
   - Development environment and tools
   - Applications framework, persistency, metadata...
   - Toolkits: utilities, plug-ins, mathlibs, graphics, technology choices...
4. Computing Services and System Operations
   - Tier-0, Tier-1s, Tier-2s, local systems, networks
   - (Multiple) Grids: expectations (e.g. LCG), fallback solutions
   - Data management and database systems
   - Distributed (job) processing systems
   - Final data challenge ("Computing Ready for Real Data")
5. Project Management and Resources
   - Size and costs: CPU, disk, tape, network, services, people
   - Proposed computing organisation, plans, milestones
   - Human resources and communications
   - Risk management

A first-order iteration was done with the "Computing Model" paper.

Italian conveners???? And contributors? An Italian task force!
Grids

We expect, at least initially, to manage data location by CMS decisions and tools
- CMS physicists (services) can determine where to run their jobs
  - Minimize requirements on "Resource Brokers"
- CMS with Tier centers manages local file catalogs
- Minimize requirements for global file catalogs
  - Except at dataset level

"Local" users or a "limited set of users" submit jobs on a given Tier-2
- Tier-2s don't have to publish globally what data they have, or be open to a wide range of CMS physicists
- But simulation production runs there using grid tools

For major selection and processing, (most) physicists use a GRID UI to submit jobs to the T1 centers
- Maybe from their institute or Tier-2
- Some local users at the Tier-1s also using local batch systems
Computing at CERN

- The online computing at CMS Cessy
- The CMS Tier-0 for primary reconstruction
- A CMS Tier-1 center
  - Making use of the Tier-1 archive, but requiring its own drives/stage pools
  - Thus can be cheaper than an offsite Tier-1
- CMS Tier-2 capacity for CERN-based analysis
  - Estimated need equivalent to 2-3 canonical CMS T2 centers at CERN
- The CMS CERN Tier-1 and Tier-2 centers can share some resources for economy and performance, and provide a very important analysis activity also at CERN
  - Have not studied this optimization yet
Tier-3s

Page 37 (Specifications, overview): "Tier-3 Centres are modest facilities at institutes for local use. Such computing is not generally available for any coordinated CMS use but is valuable for local physicists. We do not attempt at this time to describe the uses or responsibilities of Tier-3 computing. We nevertheless expect that significant, albeit difficult to predict, resources may be available via this route to CMS."

Page 50 (Tier-2 roles): "All Monte Carlo production is carried out at Tier-2 (and Tier-3)"
Grids

Page 3 (Executive Summary): "...GRID Middleware and Infrastructure must make it possible... via GRID middleware... designs of GRID middleware... in local GRID implementations..."

Page 24 (RAW event rates): "...a figure that could be reasonably accommodated by the computing systems that are currently being planned in the context of the LHC Computing Grid (LCG)."

Page 32 (Analysis Model): "...significant changes if/as we become convinced that new methods of, for example, Grid-based analysis are ready for full-scale deployment."

Page 35 (Middleware and software): "...we do not describe the Grid middleware nor the applications software in detail in this document. ..." Then a full page on Grid:
- Requirement 33: "Multiple GRID implementations are assumed to be a fact of life. They must be supported in a way that renders the details largely invisible to CMS physicists."
- Requirement 34: "The GRID implementations should support the movement of jobs and their execution at sites hosting the data, ..."

Page 36 (Specifications of the CM, overview): "We expect this ensemble of resources to form the LHC Computing Grid. We use the term LCG to define the full computing available to the LHC (CMS) rather than to describe one specific middleware implementation and/or one specific deployed GRID. We expect to actually operate in a heterogeneous GRID environment but we require the details of local GRID implementations to be largely invisible to CMS physicists (these are described elsewhere, e.g.: LCG-2 Operations [9]; Grid-3 Operations [10]; EGEE [11]; NorduGrid [12]; Open Science Grid [13])." Then a couple of sentences about Grid in CMS.

Page 45 (Tier-1 reprocessing): "... we believe this would place unnecessarily high demands on the Grid infrastructure in the early days ..."

Page 50 (Tier-2 data processing): "...the ability to submit jobs locally directly or via Grid interfaces, and ability to submit (Grid) jobs to run at Tier-1 centres ..."