November 15, 1999: MONARC Plenary Meeting, Harvey Newman (CIT)
MONARC Plenary
Status, Simulation Progress and Phase 3 Letter of Intent
Harvey B. Newman (CIT), CERN, December 9, 1999
MONARC Plenary: December 9 Agenda
Introductions (HN, LP) 15’
Status of Actual CMS ORCA databases and relationship to MONARC Work (HN)
Working Group Reports (by Chairs or Designees) 40’
Simulation Reports: Recent Progress (AN, LP) 30’
Discussion 15’
Regional Centre Progress: France, Italy, UK, US, Russia, Hungary; Others 45’
Tier2 Centre Concept and GriPhyN (HN) 10’
Discussion of Phase 3 30’
Steering Group 30’
To Solve: the HENP “Data Problem”
While the proposed future computing and data handling facilities are large by present-day standards, they will not support FREE access, transport or reconstruction for more than a minute portion of the data.
Need effective global strategies to handle and prioritise requests (based on both policies and marginal utility).
Strategies must be studied and prototyped, to ensure Viability: acceptable turnaround times; efficient resource utilization.
Problem to be Explored in Phase 3: How to Use Limited Resources to
Meet the demands of hundreds of users who need “transparent” (or adequate) access to local and remote data, in disk caches and tape stores
Prioritise hundreds to thousands of requests from local and remote communities
Ensure that the system is dimensioned “optimally”
Phase 3 Letter of Intent
Short: Two to Three Pages; may refer to MONARC Internal Notes to document progress.
Suggested Format: Similar to PEP Extension
Motivations for a Common Project
Goals and Scope of the Extension
Schedule
Equipment Needs
Relationship to Other Projects
Computational Grid Projects; US and other nationally funded efforts with R&D components
Submitted to Whom? Suggest to CERN/IT and Hoffmann Panels
MONARC Phase 3: Justification (1)
General: TIMELINESS and USEFUL IMPACT
Facilitate the efficient planning and design of mutually compatible site and network architectures, and services, among the experiments, the CERN Centre and Regional Centres
Provide modelling consultancy and service to the experiments and Centres
Provide a core of advanced R&D activities, aimed at LHC computing system optimisation and production prototyping
Take advantage of work on distributed data-intensive computing for HENP this year in other “next generation” projects [*]
For example in US: “Particle Physics Data Grid” (PPDG) of DoE/NGI; plus the joint “GriPhyN” proposal on Computational Data Grids by ATLAS/CMS/LIGO/SDSS. Note EU plans as well.
[*] See H. Newman, http://www.cern.ch/MONARC/progress_report/longc7.html
MONARC Phase 3: Justification (2)
More Realistic Computing Model Development (LHCb and Alice Notes)
Continue to Review Key Inputs to the Model: CPU Times at Various Phases; Data Rate to Storage; Tape Storage: Speed and I/O
Develop Use Cases Based on Actual Reconstruction and Physics Analyses
Technology Studies - Data Model Dependencies: Data structures; Restructuring and transport operations; Caching, migration, etc.
Confrontation of Models with Realistic Prototypes; Use Cases at every stage
MONARC Phase 3: Justification (3)
Meet Near Term Milestones for LHC Computing
For example CMS Data Handling Milestones, ORCA4 (March 2000):
~1 Million event fully-simulated data sample(s)
Simulation of data access patterns, and mechanisms used to build and/or replicate compact object collections
Integration of database and mass storage use (including caching/migration strategy for limited disk space)
Other milestones will be detailed, and/or brought forward to meet the actual needs for HLT Studies and the TDRs for the Trigger, DAQ, Software and Computing, and Physics
Event production and analysis must be spread amongst regional centres, and candidates
Learn about RC configurations, operations, and network bandwidth by modeling real systems and the analyses actually run on them
Feedback information from real operations into simulations
Use progressively more realistic models to develop future strategies
MONARC: Computing Model Constraints Drive Strategies
Latencies and Queuing Delays
Resource Allocations and/or Advance Reservations
Time to Swap In/Out Disk Space
Tape Handling Delays: Get a Drive, Find a Volume, Mount a Volume, Locate File, Read or Write
Interaction with local batch and device queues
Serial operations: tape/disk, cross-network, disk-disk and/or disk-tape after network transfer
Networks
Useable fraction of bandwidth (Congestion, Overheads): 30-60% (?)
Fraction for event-data transfers: 15-30% (?)
Nonlinear throughput degradation on loaded or poorly configured network paths
Inter-Facility Policies
Resources available to remote users
Access to some resources in quasi-real time
MONARC Phase 2 to 3: Implementation Steps
(1) Set Limits and Constraints
At each Site
Inter-Facility: Allocations and priorities to remote users
(2) System Description: Workloads, network, priorities
(3) Develop Subsystem and Sub-workload Implementations of Interest
Use Cases for Re-Reconstruction and Analysis
Distributed data access (e.g. databases)
Caching and replication strategies
(4) More Realistic Infrastructure
Network behaviors
Redirection of requests
Queueing bottlenecks, and system responses
Transaction management
Interactions of local queue managers with the “job” (or agent)
Resource (data, CPU, network) “discovery”
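Step (4)'s queueing bottlenecks are the kind of behavior the MONARC simulation studies. As a toy illustration only (this is not the actual MONARC simulator; all names are made up), a few lines of Python show how turnaround time degrades once simultaneous jobs outnumber servers:

```python
import heapq

def fifo_turnaround(jobs, n_servers):
    """Toy FIFO model: jobs are (arrival_time, service_time) tuples competing
    for n_servers identical servers. Returns each job's turnaround time
    (completion minus arrival), in arrival order."""
    free_at = [0.0] * n_servers        # time at which each server is next free
    heapq.heapify(free_at)
    out = []
    for arrival, service in sorted(jobs):
        start = max(arrival, heapq.heappop(free_at))  # wait for a free server
        finish = start + service
        heapq.heappush(free_at, finish)
        out.append(finish - arrival)
    return out

# Three equal jobs arriving together: with one server the last job waits
# through the first two, tripling its turnaround.
print(fifo_turnaround([(0, 1), (0, 1), (0, 1)], n_servers=1))  # [1.0, 2.0, 3.0]
print(fifo_turnaround([(0, 1), (0, 1), (0, 1)], n_servers=3))  # [1.0, 1.0, 1.0]
```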
MONARC Phase 2-3: The first step is limits and constraints
Revisit key parameters (CPU, Disk, Tape); Set Reasonable Ranges
Define reasonable range for technology evolution
Make sure all main “wait” states and bottlenecks are included
A high-speed device, or space, may (often) be occupied
Define limits, quotas, priorities
Work in the Range of Limited Resources
How much work is done, or how long it takes to get it done
Set queue-length, attention-span-related limits
Include non-event-related competition for resources
User profiles for networks
Interference from some major system operations: e.g. backup processes
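The “reasonable range for technology evolution” above can be bracketed by compounding pessimistic and optimistic annual improvement rates. A sketch under stated assumptions (the growth rates below are placeholders, not MONARC's adopted numbers):

```python
def technology_range(today, annual_lo, annual_hi, years):
    """Bracket a key parameter (CPU speed, disk or tape capacity) after
    `years` of compound annual improvement between two assumed rates."""
    return today * annual_lo ** years, today * annual_hi ** years

# E.g. capacity doubling every 3 years (pessimistic) vs every 1.5 years
# (optimistic), projected 6 years out from a normalised starting point:
pessimistic, optimistic = technology_range(1.0, 2 ** (1 / 3), 2 ** (1 / 1.5), 6)
# pessimistic -> 4x, optimistic -> 16x
```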
MONARC Phase 3: The second step is system description
Make sure all main tasks, and “background loads”, are included
Make sure all main resources are included: e.g. desktops
Workload Management representation
Multiple Queues for different tasks
Performance classes (number of simultaneous jobs)
Networks
Performance classes (bandwidth limit by task)
Competition from other usage (user profiles)
Performance/load characteristics: degradation under loads
Priority Schemes
Relative Priorities for different tasks
Conditions for priority modifications
Policies; marginal utility
What to do if the system is overloaded?
What to do if a quota is exceeded?
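The overload and quota questions closing the slide amount to an admission policy that any system description must pin down. One possible sketch (the rules and names here are illustrative assumptions, not decisions MONARC had taken):

```python
def admit(priority, cost, queue_len, queue_limit, used_quota, quota, overloaded):
    """Decide what to do with an incoming request.

    Policy sketch: defer work that would break its community's quota,
    reject when the queue exceeds its attention-span length limit, and
    shed low-priority work while the system is overloaded.
    """
    if used_quota + cost > quota:
        return "defer"      # quota exceeded: hold until the quota refreshes
    if queue_len >= queue_limit:
        return "reject"     # queue-length limit reached: turnaround unacceptable
    if overloaded and priority < 1:
        return "shed"       # overloaded system drops only low-priority requests
    return "run"
```

Whatever the actual rules, making them explicit like this is what lets the simulation compare policies by marginal utility.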
MONARC Phase 3: Guidelines Throughout
Prototyping: “Large” and small prototype systems
Do Not Miss Constraints
Do not miss basic complications: e.g. network links do not scale with the number of experiments at one RC
Interaction with experiments’ management, as some negative or controversial information appears
Interaction with software core, especially where there are real or potential impacts on OO architecture and code
US CMS Critical Path Items for 2000
Baseline the Software and Computing Activity as a Project
Establish US-Based Software Effort
Begin Design, Prototyping and Initial Service of the Major US Regional Center at Fermilab
Design/Develop the LHC Computing Model, including the networked Object Database systems
Verify CMS’ Full Range of Physics Discovery Potential, Based on:
OO Reconstruction (ORCA)
Physics Object Reconstruction Groups (JMET, MUON, e/gamma)
US LHC Software and Computing
SCOPE of the PLANS
A formal project supported by DoE/NSF
Well-defined scope, cost and schedule
Clear management structure with “line of authority”
Oversight by host laboratories (FNAL/CMS; BNL/ATLAS)
Lab directors report to the “Joint Oversight Group” of DoE/NSF
US CMS and US ATLAS Each Plan, by 2005:
20-25 M$ for Tier1 Hardware and Staff
> 10 M$ for Software Engineers
> 10 M$ for Tier2 Hardware and Staff
Recurring Costs Each ~$13 M Annually, from 2006, Including $3-4 M for Tier2
Before Baselining: FY2000 Requests ~$2 M Each
US CMS Software and Computing Project
Steps Towards Project Startup
July: Letter from JoG to FNAL Director
August: Formation of ASCB
September: Report from May Reviews
November: Memo from Jim Yeck on “Projectization”
January 2000: “Peer Review”
Summer 2000: Baselining Review
Progress on the MAJOR ELEMENTS:
Project Organization Plan (PMP)
Core Applications Software Subproject and User Facilities Subproject (together forming the S&C Plan)
US CMS (and US ATLAS) Projects Planned to be Baselined by Fall 2000
US CMS Software and Computing PMP
US CMS Collaboration
Project Management Plan for the US CMS Software and Computing Project
November 17, 1999, J. Butler et al.
US CMS S&C Subprojects
Core Application Software Subproject [L. Taylor; I. Willers]
Resource-loaded WBS for CMS and US-CMS
Task-Oriented Requirements: Infrastructure, R&D, US Support
US part of software engineers: 7 FTEs by end 1999, rising to 13 FTEs by 2004; includes ~30% for US-specific support
User Facilities Subproject [V. O’Dell; MK], including the US Major Regional Center
WBS; Detailed Tasks and Schedules through 2000
Implement R&D and Prototype Systems: 1999-2002
Preproduction ODBMS and Event-distribution systems
Implement Production Systems in 2003-2005
Replenish and Upgrade from 2006 on
Staff: 35 FTEs by 2003
DoE/NSF JoG: 11/99 Memo on LHC Computing
Subject: U.S. LHC SOFTWARE & COMPUTING PROJECTS
Actions required to launch the U.S. ATLAS and U.S. CMS software and computing projects:
1. FY 2000 initial funding request (J. Huth and M. Kasemann) 9/99
2. DOE and NSF initial FY00 funding allocations (P.K. Williams/M. Goldberg) 10/99
3. DOE/NSF Amended MOU approved (T. Toohig lead) 10/99
4. U.S. LHC Project Execution Plan Revised/Approved (J. Yeck lead) 11/99
5. S&C Project Management Plans to DOE/NSF (T. Kirk, K. Stanfield) 12/99
6. FY 2000 full funding requests (J. Huth and M. Kasemann) 12/99
7. Technical Peer Review of Plans/Progress (P.K. Williams) 1/00
8. DOE/NSF FY00 final funding allocations (P.K. Williams/M. Goldberg) 2/00
9. DOE/NSF approve reference funding profiles (J. O'Fallon/J. Lightbody) 2/00
10. S&C Project Management Plans approved (J. O'Fallon/J. Lightbody) 3/00
11. DOE/NSF Project Baseline Reviews (JOG charge/T. Toohig lead) 7-8/00
12. U.S. ATLAS and U.S. CMS Project Baselines approved (JOG) 9/00
The proposed schedule for these actions should result in established project organizations and approved baselines by the start of FY2001.
US Review of CMS and ATLAS Computing
The primary purpose of this review is to assess the collaborations’ readiness to proceed to the next stage in their projects and to identify key areas which may need additional attention. Specifically, the review committee should evaluate:
The overall strategy and scope of the U.S. software and computing efforts, and their relationship to the plans of the international community;
The proposed designs of the U.S. ATLAS and U.S. CMS computing facilities;
The realism of the proposed schedules;
The adequacy of the long-term funding profiles proposed by the collaborations;
The commonalities between the U.S. ATLAS and U.S. CMS software and computing plans, and the experiments’ plans to seek common approaches to common problems;
The appropriateness of the management structures and the Project Management Plans presented by the collaborations; and
The schedules of work and cost estimates for the coming year.
“Hoffmann” Review of LHC Computing
Review of the progress and planning of the computing efforts of CERN (IT) and of the LHC experiments for LHC startup:
Understanding of Technical Requirements
Management Structures
Review Chair: Siggi Bethke (MPI, Atlas)
Will set up the mandate and Technical Panels, with HFH
Technical Panels and Proposed Chairs (each chair to propose the program of work of their panel):
Worldwide analysis/computing model (how the analysis is done): Linglin (CCIN2P3)
Software design and development: Kasemann (FNAL)
Management and Resource Planning: Calvetti (INFN)
Steering Committee: Review Chair, Technical Panel Chairs, HFH, RC, Experiment and IT Representatives
Hoffmann Computing Review Schedule
Goal: Agree what is needed for LHC computing, including CERN and outside
Review will cover the CERN/IT Division, as well as all the LHC experiments
IT will write a “Technical Proposal” of their plans
From CMS:
Two representatives to each panel
Two or three representatives to the Steering Group
Timescale:
Mid-2000: First Report from the review
In 2000: Resource-loaded work plans
In 2001: Computing MOUs; commitments of Institutes and CERN for computing [*]
In 2002: Computing TDRs (experiments and CERN/IT)
[*] Earlier IMoUs to support Regional Center Proposals?
LHC Computing: Issues
Computing Architecture and Cost Evaluation
Integration and “Total Cost of Ownership”
Possible Role of Central I/O Servers
Manpower Estimates
CERN versus scaled Regional Centre estimates
Scope of services and support provided
Dependence on Site Architecture and Computing Configuration