The following is a collection of slides from a few recent talks on computing for ATLAS in Canada, plus a few new ones.
I might refer to all of them, or not, depending on time and the scope Les wants covered.
M.C. Vetterli; SFU/TRIUMF
The Canadian Model
Establish a large computing centre at TRIUMF that will be on the LCG and will participate in the common tasks associated with Tier-1 and Tier-2 centres.
Canadian groups will use existing CFI facilities (or what they will become) to do physics analysis. They will access data and the LCG through TRIUMF.
The jobs are smaller at this level and can be more easily integrated into shared facilities. We can also be independent of LCG middleware.
In this model, the TRIUMF centre acts as the hub of the Canadian computing network, and as an LCG node.
The ATLAS-Canada Computing Model

[Diagram: the TRIUMF gateway (cpu/storage, experts) links the Canadian Grid of university sites — UVic, SFU, UofA, UofT, Carleton, UdeM (CFI funded) — over CA*Net4 to CERN and the ATLAS Grid (USA, Germany, France, UK, Italy, …).]

TRIUMF provides: MC data, ESD', calibration, access to the CDN Grid; access to the ATLAS Grid, AOD, DPD, technical expertise.
University sites provide: algorithms, calibration, MC production.
From CERN / the ATLAS Grid: ESD, access to RAW & ESD.
What Will We Need at TRIUMF?
Total computing power needed: 1.8 MSI2k (≈ 250 dual 10 GHz nodes, or 5000 × 1 GHz CPUs)
Total storage required: 340 TB of disk, 1.2 PB of tape
We assume that the network will be 10 GbitE for both the LAN and WAN
These numbers have been supported by an expert advisory committee
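The CPU equivalences quoted above can be cross-checked with the rule of thumb used later in this deck (1 kSI2k ≈ one 2.8 GHz Xeon). This is illustrative arithmetic only, not a sizing tool:

```python
# Sanity-check the slide's CPU equivalences (illustrative arithmetic only).
# Assumption, taken from the DC2 slide later in this deck:
# 1 kSI2k is roughly one 2.8 GHz Xeon.

total_ksi2k = 1800               # 1.8 MSI2k
ksi2k_per_28ghz_xeon = 1.0       # rule of thumb

xeons_needed = total_ksi2k / ksi2k_per_28ghz_xeon   # ~1800 Xeons
ghz_total = xeons_needed * 2.8                      # aggregate clock, ~5040 GHz

# Equivalent counts quoted on the slide:
one_ghz_cpus = ghz_total / 1.0            # ≈ 5000 x 1 GHz CPUs
dual_10ghz_nodes = ghz_total / (2 * 10)   # ≈ 250 dual 10 GHz nodes

print(round(one_ghz_cpus), round(dual_10ghz_nodes))
```

Both slide figures come out consistent to within rounding, which suggests they were derived from the same 1.8 MSI2k requirement.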
Acquisition Profile
Year         2005-06  2006-07  2007-08  2008-09  2009-10
Fraction         10%      33%      67%     100%     133%
CPU (MSI2k)     0.18      0.6      1.2      1.8      2.4
Disk (TB)         34      113      226      340      453
Tape (TB)        120      400      800     1200     1600
The TRIUMF Centre - II
8 NEW people to run the center are included in the budget.
4 for system support; 4 for software/user support.
Also one dedicated technician.
Personnel in the university centers will be mostly for system support.
More software support will be available from ATLAS postdocs.
Status of Funding
The TRIUMF centre will be funded through the next TRIUMF 5-year plan, which starts Apr. 1, 2005.
A decision on this is expected around the end of this year.
University centres are funded through the Canada Foundation for Innovation and the provincial governments. These are shared facilities; they exist and should continue to be funded.
Ask CFI for funds for a second large centre? This is driven by new requirements for Tier-1 centres; we have just started discussing it.
The TRIUMF Prototype Centre
Hardware:
- 5 dual 2.8 GHz Xeon nodes
- 6 white boxes (2 CE, LCFGng, UI, LCG-GIIS, spare)
- 1 SE (770 GB usable disk space)

Functionality:
- LCG core node (CE #1)
- Gateway to Grid-Canada & WestGrid (CE #2)
- Canadian regional centre: coordinates & pre-certifies Canadian LCG centres; primary contact with LCG

Middleware:
- Grid inter-operability: integrate non-LCG sites; there is a lot of interest in this (UK, US)

Rod Walker (SFU research associate) has been invaluable!
The Other Canadian Sites
Victoria:
- Grid-Canada Production Grid (PG-1)
- Grid inter-operability (Dan Vanderster et al.)

SFU/WestGrid:
- Non-LCG test site (incorporated into LCG through TRIUMF)

Alberta:
- Grid-Canada Production Grid (PG-1)
- LCG node
- Coordination of DC2 for Canada (Bryan Caron)

Toronto:
- LCG node
- ATLAS software mirror

Montreal:
- LCG node

Carleton:
- LCG node
Canadian DC2 Computing Resources
Note: 1 kSI2k ≈ one 2.8 GHz Xeon
Canadian Computing Resources for LCG and ATLAS DC2
Site Name           TRIUMF      Toronto   Alberta      Victoria     WestGrid       Montreal       Carleton   TOTAL
Service Manager     Rod Walker  Greg Wu   Bryan Caron  Randy Sobie  Mike Vetterli  Wen Chao Chen  Bill Jack
Centre Type         Tier_2      Tier_2    Tier_2       Tier_3       Tier_3         Tier_2         Tier_2
Year/Quarter        2004-Q2     2004-Q2   2004-Q2      2004-Q2      2004-Q2        2004-Q2        2004-Q2
CPU Power (kSI2k)   10.3        177       60           50           55/110         8.15           13.5       374/419
Disk Space (TB)     0.8         8         9            1            1              0.4            3          23.2
Tape Capacity (TB)  10-20       0         10           10           10-20          0              0          40-60
share_alice         25%         0%        25%          0%           0%             0%             25%
share_atlas         25%         80%       25%          100%         100%           100%           25%
share_cms           25%         20%       25%          0%           0%             0%             25%
share_lhcb          25%         0%        25%          0%           0%             0%             25%
Totals: ≈ 400 × 2.8 GHz CPUs, 23 TB of disk, 50 TB of tape
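The column totals can be rechecked from the per-site figures (values transcribed from the table above; WestGrid uses its low figure and tape uses the low end of each range):

```python
# Recompute the DC2 resource totals from the per-site table above.
# Low-end figures are used where the table gives a range.
sites = {
    "TRIUMF":   {"cpu_ksi2k": 10.3,  "disk_tb": 0.8, "tape_tb": 10},
    "Toronto":  {"cpu_ksi2k": 177,   "disk_tb": 8,   "tape_tb": 0},
    "Alberta":  {"cpu_ksi2k": 60,    "disk_tb": 9,   "tape_tb": 10},
    "Victoria": {"cpu_ksi2k": 50,    "disk_tb": 1,   "tape_tb": 10},
    "WestGrid": {"cpu_ksi2k": 55,    "disk_tb": 1,   "tape_tb": 10},
    "Montreal": {"cpu_ksi2k": 8.15,  "disk_tb": 0.4, "tape_tb": 0},
    "Carleton": {"cpu_ksi2k": 13.5,  "disk_tb": 3,   "tape_tb": 0},
}

cpu_total  = round(sum(s["cpu_ksi2k"] for s in sites.values()))    # 374 kSI2k
disk_total = round(sum(s["disk_tb"] for s in sites.values()), 1)   # 23.2 TB
tape_total = sum(s["tape_tb"] for s in sites.values())             # 40 TB low end

print(cpu_total, disk_total, tape_total)
```

With the 1 kSI2k ≈ 2.8 GHz Xeon rule, 374 kSI2k matches the "≈ 400 × 2.8 GHz CPUs" summary to within rounding.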
Federated Grids for ATLAS DC2
[Diagram: two federated grids feed ATLAS DC2 — Grid-Canada PG-1 (bridged as LCG/Grid-Can) and WestGrid (bridged as LCG/WestGrid) — both connected through SFU/TRIUMF into LCG.]

These are in addition to LCG resources in Canada.
Linking HEPGrid to LCG
[Diagram: Grid-Canada resources (GCRes.1 … GCRes.n) and WestGrid (WG, UBC/TRIUMF) each sit behind their own negotiator/scheduler; TRIUMF runs its own cpu & storage plus an RB/scheduler, and connects upward to the LCG BDII/RB/scheduler.]

Resource information flow (class ads & MDS):
1) Each GC resource publishes a class ad to the GC collector.
2) The GC CE aggregates this info and publishes it to TRIUMF as a single resource.
3) The same is done for WG.
4) TRIUMF aggregates GC & WG and publishes this to LCG (via MDS) as one resource.
5) TRIUMF also publishes its own resources separately.

Job flow:
1) The LCG RB decides where to send the job (GC/WG or the TRIUMF farm).
2) The job goes to the TRIUMF farm, or TRIUMF decides to send the job to GC or WG.
3) The CondorG job manager at TRIUMF builds a submission script for the TRIUMF Grid.
4) The TRIUMF negotiator matches the job to GC or WG.
5) The job is submitted to the proper resource.
6) The process is repeated on GC if necessary.
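The matchmaking step — the TRIUMF negotiator choosing between the aggregated GC and WG ads — can be sketched in class-ad terms. This is a toy illustration in the spirit of Condor matchmaking, not actual Condor-G code; the attribute names and resource numbers are hypothetical:

```python
# Toy sketch of class-ad matchmaking, in the spirit of the Condor negotiator
# described above. Not real Condor-G code; all numbers are hypothetical.

resource_ads = [
    {"name": "GC", "free_slots": 120, "ksi2k": 200},  # aggregated Grid-Canada ad
    {"name": "WG", "free_slots": 40,  "ksi2k": 110},  # aggregated WestGrid ad
]

def match(job_ad, ads):
    """Return the resource whose ad satisfies the job's requirements,
    ranked by free slots (a stand-in for a real Rank expression)."""
    candidates = [a for a in ads
                  if a["free_slots"] >= job_ad["slots_needed"]
                  and a["ksi2k"] >= job_ad["min_ksi2k"]]
    if not candidates:
        return None
    return max(candidates, key=lambda a: a["free_slots"])

job = {"slots_needed": 50, "min_ksi2k": 100}
best = match(job, resource_ads)
print(best["name"])  # GC: only Grid-Canada has >= 50 free slots here
```

The key design point is that both federated grids appear to the negotiator as single aggregated ads, so the same matchmaking logic works whether a resource is one machine or an entire grid.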