
US-CMS User Facilities, Vivian O’Dell, US CMS Physics Meeting, May 18, 2001


Page 1: US-CMS User Facilities

Vivian O’Dell

US CMS Physics Meeting, May 18, 2001

Page 2: User Facility Hardware

Tier 1:

• CMSUN1 (User Federation Host): 8 x 400 MHz processors with ~1 TB RAID
• Wonder (User machine): 4 x 500 MHz CPU Linux machine with ¼ TB RAID
• Production farm:
  Gallo, Velveeta: 4 x 500 MHz CPU Linux servers with ¼ TB RAID each
  40 dual-CPU 750 MHz Linux farm nodes

Page 3: CMS Cluster

[Cluster diagram]
SERVERS: GALLO, WONDER, VELVEETA, CMSUN1
WORKERS: popcrn01 - popcrn40

Page 4: Prototype Tier 2 Status

1. Caltech/UCSD Hardware at Each Site

• 20 dual 800 MHz PIII’s, 0.5 GB RAM
• Dual 1 GHz CPU data server, 2 GB RAM
• 2 x 0.5 TB fast (Winchester) RAID (70 MB/s sequential)
• CMS software installed; ooHit and ooDigi tested.

• Plans to buy another 20 duals this year at each site. See http://pcbunn.cacr.caltech.edu/Tier2/Tier2_Overall_JJB.htm

2. University of Florida

• 72 computational nodes: dual 1 GHz PIII, 512 MB PC133 SDRAM, 76 GB IBM IDE disks
• Sun dual Fiber Channel RAID array, 660 GB (raw), connected to Sun data server
• Not yet delivered; performance numbers to follow.

Page 5: Tier 2 Hardware Status (CalTech)

Page 6: UF Current (“Physics”) Tasks

Full digitization of JPG Fall Monte Carlo sample
• Fermilab, CalTech & UCSD are working on this.
• Fermilab is hosting the User Federation (currently 1.7 TB).
• Full sample should be processed (pileup/no pileup) in ~1-2 weeks(?). Of course, things are not optimally smooth.
• For up-to-date information see: http://computing.fnal.gov/cms/Monitor/cms_production.html
• The full JPG sample will be hosted at Fermilab.

User Federation Support
• Contents of the federation and how to access it are at the above URL. We keep this up to date with production.

JPG NTUPLE Production at Fermilab
• Yujun Wu and Pal Hidas are generating the JPG NTUPLE from the FNAL user federation. They are updating information linked to the JPG web page.

Page 7: Near Term Plans

Continue User Support

• Hosting User Federations. Currently hosting the JPG federation with a combination of disk/tape (AMS server <-> Enstore connection working). Would like feedback.
• Host MPG group User Federation at FNAL?
• Continue JPG ntuple production, hosting and archiving. Would welcome better technology here. Café is starting to address this problem.
• Code distribution support

Start Spring Production using more “grid aware” tools
• More efficient use of CPU at prototype T2’s

Continue commissioning 2nd prototype T2 center

Strategy for dealing with new Fermilab Computer Security
• Means “kerberizing” all CMS computing. Impact on users!
• Organize another CMS software tutorial this summer(?), coinciding with kerberizing CMS machines. Need to come up with a good time. Latter ½ of August, before CHEP01?

Page 8: T1 Hardware Strategy

What we are doing

• Digitization of JPG fall production with Tier 2 sites
• New MC (spring) production with Tier 2 sites
• Hosting JPG User Federation at FNAL. For fall production, this implies ~4 TB storage (e.g. ~1 TB on disk, 3 TB on tape).
• Hosting MPG User Federation at FNAL? For fall production, this implies ~4 TB storage (~1 TB disk, 3 TB tape).
• Also hosting User Federation from spring production, AOD or even NTUPLE for users
• Objectivity testing/R&D in data hosting

What we need
• Efficient use of CPU at Tier 2 sites, so we don’t need additional CPU for production
• Fast, efficient, transparent storage for hosting user federations: mixture of disk/tape
• R&D for efficient RAID/disk/OBJY matching. This will also serve as input to RC simulation.
• Build & operate R&D systems for analysis clusters

Page 9: Hardware Plans FY01

We have defined a hardware strategy for T1 for FY2001.
• ~Consistent with project plan and concurrence from the ASCB.
• Start a User Analysis Cluster at Tier 1. This will also be an R&D cluster for “data intensive” computing.
• Upgrade networking for the CMS cluster.
• Production User Federation hosting for physics groups (more disk/tape storage).
• Test and R&D systems to continue the path towards a full prototype T1 center. We are focusing this year on data server R&D systems.

We have started writing requisitions and plan to acquire most hardware over the next 2-3 months.

Page 10: FY01 Hardware Acquisition Overview

Item                                                      Cost ($)

Networking                                                  52,000
    GE for all data servers, FE for worker nodes,
    switch chassis with 256 Gbps backplane

Test systems                                               101,500
    Sun dataserver + RAID test system                       48,500
    Production test system (8 worker node cluster)          53,000

Analysis cluster (8 dual nodes + disk/data server)          53,000

Data Access and Distribution Servers                       135,000
    2 Linux data servers                                    32,000
    4 TB RAID (various technologies)                       103,000

Add'l Disk Data Storage for User Federation                 45,000

Support (video/computing)                                   10,000

Total                                                      396,500

Page 11: Funding Proposal for 2001

WBS          Item                                              2001 ($)

1.5.1        Onsite Networking                                   52,000
1.6.1        Distributed Data & Computing testbeds               48,500
1.6.2.2.1    Data Intensive Computing                            53,000
1.6.2.2.3    Data Access and Distribution Servers (6 TB)        188,000
1.7.2        Data Storage                                        45,000
1.7.4        Data Import/Export                                       0
1.8.1        Desktop Support                                     10,000

             Total                                              396,500

Some costs may be overestimated, but we may also need to augment our farm CPU.
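
The two tables above slice the same FY01 request in different ways. As a quick arithmetic cross-check, here is a minimal Python sketch (not part of the original slides; item labels abbreviated) confirming that both roll up to the same $396,500 total:

    # Hypothetical cross-check of the FY01 budget tables above.
    # Sub-items roll up to their group totals; both tables give the same sum.

    acquisition = {
        "Networking": 52_000,
        "Test systems": {
            "Sun dataserver + RAID test system": 48_500,
            "Production test system (8 worker nodes)": 53_000,
        },
        "Analysis cluster (8 dual nodes + disk/data server)": 53_000,
        "Data Access and Distribution Servers": {
            "2 Linux data servers": 32_000,
            "4 TB RAID (various technologies)": 103_000,
        },
        "Add'l Disk Data Storage for User Federation": 45_000,
        "Support (video/computing)": 10_000,
    }

    funding_by_wbs = {
        "1.5.1 Onsite Networking": 52_000,
        "1.6.1 Distributed Data & Computing testbeds": 48_500,
        "1.6.2.2.1 Data Intensive Computing": 53_000,
        "1.6.2.2.3 Data Access and Distribution Servers (6 TB)": 188_000,
        "1.7.2 Data Storage": 45_000,
        "1.7.4 Data Import/Export": 0,
        "1.8.1 Desktop Support": 10_000,
    }

    def total(items: dict) -> int:
        """Sum a budget dict, flattening one level of nested sub-items."""
        return sum(total(v) if isinstance(v, dict) else v for v in items.values())

    assert total(acquisition) == 396_500
    assert total(funding_by_wbs) == 396_500
    print(f"Both tables total ${total(acquisition):,}")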

Page 12: Summary

The User Facility has a dual mission:

Supporting Users
• Mostly successful (I think). Open to comments/critiques and requests!

Hardware/Software R&D
• We will be concentrating on this more over the next year.
• This will be done in tandem with T2 centers and international CMS.

We have developed a hardware strategy taking these two missions into account.

We now have 2 prototype Tier 2 centers.
• CalTech/UCSD have come online.
• University of Florida is in the process of installing/commissioning hardware.