
7/22/99 J. Shank US ATLAS Meeting BNL 1

Tier 2 Regional Centers

Goals

• Short-Term:

– Code development centers

– Simulation centers

– Data repository

• Medium-term

– Mock Data Challenge (MDC)

• Long-term

– Data analysis and calibration

• Education

– Contact point between ATLAS, students, and post-docs


7/22/99 J. Shank US ATLAS Meeting BNL 2

Tier 2 Definition

What is a Tier 2 Center?

• assert( sizeof(Tier2) < 0.25 * sizeof(Tier1) );

• What is the economy of scale?

– Too few FTEs: better off consolidating at Tier 1.

– Too many: the assert above fails and administrative overhead grows.

• Detector sub-system specific?

– e.g., a detector calibration center

• Task specific?

– e.g., a DB development center

– A find-the-Higgs center.

• Purely regional?

– Support all computing activities in the region.
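
The assert above is tongue-in-cheek pseudocode, but it can be made runnable. A minimal sketch in C, assuming the 10⁴ SpecInt95 Tier 2 capacity from the working definition later in this talk and a purely hypothetical Tier 1 capacity:

```c
/* Runnable rendering of the sizing rule: a Tier 2 should stay below a
 * quarter of a Tier 1. Capacities in SpecInt95. The Tier 1 number is a
 * hypothetical placeholder; the Tier 2 number is the working definition's. */
#include <assert.h>
#include <stdio.h>

int main(void) {
    const double tier1_capacity = 5.0e4; /* hypothetical Tier 1 */
    const double tier2_capacity = 1.0e4; /* 10^4 SpecInt95 Tier 2 */

    /* The slide's rule of thumb: if this fires, it is no longer a Tier 2. */
    assert(tier2_capacity < 0.25 * tier1_capacity);

    printf("Tier 2 is %.0f%% the size of Tier 1\n",
           100.0 * tier2_capacity / tier1_capacity);
    return 0;
}
```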


7/22/99 J. Shank US ATLAS Meeting BNL 3

Example 1: Boston Tier 2 Center

Focus on Muon Detector Subsystem

• Calibrate the Muon system

• How much data?

– Special calibration runs of real data, muon data only (~10% of each event)

– Overall ~1% of the data, or ~10 TB/yr

• How much CPU?

– 100 s/event => ~30 CPUs
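
This estimate can be checked on the back of an envelope, as sketched below; the ~10⁹ total ATLAS events/yr (100 Hz to storage over a ~10⁷ s run year) and the full-calendar-year CPU usage are assumptions of the sketch, not numbers from the slide:

```c
/* CPUs needed to keep up with a yearly event sample, assuming the CPUs
 * crunch around the calendar year (~3.15e7 s). */
#include <stdio.h>

static double cpus_needed(double events_per_year, double sec_per_event) {
    const double sec_per_calendar_year = 3.15e7;
    return events_per_year * sec_per_event / sec_per_calendar_year;
}

int main(void) {
    /* Assumed: ~1e9 ATLAS events/yr total; the muon sample is ~1% of that. */
    double muon_sample = 0.01 * 1.0e9;
    printf("Muon calibration: ~%.0f CPUs\n", cpus_needed(muon_sample, 100.0));
    return 0;
}
```

This reproduces the ~30 CPUs quoted above (the sketch prints ~32).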


7/22/99 J. Shank US ATLAS Meeting BNL 4

Example 2: Physics Analysis Center

Find the Higgs

• Get 10% of the data to refine algorithms.

• How much data?

– 10 Tb/yr (reconstructed data from CERN).

• CPU:

– 10³ s/event/CPU => 300 CPUs.

– We had better do better than 10³ s/event/CPU!

• Distribute full production analysis to Tier 1 + other Tier 2 centers.
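
(For scale, feeding the cpus_needed sketch from Example 1 a hypothetical 10% sample of 10⁹ events/yr at 10³ s/event gives ~3,000 CPUs, not 300, so the quoted figure presumably assumes a smaller refined sample or the faster reconstruction the last bullet demands.)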


7/22/99 J. Shank US ATLAS Meeting BNL 5

Example 3: Missing Et

From last US ATLAS computing videoconference:

(see J. Huth’s slides on the usatlas web page)

• 40 M events (2% of triggers)

• 40 TB of data

• Would use 10% of a 26,000 SpecInt95 Tier 2 center

Conclusions:

• Needs lots of data storage

• CPU requirements modest


7/22/99 J. Shank US ATLAS Meeting BNL 6

Network Connectivity

[vBNS network map: MCI vBNS POPs at Chicago and New York City linking vBNS approved, planned, and partner institutions across the Northeast and Midwest (MIT, Harvard, Boston U, Tufts, Columbia, Cornell, Michigan, UIUC, FNAL, ANL, NCSA, PSC, among many others); aggregation points include Merit, NYSERNet, MREN/STAR TAP/NGIX-C, and the Sprint NY NAP; international peerings include APAN (70 Mbps), TANet (15 Mbps), MirNET (6 Mbps), CA*Net II, SREN, and DREN; links range from DS3 through OC3 and OC12 to OC48]

vBNS monthly fees:

• DS0: $100
• T1: $800
• DS3: $7,200
• OC3: $21,600
• OC12: $64,800
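
To put these link classes in context, a rough sketch (not from the slides) of how long one of the ~10 TB yearly samples discussed earlier would take to move over each standard line rate, ignoring protocol overhead and competing traffic:

```c
/* Transfer time for a 10 TB dataset over standard line rates.
 * Illustrative only: assumes the full line rate, no overhead. */
#include <stdio.h>

int main(void) {
    const double dataset_bits = 10e12 * 8.0; /* 10 TB */
    const struct { const char *name; double mbps; } links[] = {
        { "T1  ", 1.544 }, { "DS3 ", 45.0 }, { "OC3 ", 155.0 },
        { "OC12", 622.0 }, { "OC48", 2488.0 },
    };
    for (int i = 0; i < 5; i++) {
        double days = dataset_bits / (links[i].mbps * 1e6) / 86400.0;
        printf("%s %7.1f days\n", links[i].name, days);
    }
    return 0;
}
```

At DS3 a 10 TB sample takes roughly three weeks to move; at OC12 about 1.5 days, which is why the working definition on the next slide asks for OC12-class connectivity.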


7/22/99 J. Shank US ATLAS Meeting BNL 7

Working Definition of Tier 2 Center

Hardware:

• CPU: 50 boxes

– Each 200 SpecInt95 => 10⁴ SpecInt95 total

• Storage: 15 TB

– Low-maintenance robot system

People:

• Post-docs: 2

• Computer professionals:

– Designers: 1

– Facilities managers: 2

– Need a sys-admin type plus a lower-level scripting-support type

– Could be shared

Infrastructure:

• Network connectivity must be state of the art (OC12 → OC192?)

• Cost sharing, integration with an existing facility.


7/22/99 J. Shank US ATLAS Meeting BNL 8

Mass Store Throughput

Do we need HPSS?

• 1 GB/s throughput

• High maintenance cost (at least now)

DVD jukeboxes

• 600 DVDs, 3 TB of storage

• 10-40 MB/s throughput

• $45k

IBM Tape Robot

• 7+ TB storage with 4 drives

• 10-40 MB/s throughput

• Low-maintenance IBM ADSM software

Can we expect cheap, low-maintenance 100 MB/s in 2005?
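
A minimal sketch of what these throughputs mean in practice, assuming the relevant question is how long one sequential pass over the working definition's 15 TB store takes at a sustained rate (seek and mount overheads ignored):

```c
/* One full pass over a 15 TB store at various sustained throughputs. */
#include <stdio.h>

int main(void) {
    const double store_mb = 15.0e6;                       /* 15 TB in MB */
    const double rates[] = { 10.0, 40.0, 100.0, 1000.0 }; /* MB/s */
    for (int i = 0; i < 4; i++)
        printf("%5.0f MB/s: %6.1f hours\n",
               rates[i], store_mb / rates[i] / 3600.0);
    return 0;
}
```

At the jukebox/robot rates a full pass takes roughly 4 to 17 days; HPSS-class 1 GB/s brings it down to about 4 hours, which frames the question above.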


7/22/99 J. Shank US ATLAS Meeting BNL 9

Cost

                               Yearly    5 yr
People
• 3 FTE                         525k    2400k
• Post-docs                      ??
Hardware
• 50 boxes x $5k                         250k
• Mass storage tape robot                250k
• Disk ($100/GB, scaled)                 100k
Software
• Licenses                       10k      50k

Funding Profile

[Bar chart: funding in $k per year over years 1-5, split into People and Hardware]

Total: 3.0M over 5 years
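
As a sanity check (assuming, since the table leaves post-docs as "??", that they are excluded), the line items reproduce the quoted total:

```c
/* Sum the 5-yr column of the cost table above, in $k; post-docs excluded. */
#include <stdio.h>

int main(void) {
    double people = 2400, boxes = 250, robot = 250, disk = 100, licenses = 50;
    printf("Total: $%.2fM\n",
           (people + boxes + robot + disk + licenses) / 1000.0);
    return 0;
}
```

This prints $3.05M, within rounding of the quoted 3.0M.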


7/22/99 J. Shank US ATLAS Meeting BNL 10

Summary

Schedule

• How many Tier 2s?

• Where/when?

– Spread geographically, sub-system oriented?

• Need them to be relevant to code development => start as many as possible now.

– Need presence at CERN now.