
Page 1: CERN’s openlab Project

CERN's openlab Project

Sverre Jarp, Wolfgang von Rüden
IT Division, CERN
29 November 2002

Page 2: CERN’s openlab Project

Our ties to IA-64 (IPF)

A long history already…
- Nov. 1992: Visit to HP Labs (Bill Worley): "We shall soon launch PA-Wide Word!"
- 1994-96: CERN becomes one of the few external definition partners for IA-64, by then a joint effort between Intel and HP
- 1997-99: Creation of a vector math library for IA-64: a full prototype demonstrating its precision, versatility, and speed of execution (with HP Labs)
- 2000-01: Port of Linux onto IA-64 ("Trillian" project: glibc, real applications), demonstrated at Intel's "Exchange" exhibition in Oct. 2000

Page 3: CERN’s openlab Project

openlab Status

Industrial collaboration: Enterasys, HP, and Intel are our partners; technology aimed at the LHC era
- Network switch at 10 Gigabit/s; nodes connect via both 1 Gbit/s and 10 Gbit/s
- Rack-mounted HP servers with Itanium processors
- A storage subsystem may be coming from a 4th partner

Cluster evolution:
- 2002: cluster of 32 systems (64 processors)
- 2003: 64 systems ("Madison" processors)
- 2004: 64 systems ("Montecito" processors)

Page 4: CERN’s openlab Project

The compute nodes

HP rx2600:
- Rack-mounted (2U) systems
- Two Itanium-2 processors at 900 or 1000 MHz, field-upgradable to the next generation
- 4 GB memory (max 12 GB)
- 3 hot-pluggable SCSI discs (36 or 73 GB)
- On-board 100 and 1000 Mbit Ethernet
- 4 full-size 133 MHz/64-bit PCI-X slots
- Built-in management processor, accessible via serial port or Ethernet interface

Page 5: CERN’s openlab Project

openlab SW strategy

Exploit existing CERN infrastructure, which is based on:
- RedHat Linux, GNU compilers
- OpenAFS
- SUE (Standard Unix Environment) systems-maintenance tools

Native 64-bit port:
- Key LHC applications: CLHEP, GEANT4, ROOT, etc.
- Important subsystems: Castor, Oracle, MySQL, LSF, etc.
- Intel compiler where it is sensible for performance

32-bit emulation mode wherever it makes sense:
- Low usage, no special performance need
- Non-strategic areas

A common pitfall such 64-bit ports have to catch is sketched below.
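
For illustration only (not from the original slides), a minimal C++ sketch of the classic ILP32-to-LP64 pitfall a native 64-bit port must find: code that assumes long and pointers fit in 32 bits. Everything in it is a hypothetical example.

    #include <cstdio>
    #include <cstdint>

    int main() {
        // On 32-bit IA-32 Linux (ILP32), int, long and pointers are all 4 bytes;
        // on 64-bit IA-64 Linux (LP64), long and pointers grow to 8 bytes.
        std::printf("sizeof(int)=%zu sizeof(long)=%zu sizeof(void*)=%zu\n",
                    sizeof(int), sizeof(long), sizeof(void*));

        long big = 1L << 40;       // needs a 64-bit long: valid under LP64 only
        int truncated = (int)big;  // the silent truncation a 64-bit port must find
        std::printf("big=%ld truncated=%d\n", big, truncated);

        // Portable way to carry a pointer in an integer: uintptr_t, not int.
        void* p = &big;
        std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
        std::printf("address stored losslessly: %#lx\n", (unsigned long)addr);
        return 0;
    }

Built natively on an LP64 system this prints sizes 4/8/8; the same source built for 32-bit emulation mode prints 4/4/4, which is one quick way to confirm which mode a binary is in.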

Page 6: CERN’s openlab Project

openlab - phase 1

Integrate the openCluster: 32 nodes + development nodes
- Rack-mounted DP Itanium-2 systems
- RedHat 7.3 (AW2.1 beta), kernel at 2.4.19
- OpenAFS 1.2.7, LSF 4
- GNU and Intel compilers (+ ORC?)
- Database software (MySQL, Oracle?)
- CERN middleware: Castor data management
- GRID middleware: Globus, Condor, etc.

CERN applications: porting, benchmarking, and performance improvements for CLHEP, GEANT4, ROOT, CERNLIB (a minimal timing sketch follows below)

Cluster benchmarks: 1 and 10 Gigabit interfaces

Also: prepare porting strategy for phase 2

Estimated time scale: 6 months
Awaiting recruitment of: 1 system programmer
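
To make the benchmarking step concrete, here is a minimal wall-clock timing harness of the kind used to compare a routine between IA-32 and Itanium builds; the workload function is a hypothetical stand-in, not code from CLHEP, GEANT4, or ROOT.

    #include <cstdio>
    #include <sys/time.h>   // gettimeofday(), available on 2.4-era kernels

    // Hypothetical stand-in for a numerical kernel being benchmarked.
    static double workload(int n) {
        double sum = 0.0;
        for (int i = 1; i <= n; ++i)
            sum += 1.0 / (double)i;   // simple floating-point loop
        return sum;
    }

    static double seconds_now() {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec + tv.tv_usec * 1e-6;
    }

    int main() {
        const int n = 10 * 1000 * 1000;
        double t0 = seconds_now();
        double result = workload(n);
        double t1 = seconds_now();
        std::printf("result=%.6f elapsed=%.3f s (%.1f ns/iter)\n",
                    result, t1 - t0, (t1 - t0) / n * 1e9);
        return 0;
    }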

Page 7: CERN’s openlab Project

openlab - phase 2: European Data Grid

Integrate the openCluster alongside the EDG testbed

Porting and verification of the relevant software packages:
- Large number of RPMs
- Document prerequisites
- Understand the dependency chain
- Decide when to use 32-bit emulation mode (a small run-time check is sketched below)

Interoperability with WP6: integration into the existing authentication scheme
Interoperability with other partners

GRID benchmarks (as available)

Also: prepare porting strategy for phase 3

Estimated time scale: 9 months (may be subject to change!)
Awaiting recruitment of: 1 GRID programmer
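
As one illustration of how a process can report which mode it is running in (an assumed helper for this write-up, not part of any openlab tooling):

    #include <cstdio>
    #include <sys/utsname.h>   // uname()

    int main() {
        // Print the architecture the kernel reports and the width this binary
        // was compiled for: on an IA-64 host a native build sees 64-bit
        // pointers, while a 32-bit binary run in emulation sees 32-bit ones
        // (what uname reports for such a binary depends on the setup).
        struct utsname u;
        if (uname(&u) != 0) {
            std::perror("uname");
            return 1;
        }
        std::printf("machine=%s pointer width=%zu bits\n",
                    u.machine, 8 * sizeof(void*));
        return 0;
    }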

Page 8: CERN’s openlab Project

openlab - phase 3

LHC Computing Grid

Need to understand:
- Software architectural choices, to be made between now and mid-2003
- The new integration process needed for the selected software
- Time scales

Disadvantage: possible porting of new packages
Advantage: aligned with key choices for LHC deployment

Impossible at this stage to give firm estimates for the timescale and required manpower

Page 9: CERN’s openlab Project

openlab time line

[Timeline figure, end-2002 to end-2005, with three bands (openCluster, EDG, LCG) and milestones in order: Order/Install 32 nodes; Systems experts in place – Start phase 1; Complete phase 1; Start phase 2; Order/Install Madison upgrades + 32 more nodes; Complete phase 2; Order/Install Montecito upgrades; Start phase 3.]

Page 10: CERN’s openlab Project

IA-64 wish list

For IA-64 (IPF) to establish itself solidly in the market-place:
- Better compiler technology, offering better system performance
- Wider range of systems and processors, for instance really low-cost entry models and low-power systems
- State-of-the-art process technology
- Similar "commoditization" as for IA-32

Page 11: CERN’s openlab Project

openlab starts with …

[Diagram: CPU servers connected to a multi-gigabit LAN]

Page 12: CERN’s openlab Project

… and will be extended …

[Diagram: CPU servers on the multi-gigabit LAN, plus a gigabit long-haul link across the WAN to a remote fabric]

Page 13: CERN’s openlab Project

… step by step

[Diagram: CPU servers and a storage system on the multi-gigabit LAN, with the gigabit long-haul link across the WAN to the remote fabric]

Page 14: CERN’s openlab Project

Annexes

- The potential of openlab
- The openlab "advantage"
- The LHC
- Expected LHC needs
- The LHC Computing Grid Project – LCG

Page 15: CERN’s openlab Project

The openlab "advantage"

openlab will be able to build on the following strong points:

1) CERN/IT's technical talent
2) CERN's existing computing environment
3) The size and complexity of the LHC computing needs
4) CERN's strong role in the development of GRID "middleware"
5) CERN's ability to embrace emerging technologies

Page 16: CERN’s openlab Project

The potential of openlab

Leverage CERN's strengths:
- Integrates perfectly into our environment: OS, compilers, middleware, applications
- Integration alongside the EDG testbed
- Integration into the LCG deployment strategy

Demonstrate that the new technologies can be solid building blocks for the LHC computing environment


Page 18: CERN’s openlab Project

The Large Hadron Collider - 4 detectors

[Detector pictures: ATLAS, CMS, LHCb – the fourth detector is ALICE]

Huge requirements for data analysis:
- Storage: raw recording rate of 0.1 – 1 GByte/sec
- Accumulating data at 5-8 PetaBytes/year (plus copies)
- 10 PetaBytes of disk
- Processing: 100,000 of today's fastest PCs
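
As a rough cross-check of these numbers (an illustrative calculation, not from the slides): assuming the customary ~10^7 live seconds of data-taking per year, a raw rate of 0.1 – 1 GByte/sec integrates to roughly 1 – 10 PetaBytes/year, which brackets the 5-8 PetaBytes/year figure above.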

Page 19: CERN’s openlab Project

Expected LHC needs

[Three charts covering 1998-2010:
- Estimated DISK capacity at CERN, in TeraBytes (scale 0 – 7000)
- Estimated mass storage at CERN, in PetaBytes (scale 0 – 140), split into LHC vs. other experiments
- Estimated CPU capacity at CERN, in K SI95 (scale 0 – 6000), with a Moore's-law line based on 2000]

Page 20: CERN’s openlab Project

The LHC Computing Grid Project – LCG

Goal – prepare and deploy the LHC computing environment:

1) Applications support: develop and support the common tools, frameworks, and environment needed by the physics applications

2) Computing system: build and operate a global data-analysis environment integrating large local computing fabrics and high-bandwidth networks, to provide a service for ~6,000 researchers in ~40 countries

This is not "yet another grid technology project" – it is a grid deployment project.