
Presented by

CCS Network Roadmap

Steven M. Carter
Network Task Lead

Center for Computational Sciences

2 Carter_LCF Networking_0611

CCS network roadmap

FY 2006–FY 2007
- Scaled Ethernet to meet wide-area needs and current-day local-area data movement
- Developed wire-speed, low-latency perimeter security to fully utilize the 10-G production and research WAN connections
- Began IB LAN deployment

FY 2007–FY 2008
- Continue building out the IB LAN infrastructure to satisfy the file system needs of the Baker system
- Test Lustre on IB/WAN for possible deployment

Summary: A hybrid Ethernet/InfiniBand (IB) network provides both high-speed wide-area connectivity and ultra-high-speed local-area data movement.


NCCS network roadmap summary

- Ethernet core [O(10 GB/s)] scaled to match wide-area connectivity and archive
- InfiniBand core [O(100 GB/s)] scaled to match central file system and data transfer

[Diagram: the Ethernet and InfiniBand cores interconnecting Viz, the High-Performance Storage System (HPSS), Jaguar, Lustre, Baker, and a Gateway.]
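A quick sanity check on those O(10 GB/s) and O(100 GB/s) figures. The numbers below are not from the slides: they assume 8b/10b line encoding (80% of signaling rate usable) and the standard 4x SDR/DDR signaling rates of the era, and the 64-link bundle size is taken from the 2007 network diagram.

```python
# Back-of-the-envelope InfiniBand 4x link rates, assuming 8b/10b
# encoding (80% of the signaling rate is usable data).

def ib_4x_data_rate_gbps(signal_gbps: float, encoding_efficiency: float = 0.8) -> float:
    """Usable data rate of a 4x IB link after line encoding."""
    return signal_gbps * encoding_efficiency

sdr = ib_4x_data_rate_gbps(10.0)   # 4x SDR signals at 10 Gb/s
ddr = ib_4x_data_rate_gbps(20.0)   # 4x DDR signals at 20 Gb/s

# Aggregate of a 64-link DDR bundle like those in the diagram, in GB/s:
aggregate_gbs = 64 * ddr / 8

print(sdr, ddr, aggregate_gbs)
```

A single 64-link DDR bundle already lands in the O(100 GB/s) range the summary slide claims for the IB core.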


CCS WAN overview

WAN circuits:
- OC-192 to TeraGrid
- 2 x OC-192 to the DOE UltraScience Network (USN)
- OC-192 to NSF CHEETAH
- OC-192 to Internet2
- OC-192 to ESnet

[Diagram: Juniper T640 and T320 routers, Ciena CoreStream and CoreDirector optical gear, a Force10 E600, and a Cisco 6509 (NSTG) tie the CCS and TeraGrid connections to these circuits over internal 10-G and 1-G links.]
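Adding up the circuits above gives a rough ceiling on total wide-area capacity. The 9.953 Gb/s figure is the standard OC-192 line rate; usable payload is slightly lower, so this is an upper bound, not a throughput measurement.

```python
# Rough aggregate WAN capacity from the OC-192 circuit list above.
OC192_GBPS = 9.953   # OC-192/STM-64 line rate

wan_links = {         # circuit counts from the slide
    "TeraGrid": 1,
    "DOE UltraScience Network": 2,
    "NSF CHEETAH": 1,
    "Internet2": 1,
    "ESnet": 1,
}

total_circuits = sum(wan_links.values())
aggregate_gbps = total_circuits * OC192_GBPS

print(total_circuits, round(aggregate_gbps, 1))
```

Roughly 60 Gb/s of wide-area capacity across six OC-192 circuits, which is why the Ethernet core is scaled to the O(10 GB/s) range.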


CCS Network 2007

[Diagram: Ethernet and InfiniBand fabrics linking Jaguar, Spider60, Viz, HPSS, Devel, E2E, and Spider10; IB link bundles include 128 DDR, 64 DDR, 87 SDR, 48-96 SDR, 32 DDR, 24 SDR, 20 SDR, and 3 SDR.]


CCS IB network 2008

[Diagram: InfiniBand fabric linking Jaguar (87 SDR), Viz, HPSS, E2E, Devel, Spider10, and Spider240/Baker (300 DDR, 50 DDR per link); other bundles include 64 DDR, 50 DDR, 32 DDR, 24 SDR, 20 SDR, and 48-96 SDR (16-32 SDR per link).]


FY 2007 milestones

Nov 2006 - Completed IB WAN testing

Jan 2007 - Completed 10-G WAN upgrade

Jan 2007 - IPoIB firewall installation

Mar 2007 - Force10 P10 upgrade

Apr 2007 - Completed VL/QOS testing

Jun 2007 - IPoIB/SDP decision point


IB over WAN testing

Placed 2 x Obsidian Longbow devices between a Voltaire 9024 and a Voltaire 9288.

Provisioned loopback circuits of various lengths on the DOE UltraScience Network and ran tests.

RDMA test results:
- Local (Longbow to Longbow): 7.5 Gbps
- ORNL <-> ORNL (0.2 miles): 7.5 Gbps
- ORNL <-> Chicago (1,400 miles): 7.46 Gbps
- ORNL <-> Seattle (6,600 miles): 7.23 Gbps
- ORNL <-> Sunnyvale (8,600 miles): 7.2 Gbps
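What makes these numbers impressive is the amount of data that must be in flight to keep an RDMA pipe full at continental distances. A hedged bandwidth-delay-product sketch, assuming light propagates at roughly 200,000 km/s in fiber and treating the quoted mileage as the one-way circuit length:

```python
# Bandwidth-delay product: how many bytes must be in flight to sustain
# a given rate over a given distance. Assumptions (not from the slides):
# ~200,000 km/s propagation in fiber; quoted miles are one-way.

MILES_TO_KM = 1.609
FIBER_KM_PER_S = 200_000.0

def bdp_megabytes(rate_gbps: float, one_way_miles: float) -> float:
    """Bytes in flight (MB) for a loopback path of the given length."""
    rtt_s = 2 * (one_way_miles * MILES_TO_KM) / FIBER_KM_PER_S
    return rate_gbps * 1e9 * rtt_s / 8 / 1e6

for miles, gbps in [(1400, 7.46), (6600, 7.23), (8600, 7.2)]:
    print(miles, round(bdp_megabytes(gbps, miles), 1))
```

On the order of a hundred megabytes in flight on the Sunnyvale path, far beyond the buffer credits of a normal IB switch port, which is the problem the Longbow's deep buffering exists to solve.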

[Diagram: Lustre end-to-end test path. The Voltaire 9024 and 9288 switches attach over 4x InfiniBand SDR to the Obsidian Longbow devices, which carry the traffic over OC-192 SONET circuits between Ciena CD-CIs at ORNL and Sunnyvale (SNV) across the DOE UltraScience Network.]


[Diagram: Rizzo (XT3) connected to Spider (Linux cluster) through a Voltaire 9024.]

Using a server on Spider (a commodity x86_64 Linux cluster), we demonstrated the first successful InfiniBand implementation on the XT3.

The basic RDMA test (as provided by the OpenFabrics Enterprise Distribution) shows the maximum bandwidth we could achieve from the XT3's 133-MHz PCI-X bus: about 900 MB/s unidirectionally.
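That 900 MB/s figure is close to the theoretical ceiling of the bus itself. A quick check, using the standard 64-bit PCI-X bus width:

```python
# Theoretical peak of a 64-bit, 133 MHz PCI-X bus, and how close the
# measured ~900 MB/s RDMA result comes to it.

bus_width_bytes = 8        # 64-bit bus
clock_mhz = 133
peak_mb_s = bus_width_bytes * clock_mhz   # MB/s theoretical peak

efficiency = 900 / peak_mb_s

print(peak_mb_s, round(efficiency, 2))
```

Roughly 85% of the theoretical bus peak, so the XT3's RDMA path was limited by PCI-X, not by the InfiniBand fabric.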



Contact

Steven Carter
Tech Integration
Center for Computational Sciences
(865) [email protected]
