Agenda …
• HPC Technology & Trends
• HPC Platforms & Roadmaps
• HP Supercomputing Vision
• HP Today
HPC Trend: Faster processors
• Processors inching ahead of each other
• Itanium … Xeon … Opteron … Xeon …
• Big leap happens this year:

[Chart: processor performance advantage by application segment (CAD, Visual Studio, Fin Model, O&G, DCC, CAE), comparing 2005 Opteron over Xeon against 2006 Xeon over Opteron; reported advantages range from 1% to 55%.]
Industry Standard Processor choice & leadership
Choices at end of 2006

Woodcrest (Xeon)
• Price/performance leadership with 32/64-bit co-existence
• Dual-core, 4 FLOPs/tick, DDR2 FBD memory
• New higher performance chipsets
• Highest clock speed, peak performance, large cache
• Extensive 32-bit, and growing 64-bit, ecosystems
• 2p/4c nodes for highly parallel scale-out workloads

Rev F (Opteron)
• Price/performance leadership with 32/64-bit co-existence
• Dual-core, 2 FLOPs/tick, DDR2 memory
• 1GHz HyperTransport
• High bandwidth for sustained performance
• Extensive 32-bit, and growing 64-bit, ecosystems
• 2p/4c & 4p/8c nodes for moderate scale-out workloads

Montecito (Itanium)
• Highest performance 64-bit processor core for sustained performance
• Dual-core, 4 FLOPs/tick, DDR2 memory
• New higher performance sx2000 and zx2 chipsets
• Highest SMP scalability (to 64p/128c)
• HP-UX for mission-critical technical computing
• Extensive 64-bit ecosystem (and 32/64-bit on HP-UX)
• Scale-up and scale-out for complex workloads

Peak floating-point rates follow directly from the core counts and FLOPs/tick figures above, as sketched below.
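The sketch below shows how a per-socket peak rate is derived from those figures. The 3 GHz clock is a hypothetical value for illustration only; this slide does not give clock speeds.

```latex
% Peak floating-point rate per socket (illustrative sketch).
% The 3 GHz clock is an assumed example value, not a quoted spec.
\[
  \mathrm{Peak_{socket}} = \mathrm{cores} \times \mathrm{FLOPs/tick} \times f_{clock}
\]
\[
  2~\mathrm{cores} \times 4~\mathrm{FLOPs/tick} \times 3~\mathrm{GHz} = 24~\mathrm{GFLOPS}
\]
```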
[Chart: Relative Performance of 17 ISV Applications. Y-axis: performance relative to fastest Itanium, 0.00 to 1.40 (bigger is better). Applications: MSC.NASTRAN, ABAQUS Standard, GAUSSIAN, AMBER, DMOL3, CASTEP, CFX, PAM-CRASH, RADIOSS, STAR-CD, ANSYS, SCHLUMBERGER ECLIPSE, ABAQUS Explicit, BLAST, POWERFLOW, FLUENT, LS-DYNA. Systems: rx1620/HP-UX/1.6GHz, rx1620/LINUX/1.6GHz, DL145/585/2.6GHz S-C, DL140/360/3.6GHz.]
Application performance is a qualitative number based on HP benchmarking results. Results are normalized to the faster Itanium operating environment and sorted by the Opteron:Itanium ratio. ISV compiler choices and optimization levels influence results as well as raw microprocessor capabilities.
Itanium, Opteron, Xeon comparative results 1HCY06
[Chart: EDA - Other: simulation, verification, synthesis, physical design.]
Application performance is a qualitative number based on HP & Intel benchmarking results. Results are normalized to the faster Itanium operating environment and sorted by the Opteron:Itanium ratio. ISV compiler choices and optimization levels influence results as well as raw microprocessor capabilities.
Itanium, Opteron, Xeon comparative results 2HCY06
Broadest Suite of HPC platforms

HP Technical Clusters
• Blade Clusters: BL2xp / BL3xp, BL45p, BL60p, BL460, BL680
• HP Cluster Platform 3000, 4000, 6000 (Version 2)

HP Technical SMP Servers
• ProLiant Family: DL140, DL145, DL360, DL380, DL385, DL580, DL585
• Integrity Family: rx1620, rx2620, rx4640, rx7620, rx8620, Superdome

HP Technical Workstations
• xw8200, nw8240, xw9300, c8000
HP Blades for HPC
• Blades are the ideal platform for clusters
− Simplified management
− Designed for performance and scalability
− Reduced interconnect and network complexity
− High density
− Centralized power management
• Factors for blades adoption in HPC clusters:
− Performance parity with racked systems
− Price advantage shifts to blades
− Interconnect choice expands to cover range of HPC workloads
Agenda …
• Grid Initiative at HP
• HPC Focus & Trends
• HP Supercomputing Vision
• HP Today
What is the “Supercomputing Utility” Vision?
• Develop and offer an open-standards, open-systems based Supercomputing Utility that can expand and grow over time, and truly adapt to the changing enterprise and environment.
• The utility can deliver high computational throughput and support multiple applications with different characteristics and workloads.
• The fabric of this utility is a high-speed network, all linked to a large-scale data store.
• The environment is managed and controlled as a single system, and supports a dispersed workforce, either with direct log-in or grid access.
HP Vision for Supercomputing facility
Computation, Data Management, and Visualization on industry standard servers. Integration is the key!
HP Unified Cluster Portfolio strategy
Advancing the power of clusters with
• Integrated solutions spanning computation, storage and visualization
• Choice of industry standard platforms, operating systems, interconnects, etc.
• HP engineered and supported solutions that are easy to manage and use
• Scalable application performance on complex workloads
• Extensive use of open source software
• Extensive portfolio of qualified development tools and applications

[Diagram: Computation (HP Integrity & ProLiant Servers, HP Cluster Platforms), Data Management (HP StorageWorks Scalable File Share, Storage Grid), and Visualization (Scalable Visualization Array).]
HP XC software for Linux: Leveraged Open Source

• Distribution and Kernel: RHEL 3.0 compatible. Red Hat compatible shipping product, POSIX enhancements, support for Opteron, ISV support.
• Batch Scheduler: LSF 6.0 (Platform LSF HPC). Premier scheduler, policy driven, allocation controls, MAUI support; provides migration for AlphaServer SC customers.
• Resource Management: SLURM (Simple Linux Utility for Resource Management). Fault tolerant, highly scalable, uses the standard kernel.
• MPI: HP-MPI 2.1 (HP’s Message Passing Interface). Provides a standard interface for multiple interconnects, MPICH compatible, support for MPI-2 functionality (see the sketch after this list).
• Inbound Network / Cluster Alias: LVS (Linux Virtual Server). High availability virtual server for managing incoming requests, with load balancing.
• System Files Management: SystemImager, configuration tools, cluster database. SystemImager automates Linux installs, software distribution, and production deployment; supports complete, bootable images; can use multicast; used at PNNL and Sandia.
• Console: Telnet based console commands, power control. Adaptable for HP integrated management processors; no need for terminal servers, reduced wiring.
• Monitoring: Nagios, SuperMon. Nagios is a browser based, robust host, service and network monitor from open source; SuperMon supports high speed, high sample rate, low perturbation monitoring for clusters.
• High Performance I/O: Lustre 1.2.x Parallel File System. High performance parallel file system: efficient, robust, scalable.
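Because HP-MPI is MPICH compatible, a standard MPI program builds and runs without source changes. A minimal sketch follows; the mpicc and mpirun commands are the usual MPICH-style wrappers, and the exact invocation on an XC system may differ.

```c
/* hello_mpi.c: minimal MPI program; compiles against any
 * MPICH-compatible implementation, including HP-MPI. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* start the MPI runtime    */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks    */
    MPI_Get_processor_name(host, &len);     /* node this rank landed on */

    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

Built and launched in the usual MPICH style, e.g. mpicc hello_mpi.c -o hello_mpi, then mpirun -np 4 ./hello_mpi.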
High performance interconnects
• Infiniband
− Emerging industry standard
− IB 4x: 1.8GB/s, <5μs MPI latency
− 24 port, 288 port switches
− Scalable topologies with federation of switches
• Myrinet
− Speeds up to 800MB/s, <6μs MPI latency
− 16 port, 128 port, 256 port switches
− Scalable topologies with federation of switches
• Quadrics
− Elan 4: 800MB/s, <3μs MPI latency
− 8 port, 32 port, 64 port, 128 port switches
− Scalable topologies with federation of switches
• GigE
− 60-80MB/s, >40μs MPI latency
A simple way to measure these latencies is sketched below.

[Diagram: federated switch topologies with PCI-e host adapters. Examples: 24 port node-level switches connecting 12 nodes each under 288 port top-level switches; 128 port node-level switches connecting 64 nodes each under top-level switches; 128 port node-level switches connecting 64 nodes each under 264 port top-level switches.]
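MPI latency figures like those above are conventionally measured with a ping-pong test between two nodes: half the round-trip time approximates one-way latency. A minimal sketch, with an illustrative message size and iteration count rather than HP’s benchmark settings:

```c
/* pingpong.c: crude MPI ping-pong between ranks 0 and 1.
 * Half the average round-trip time approximates one-way latency. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    const int bytes = 8;            /* small message -> latency test */
    char *buf = malloc(bytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);    /* start both ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency ~ %.2f usec\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Run it with the two ranks placed on different nodes so the message actually crosses the interconnect rather than shared memory.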
HP Cluster Platforms
• Factory pre-assembled hardware solution with optional software installation
− Includes nodes, interconnects, network, racks, etc., integrated & tested
• Configure to order from 5 nodes to 512 nodes (more by request)
− Uniform, worldwide specification and product menus
− Fully integrated, with HP warranty and support

HP Cluster Platform 3000
• Compute nodes: ProLiant DL140 G2, ProLiant DL360 G4 server
• Operating systems: Linux, Windows
• Interconnects: GigE, IB, Myrinet

HP Cluster Platform 4000
• Compute nodes: ProLiant DL145 G2, ProLiant DL585
• Operating systems: Linux, Windows
• Interconnects: GigE, IB, Myrinet, Quadrics

HP Cluster Platform 6000
• Compute nodes: Integrity rx1620, Integrity rx2620
• Operating systems: Linux, HP-UX
• Interconnects: GigE, IB, Quadrics
Data Management: HP StorageWorks Scalable File Share (HP SFS)

Customer challenge
• I/O performance limitations

HP SFS provides
• Scalable performance
− Aggregate parallel read or write bandwidth from >1 GB/s to “tens of GB/s”
− 100-fold increase over NFS
• Scalable access
− Shared, coherent, parallel access across a huge number of clients: 1000s today, “10s of thousands” in the future (see the sketch below)
• Scalable capacity
− Multiple terabytes to multiple petabytes
• Based on breakthrough Lustre technology
− Open source, industry standards based

[Diagram: Linux cluster drawing scalable bandwidth from an HP Scalable File Share built on a Scalable Storage Grid (Smart Cells).]
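From an application, this shared, coherent, parallel access is typically exercised through MPI-IO, with every rank writing a disjoint slice of one shared file. A minimal sketch; the /sfs/scratch path is hypothetical, since a real SFS/Lustre mount point is site-specific.

```c
/* parallel_write.c: each MPI rank writes its own block of a single
 * shared file; on a parallel file system such as Lustre the writes
 * proceed concurrently across the storage servers. */
#include <mpi.h>
#include <string.h>

#define BLOCK (1 << 20)                      /* 1 MiB per rank, illustrative */

int main(int argc, char **argv)
{
    int rank;
    static char buf[BLOCK];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 'A' + rank % 26, BLOCK);     /* rank-specific fill pattern */

    /* hypothetical path on the parallel file system */
    MPI_File_open(MPI_COMM_WORLD, "/sfs/scratch/demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* each rank writes at a disjoint offset: no lock contention */
    MPI_File_write_at(fh, (MPI_Offset)rank * BLOCK, buf, BLOCK,
                      MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```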
Scalable Visualization

Customer challenge
• Visualization solutions too expensive, proprietary, not scalable

HP Scalable Visualization Array (SVA)
• Open, scalable, affordable, high-end visualization solution based on industry standard Sepia technology
• Innovative approach combining
− standard graphics adapters
− accelerated compositing
• Yields a system that scales to clusters capable of displaying 100 million pixels or more
Delivering the Vision

[Diagram: an XC compute cluster of app nodes plus service nodes (log-in, admin, services) taking inbound connections from users; visualization nodes (SVA rendering & compositing) driving a multi-panel display device over a pixel network; HP SFS servers (SFS/Lustre Scalable File Share) with object storage servers (OST) and meta data servers (MDS) backed by scalable HA storage farms; all tied together by a high speed interconnect.]
TIFR – Tata Institute of Fundamental Research
Computational Mathematics Laboratory (CML)
Industry: Scientific Research - Pune

• Challenges
− Currently AlphaServer based: increase computational power
− Explosive growth in new research: massive increase in performance
− Partnership for support services

• HP Solution
− 1 teraflop peak HP XC (peak arithmetic sketched below) based on:
• CP6000 with (77) 2CPU/4GB Integrity rx1620 1.6GHz compute nodes and an Integrity rx2620 service node
• 288 port Infiniband switch
− HP Math Libraries for Linux on Itanium
− New CCN for collaboration on algorithms

• Results
− First step to a massive supercomputer
− Improved ability to solve computationally demanding algorithms
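As a back-of-envelope check, the quoted peak is consistent with the 4 FLOPs/tick Itanium figure given earlier in this deck:

```latex
% 77 nodes x 2 CPUs/node x 4 FLOPs/tick x 1.6 GHz
\[
  77 \times 2 \times 4 \times 1.6~\mathrm{GHz}
    \approx 986~\mathrm{GFLOPS} \approx 1~\mathrm{teraflop~peak}
\]
```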
We need partners who complement our core competency in areas like complex hardware system design, microelectronics, nanotechnology and system software. This is where HP steps in, as it has been investigating HPC concepts for more than a decade and this has led to the creation of Itanium processors jointly with Intel.There is a need to build a giant hardware accelerator to address fundamental questions in computer science, which could not be answered until now, either by theory or experiment, to influence future development of the subject, facilitate scientific discoveries and solve grand challenges in various disciplines. This supercomputer, which will help us understand how to structure our algorithms for a larger system, is only a first step in that direction.
Professor Naren Karmarkar, Head CML, TIFR
(Dr Karmarkar is a Bell Labs Fellow)
TI – Texas Instruments
Industry: Semiconductor Engineering / EDA - Bangalore

• Challenges
− 5,000 processors already installed; additional cluster computing required
− Reduce design cycle time by 10X
− Datacenter now full; will turn to industry for Utility Computing

• HP Solution
− 5.6 teraflop peak Beowulf clusters based on:
• Cluster Platform 4000
− 500 compute nodes
− ProLiant DL145 G2 2.8GHz 2P/2GB
− Gigabit Ethernet interconnect
− Support services
− Adding to 100+ existing DL585 servers

• Result
− Additional 1,000 processor cluster for development requirements

www.ti.com/asia/docs/india/index.html
IGIB – Institute of Genomics & Integrative Biology
Industry: BioTechnology / LMS - Delhi

• Challenges
− Currently AlphaServer based: increase computational power
− Explosive growth in new research: massive increase in performance
− Partnership for support
− Improve cost efficiencies

• HP Solution
− 4½ teraflop peak HP XC based on:
• CP3000 with (288) 2CPU/4GB ProLiant DL140 G2 3.6GHz nodes using Infiniband
• CP3000 with (24) 2CPU/4GB ProLiant DL140 G2 nodes as a test cluster
• Superdome, 12 TB StorageWorks EVA SAN
− Single point support service
− IGIB research staff collaboration

• Results
− HP India’s largest supercomputer
− One of the world’s most powerful research systems dedicated to Life Sciences
HP’s Cluster Platform provides a scalable architecture that allows us to complete large, complex simulation experiments such as molecular interactions and dynamics, virtual drug screening, protein folding, etc., much more quickly. This technology, combined with HP’s experience and expertise in life sciences, helps IGIB speed access to information, knowledge, and new levels of efficiency, which we hope will ultimately culminate in the discovery of new drug targets and predictive medicine for complex disorders with minimum side effects.
Dr. Samir Brahmachari, Director, IGIB