IDC HPC User Forum Conference: Appro Product Update
Anthony Kenisky, VP of Sales
APPRO HPC User Forum Presentation – 09-15-10
• Innovative Technology
– Won SC Online 2009 Product Of the Year Award
• Price/Performance Leadership
– Consistently offers the best price/performance solution in the marketplace
• Brand & Reputation
– Impeccable reputation among both customers and competitors
• Flexibility
– In-house engineering capabilities to tailor solutions to specific customer problems
Company Introduction
:: Based on GPUs & Flash
Innovative Solutions
• Hybrid solutions based on the latest CPU & GPU Technologies
• Flash solutions for I/O enhancements or Global memory
• GPU Blade & Rack mount solutions
:: Recent Major Design Wins
Innovative Design Wins
DASH & Trestles
DASH – a winner of the SC09 Storage Challenge, available on the TeraGrid network today: a 5.7TF cluster with SSDs and 768GB of global shared memory space per node.
Trestles – will have 10,368 processor cores, a peak speed of 100 teraflop/s, and 38 terabytes of flash memory. Like DASH, Trestles will be available to users of the TeraGrid, the nation’s largest open-access scientific discovery infrastructure.
:: Recent Major Design Wins
Data Intensive Design Wins
Next-generation supercomputer to solve “data-intensive” scientific problems in 2011
Gordon – will feature 245TF of total compute power, 64TB of DRAM, 256 TB of flash memory, and four petabytes of disk storage.
A key feature of Gordon will be the supernode: each supernode (a group of 32 nodes) delivers up to 7.7 TF of compute power and 10 TB of memory.
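The per-node figures implied by the supernode spec can be checked with simple arithmetic. A minimal sketch using only the numbers stated above (32 nodes per supernode, 7.7 TF and 10 TB per supernode, 245 TF system total):

```python
# Back-of-the-envelope check of Gordon's supernode figures from the slide.
NODES_PER_SUPERNODE = 32
SUPERNODE_TFLOPS = 7.7
SUPERNODE_MEM_TB = 10.0
SYSTEM_TFLOPS = 245.0  # Gordon's total compute power

tflops_per_node = SUPERNODE_TFLOPS / NODES_PER_SUPERNODE
mem_gb_per_node = SUPERNODE_MEM_TB * 1024 / NODES_PER_SUPERNODE
supernodes = SYSTEM_TFLOPS / SUPERNODE_TFLOPS

print(f"{tflops_per_node * 1000:.0f} GFLOPS per node")   # ~241 GFLOPS
print(f"{mem_gb_per_node:.0f} GB memory per node")       # 320 GB
print(f"~{supernodes:.0f} supernodes in the system")     # ~32
```

The totals are internally consistent: 245 TF divided by 7.7 TF per supernode gives roughly 32 supernodes, i.e. about 1,024 nodes.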
Gordon will become a key part of a network of next-generation high-performance computing (HPC) systems being made available to users of the TeraGrid.
:: Recent Major Design Wins
Data Intensive Design Wins
LLNL Testbed Cluster – over 40,800,000 IOPS and 320GB/s of aggregate bandwidth in two racks. Designed to give computational scientists an I/O test bed for scalable parallel file systems such as Lustre and Ceph; to allow evaluation of large-scale checkpoint/restart mechanisms that don’t depend on globally scalable file systems; and to support investigation of cloud-based file systems and analysis tools such as Hadoop and MapReduce.
:: GPU Solutions
Appro GPU Computing
Tetra
Supports 1:1, 1:2, 1:4 CPU:GPU ratio combinations
• Platform specifications
– Integrated 2P x86 host server + 4x “Fermi” GPUs – M2050 or M2070
– Host server agnostic – can support either Intel or AMD host boards
– Supports 6x hot-swappable 2.5” HDDs
– Supports 1 additional PCIe expansion slot
– Intelligent power control – GPUs can be powered down independently of the host to save overall system power
– Integrated IPMI 2.0 Remote Management
First 1U server to achieve over 1 TeraFLOP on Linpack
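That Linpack figure can be put in context against theoretical peak. A rough sketch, assuming the ~515 GFLOPS double-precision peak of a Tesla M2050 and a notional ~100 GFLOPS per host CPU socket (the CPU figure is an illustrative assumption, not from the slides):

```python
# Rough Linpack efficiency estimate for a 1U Tetra node (2 CPUs + 4 Fermi GPUs).
GPU_PEAK_TF = 0.515   # Tesla M2050 double-precision peak, TFLOPS
CPU_PEAK_TF = 0.100   # assumed per-socket peak for a 2010-era x86 CPU
LINPACK_TF = 1.0      # "over 1 TeraFLOP on Linpack" from the slide

peak = 4 * GPU_PEAK_TF + 2 * CPU_PEAK_TF          # ~2.26 TF theoretical peak
efficiency = LINPACK_TF / peak
print(f"peak ~{peak:.2f} TF, Linpack efficiency ~{efficiency:.0%}")
```

Under these assumptions the node runs Linpack at roughly 44% of theoretical peak, a plausible figure for early GPU-accelerated HPL runs.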
:: 2CPU & 4GPU in one Server
Appro 1U Tetra Solution
:: CPU/GPU Compute Blades
• Direct PCIe bus slot to slot connection
• No need for external PCIe cables
• Host/GPU Pair is a single GPU blade system
• Each GPU system is hot-swappable & easily serviceable
• All monitoring sensors and data are integrated between host and GPU
• Host and GPU module can be upgraded independently
Compute Host Blade + GPU Expansion Blade = GPU Compute Blade
Appro GreenBlade System
Hybrid Computing
Traditional 2P servers: 4x racks of 2P x86 servers
Combination x86 host + GPUs: 4x racks of hybrid servers
Same floor space, same power/cooling, up to 8x more performance
:: Optimum Performance/Density
Do More with Less with Appro