Page 1

HPC Case Study & References
Supermicro © 2009 Confidential

Page 2

Super Micro Computer, Inc. (NASDAQ: SMCI), a leader in application-optimized, high-performance server solutions, participated in the inaugural ceremony for CERN's LHC (Large Hadron Collider) Project in Geneva. Supermicro's SuperBlade® servers, housed at CERN (one of the world's largest research labs), enabled the LHC Project with superior computational performance, scalability, and energy efficiency.

“We are honored to have Supermicro's industry-leading blade server technology installed at the foundation of this monumental scientific research project,” said Charles Liang, CEO and president of Supermicro. “Our SuperBlade® platforms deliver unsurpassed performance, computing density and energy efficiency, making them ideal for HPC clusters and data centers.”

The LHC Project deploys, among other systems, Supermicro's award-winning SuperBlade® servers. These optimized solutions empower Supermicro customers with the most advanced green server technology available, including 93%* peak power supply efficiency, innovative and highly efficient thermal and cooling system designs, and industry-leading performance-per-watt (290+ GFLOPS/kW*).
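For readers who want to sanity-check efficiency figures like the one above, here is a minimal Python sketch of the performance-per-watt metric; the enclosure performance and power numbers in it are illustrative assumptions, not Supermicro measurements.

```python
# Minimal sketch of the performance-per-watt metric (GFLOPS/kW).
# All numbers below are illustrative assumptions, not measured Supermicro values.

def gflops_per_kw(total_gflops: float, total_watts: float) -> float:
    """Return sustained performance per kilowatt of power drawn."""
    return total_gflops / (total_watts / 1000.0)

# Hypothetical example: an enclosure delivering 1,100 GFLOPS
# while drawing 3,600 W at the wall.
print(round(gflops_per_kw(1_100, 3_600), 1))  # -> 305.6 GFLOPS/kW
```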

Installation Example – CERN

100+ SuperBlade® nodes, along with 2,000+ rack-mount servers

[Figure: 14-blade and 10-blade SuperBlade® configurations]

Page 3

CERN LHC (Large Hadron Collider) Project

CERN: Large Hadron Collider research center

[Figure: 14-blade and 10-blade SuperBlade® configurations]

Source: Dr. Helge Meinhard, CERN

Page 4

CERN LHC (Large Hadron Collider) Project

Tunnel of 27 km circumference, 4 m diameter, 50–150 m below ground; detectors at four collision points
15 petabytes per year for the four experiments

Source: Dr. Helge Meinhard, CERN
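To give a rough sense of scale for the 15 PB/year figure, the sketch below (not part of the original slide) converts it to an average sustained data rate, assuming continuous recording over a full calendar year.

```python
# Back-of-the-envelope conversion of the quoted 15 PB/year into an
# average sustained data rate. Assumes continuous recording over a full
# calendar year, which overstates actual LHC uptime.

PETABYTE = 10**15                     # bytes (decimal petabyte)
SECONDS_PER_YEAR = 365 * 24 * 3600    # non-leap year

annual_volume_bytes = 15 * PETABYTE
avg_rate_gb_per_s = annual_volume_bytes / SECONDS_PER_YEAR / 10**9

print(f"{avg_rate_gb_per_s:.2f} GB/s average")  # ~0.48 GB/s
```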

Page 5

Installation Example – Research Labs

Total 4,000 nodes / 4-way AMD Barcelona quad-core / with InfiniBand connection

300 nodes / 2 DP nodes per 1U / 1U Twin™ / Intel Harpertown / with InfiniBand (onboard) connection

Shanghai-ready

Page 6

LLNL Hyperion Petascale Cluster

Page 7

LLNL Hyperion Compute Node

Supermicro A+ Server 2041M-32R+B
H8QM3-2 motherboard, 2U, 4-socket AMD quad-core
One configuration with 64 GB RAM and one with 256 GB RAM
Two x16 and two x4 PCIe 1.0 slots
One Mellanox ConnectX IB DDR 2-port PCIe CX4 HCA
One LSI SAS controller with 8 external ports

Page 8

Speeding up science: CSIRO's CPU-GPU cluster

CSIRO GPU supercomputer configuration: the new CSIRO high-performance computing cluster will deliver up to 200+ teraflops of computing performance and will consist of the following components:

100 Supermicro dual Xeon E5462 compute nodes (i.e. a total of 800 2.8 GHz compute cores) with 16 GB of RAM, 500 GB SATA storage and DDR InfiniBand interconnect
50 Tesla S1070 units (200 GPUs with a total of 48,000 streaming processor cores)
96-port DDR InfiniBand switch
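The headline core counts in the list above can be cross-checked with a short sketch; it assumes two quad-core Xeon E5462 CPUs per node and the standard Tesla S1070 layout of four GPUs with 240 streaming processors each, details not spelled out on the slide itself.

```python
# Cross-check of the CSIRO cluster headline numbers.
# Assumes 2 quad-core Xeon E5462 CPUs per node and the standard Tesla S1070
# layout of 4 GPUs x 240 streaming processors per unit (not stated on the slide).

cpu_nodes = 100
cpu_sockets_per_node = 2        # "dual Xeon" nodes
cores_per_socket = 4            # Xeon E5462 is a quad-core part
cpu_cores = cpu_nodes * cpu_sockets_per_node * cores_per_socket

s1070_units = 50
gpus_per_unit = 4
sp_per_gpu = 240                # streaming processors per GPU
gpus = s1070_units * gpus_per_unit
streaming_cores = gpus * sp_per_gpu

print(cpu_cores)        # 800 CPU cores
print(gpus)             # 200 GPUs
print(streaming_cores)  # 48000 streaming processor cores
```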

On the TOP500 / Green500 lists (June 2009): delivered by NEC (hybrid cluster): Supermicro Twin + NVIDIA GPUs (50 TFLOPS)
TOP500: ranked 77
Green500: ranked 20, the most efficient cluster system in the x86 space

Page 9

Oil & Gas Application

Oil & Gas Exploration – Seismic Data Analysis
Supermicro 2U Twin² System – 512 nodes

Page 10

HPC Implementation with PRACE

Challenge

Customer: Swedish National Infrastructure for Computing (SNIC), Royal Institute of Technology (KTH), Sweden, jointly with the Partnership for Advanced Computing in Europe (PRACE)

Need: A general purpose HPC cluster for assessment of energy efficiency and compute density achievable using standard components

Buying Criteria:
Superior performance/watt/sq. ft.
Ability to control energy consumption based on workload
End-to-end non-blocking, high-throughput I/O connectivity
Latest server chipset supporting PCI-E Gen2
Non-proprietary x86 architecture

Solution

Collaborative effort between Supermicro and AMD

Infrastructure (SuperBlade®):
18 7U SuperBlade® enclosures
10 4-way blades with 240 processor cores per 7U
QDR InfiniBand switch (40 Gb/s quad data rate)
High-efficiency, N+1 redundant power supplies (93% efficiency)
Blade enclosure management solution including KVM/IP on each node

Processor: Six-Core AMD Opteron™ HE processor

Chipset: AMD SR5670 chipset supporting HyperTransport™ 3 interface, PCI-E Gen2 I/O connectivity, and APML power management

Computing Capacity:

180 4-way systems with a total of 720 processors – 4,320 cores
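A brief sketch of how these capacity figures follow from the configuration above (18 enclosures, 10 four-way blades each, six-core Opteron HE processors):

```python
# Arithmetic behind the PRACE/SNIC capacity figures quoted above:
# 18 enclosures x 10 four-way blades, six-core Opteron HE processors.

enclosures = 18
blades_per_enclosure = 10
sockets_per_blade = 4           # 4-way blades
cores_per_processor = 6         # Six-Core AMD Opteron HE

systems = enclosures * blades_per_enclosure
processors = systems * sockets_per_blade
cores = processors * cores_per_processor

print(systems)     # 180 four-way systems
print(processors)  # 720 processors
print(cores)       # 4320 cores
```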

Page 11

References

LLNL: Lawrence Livermore National Laboratory
Dr. Mark Seager, Director of the Advanced Simulation and Computing Program
V-925-423-3141, P-800-265-8691
P.O. Box 808, L-554, East Ave., Livermore, CA 94551
[email protected]

CERN: European Organization for Nuclear Research
Dr. Helge Meinhard, in charge of Server & Storage IT
1211 Geneva 23, Switzerland
Tel. +41 22 767 60 31
[email protected]

PRACE: Partnership for Advanced Computing in Europe
Prof. Lennart Johnsson, PDC Director
Teknikringen 14, Royal Institute of Technology, SE-100 64 Stockholm, Sweden
[email protected]