TOP500 Supercomputers, June 2013
InfiniBand Strengthens Leadership as the Interconnect of Choice by Providing the Best Return on Investment
TOP500 Performance Trends
Explosive high-performance computing market growth
Clusters continue to dominate with 83% of the TOP500 list
(Performance development chart: growth lines at 82% CAGR and 40% CAGR; see the CAGR sketch below)
Mellanox InfiniBand solutions provide the highest system utilization in the TOP500 for both high-performance computing and clouds
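The CAGR figures above can be reproduced from two aggregate performance points. A minimal sketch in C, where the start and end values are placeholder assumptions rather than figures from the list:

```c
/* Compound annual growth rate: cagr = (end/start)^(1/years) - 1.
 * The sample values below are placeholders, not TOP500 figures. */
#include <math.h>
#include <stdio.h>

static double cagr(double start, double end, double years)
{
    return pow(end / start, 1.0 / years) - 1.0;
}

int main(void)
{
    /* e.g. aggregate performance growing 20X (arbitrary units)
     * over 5 years corresponds to roughly 82% CAGR */
    printf("CAGR: %.1f%%\n", 100.0 * cagr(1.0, 20.0, 5.0));
    return 0;
}
```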
TOP500 Major Highlights
InfiniBand is the de facto interconnect solution for High-Performance Computing
• Positioned to expand in Cloud and Web 2.0
InfiniBand is the most used interconnect on the TOP100, TOP200, TOP300 and TOP400 systems
FDR InfiniBand systems grew 3.3X year-over-year from 20 systems in June’12 to 67 in June’13
FDR InfiniBand Petascale capable systems grew 2X over 6 months, from 4 in Nov’12 to 8 in June’13
InfiniBand enables the most efficient cloud – Microsoft Azure
FDR InfiniBand connects the four fastest InfiniBand systems on the list (#6, #9, #11, #14)
InfiniBand is the most used interconnect for Petascale capable systems with 16 systems out of 33
Mellanox InfiniBand is the only Petascale-proven InfiniBand solution
InfiniBand in the TOP500
InfiniBand is the dominant interconnect technology for high-performance computing
• InfiniBand accelerates 205 systems, 41% of the list
• FDR InfiniBand connects the fastest InfiniBand systems – TACC (#6), LRZ (#9), TOTAL (#11) and AFRL (#14)
InfiniBand enables the most efficient cloud infrastructure – Microsoft Azure
• 90.2% cloud efficiency, 33% higher than other cloud solutions
InfiniBand connects the most powerful clusters
• 16 of the Petaflop capable systems (TOP33) - #6, #9, #11, #14, #15, #16, #17, #19, #20, #21, #25, #27, #28, #29, #31, #33
The most used interconnect solution in the TOP100, TOP200, TOP300, TOP400
• Connects 49% (49 systems) of the TOP100 while Ethernet only 6% (6 systems)
• Connects 47.5% (95 systems) of the TOP200 while Ethernet only 21% (42 systems)
• Connects 44.3% (133 systems) of the TOP300 while Ethernet only 32.7% (98 systems)
• Connects 41% (173 systems) of the TOP400 while Ethernet only 37.8% (151 systems)
InfiniBand is the interconnect of choice for accelerator-based systems
• 80% of the accelerator-based systems are connected with InfiniBand
Diverse set of applications
• High-end HPC, commercial HPC, Cloud and enterprise data center
Mellanox in the TOP500
Mellanox FDR InfiniBand is the fastest interconnect solution on the TOP500
• >12GB/s throughput, <0.7usec latency
• Being used in 67 systems on the TOP500 list – 3.3X increase from the June 2012 list
• Connects the fastest InfiniBand-based supercomputers – TACC (#6), LRZ (#9), TOTAL (#11) and AFRL (#14)
Mellanox InfiniBand is the fastest interconnect technology on the list
• Enables the highest system utilization on the TOP500 – nearly 96% system efficiency
• Enables the most highly utilized systems on the TOP500 list
Mellanox InfiniBand is the only Petascale-proven, standard interconnect solution
• Connects 16 out of the 33 Petaflop capable systems on the list
• Connects nearly 3X the number of Cray-based systems in the TOP100, and more than 6.6X in the TOP500
Mellanox’s end-to-end scalable solutions accelerate GPU-based systems
• GPUDirect technology enables faster communications and higher performance; see the sketch following this list
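As context for the GPUDirect bullet above: with GPUDirect, the adapter can source and sink GPU memory directly, and a CUDA-aware MPI library exposes this by accepting device pointers in its communication calls. A minimal sketch, assuming an MPI build with CUDA support (run with two ranks; the buffer size is arbitrary):

```c
/* Sketch: passing a GPU buffer directly to MPI, as enabled by
 * GPUDirect-style transports. Assumes a CUDA-aware MPI library;
 * sizes and ranks are illustrative. Run with at least two ranks. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *d_buf;                          /* device memory */
    cudaMalloc((void **)&d_buf, 1 << 20);   /* 1 MiB = 131072 doubles */

    /* With a CUDA-aware MPI, the device pointer goes straight into
     * MPI_Send/MPI_Recv; the interconnect moves the data without an
     * intermediate staging copy through host memory. */
    if (rank == 0)
        MPI_Send(d_buf, 131072, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, 131072, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```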
TOP500 Interconnect Trends
InfiniBand: the de facto interconnect solution for performance-demanding applications
TOP500 Petascale Capable Machines
Mellanox InfiniBand is the interconnect of choice for Petascale computing
• Accelerating 48% of the sustained Petaflop systems (16 systems out of 33)
TOP500 InfiniBand Accelerated Petascale Capable Machines
Mellanox FDR InfiniBand systems doubled from June’12 to June’13
• Accelerating 50% of the Petaflop capable systems (8 systems out of 16)
Mellanox InfiniBand Paves the Road to Exascale
InfiniBand Enables Microsoft Windows Azure
Microsoft Windows Azure high-performance cloud computing
InfiniBand enables 90.2% cloud efficiency!
• 33% higher efficiency versus Amazon 10GbE Cloud
• 33% more compute per square foot
• Results in higher return on investment for both the cloud provider and the user
TOP500 #240
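A quick sanity check on the 33% claim above: if 90.2% efficiency is 1.33X the comparison cloud, the implied baseline is about 68%. The baseline here is inferred from the slide's own numbers, not taken from the TOP500 list:

```c
/* Implied baseline: 90.2% efficiency claimed to be 33% higher
 * than the comparison cloud => baseline ~ 90.2 / 1.33 ~ 67.8%. */
#include <stdio.h>

int main(void)
{
    printf("implied baseline efficiency: %.1f%%\n", 90.2 / 1.33);
    return 0;
}
```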
TOP100: Interconnect Trends
InfiniBand is the leading interconnect solution in the TOP100
The natural choice for world-leading supercomputers
• Performance, Efficiency, Scalability
InfiniBand versus Ethernet – TOP100, 200, 300, 400, 500
InfiniBand connects the majority of the TOP100, 200, 300, 400 supercomputers
Due to superior performance, scalability, efficiency and return-on-investment
TOP500 Interconnect Placement
InfiniBand is the high-performance interconnect of choice
• Connects the most powerful clusters and provides the highest system utilization
InfiniBand is the best price/performance connectivity for HPC systems
InfiniBand’s Unsurpassed System Efficiency
TOP500 systems listed according to their efficiency
InfiniBand is the key element responsible for the highest system efficiency
Mellanox delivers efficiencies of up to 96% with InfiniBand
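For reference, TOP500 efficiency is measured Linpack performance (Rmax) divided by theoretical peak (Rpeak). A minimal sketch with placeholder values chosen to land at the 96% figure quoted above:

```c
/* TOP500 efficiency = Rmax / Rpeak. Values below are placeholders,
 * not figures for any particular system on the list. */
#include <stdio.h>

int main(void)
{
    double rmax  = 960.0;   /* measured Linpack Tflop/s (placeholder) */
    double rpeak = 1000.0;  /* theoretical peak Tflop/s (placeholder) */
    printf("efficiency: %.1f%%\n", 100.0 * rmax / rpeak);  /* 96.0% */
    return 0;
}
```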
TOP500 Interconnect Comparison
InfiniBand systems deliver 2.6X the aggregate performance of Ethernet systems
InfiniBand is the only scalable HPC interconnect solution
TOP500 Performance Trend
InfiniBand-connected systems’ performance demonstrates the highest growth rate
InfiniBand is responsible for 2X the performance of Ethernet systems on the TOP500 list
InfiniBand Performance Trends
InfiniBand-connected CPUs grew 31% from June‘12 to June‘13
InfiniBand-based system performance grew 60% from June‘12 to June‘13
Mellanox InfiniBand is the most efficient and scalable interconnect solution
Driving factors: performance, efficiency, scalability, many-core systems
Providing End-to-End Interconnect Solutions
Mellanox InfiniBand connects the most efficient system on the list
• Nearly 96% efficiency/utilization, only 4% less than the theoretical limit
Comprehensive end-to-end InfiniBand and Ethernet solutions portfolio: ICs, adapter cards, switches/gateways, long-haul systems, and cables/interconnects
Comprehensive end-to-end software accelerators and management:
• Messaging: MXM (Mellanox Messaging Acceleration)
• Collectives: FCA (Fabric Collectives Acceleration)
• Data: RDMA (Remote Direct Memory Access), UDA (Unstructured Data Accelerator)
• Management: UFM (Unified Fabric Management)
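RDMA underpins most of the accelerators above: the adapter reads and writes registered memory regions without involving the remote CPU. A minimal setup sketch using the standard libibverbs API (device selection, error handling, queue-pair wiring, and the actual RDMA operation are omitted; the buffer size is arbitrary):

```c
/* Sketch of RDMA setup with libibverbs: open a device, register a
 * buffer, and create the completion queue a queue pair would use. */
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    struct ibv_context *ctx  = ibv_open_device(devs[0]);
    struct ibv_pd      *pd   = ibv_alloc_pd(ctx);

    /* Register memory so the HCA can DMA into/out of it; the remote
     * side uses mr->rkey to address this region with RDMA writes. */
    size_t len = 1 << 20;
    void  *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue for work completions; a queue pair built on
     * it would carry the RDMA operations. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```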
FDR InfiniBand Performance Advantages
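The FDR headline figures quoted earlier (>12GB/s throughput, sub-microsecond latency) follow from the link arithmetic: a 4x FDR link carries 4 lanes at 14.0625 Gb/s signaling with 64b/66b encoding, roughly 54.5 Gb/s of data per direction. A quick check (the encoding math is standard InfiniBand, not a figure from this deck):

```c
/* FDR 4x link arithmetic: 4 lanes at 14.0625 Gb/s signaling with
 * 64b/66b encoding. Dividing by 8 converts Gb/s to GB/s. */
#include <stdio.h>

int main(void)
{
    double lane_gbps = 14.0625 * 64.0 / 66.0;  /* data rate per lane */
    double link_gbps = 4.0 * lane_gbps;        /* ~54.5 Gb/s per direction */
    printf("per direction: %.1f GB/s\n", link_gbps / 8.0);        /* ~6.8 */
    printf("bidirectional: %.1f GB/s\n", 2.0 * link_gbps / 8.0);  /* ~13.6 */
    return 0;
}
```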
Mellanox Interconnect Development Timeline (2008-2013)
• QDR InfiniBand end-to-end; world’s first Petaflop systems
• GPUDirect technology released; CORE-Direct technology; InfiniBand-Ethernet bridging
• MPI/SHMEM collectives offloads (FCA), Scalable HPC (MXM), OpenSHMEM, PGAS/UPC
• FDR InfiniBand end-to-end; GPUDirect RDMA; long-haul solutions
• Connect-IB: 100Gb/s HCA with Dynamically Connected Transport
• Technology and solutions leadership, with EDR InfiniBand expected in 2014-2015
“Stampede” system
6,000+ nodes (Dell), 462,462 cores, Intel Xeon Phi co-processors
5.2 sustained Petaflop performance
Mellanox end-to-end FDR InfiniBand
Texas Advanced Computing Center/Univ. of Texas - #6
Petaflop Mellanox Connected
IBM iDataPlex and Intel Sandy Bridge
147,456 cores
Mellanox end-to-end FDR InfiniBand solutions
2.9 sustained Petaflop performance
The fastest InfiniBand-connected supercomputer in Europe
Leibniz Rechenzentrum SuperMUC - #9
Petaflop Mellanox Connected
“Pangea” system
SGI Altix X system, 110,400 cores
Mellanox FDR InfiniBand
2.1 sustained Petaflop performance
Total Exploration Production - #11
Petaflop Mellanox Connected
“Spirit” system
SGI Altix X system, 74,584 cores
Mellanox FDR InfiniBand
1.4 sustained Petaflop performance
Air Force Research Laboratory - #14
Petaflop Mellanox Connected
Bull Bullx B510, Intel Sandy Bridge
77,184 cores
Mellanox end-to-end FDR InfiniBand solutions
1.36 sustained Petaflop performance
CEA/TGCC-GENCI - #15
Petaflop Mellanox Connected
Dawning TC3600 Blade Supercomputer
5,200 nodes, 120,640 cores, NVIDIA GPUs
Mellanox end-to-end 40Gb/s InfiniBand solutions
• ConnectX-2 and IS5000 switches
1.27 sustained Petaflop performance
The first Petaflop system in China
National Supercomputing Centre in Shenzhen (NSCS) - #16
Petaflop Mellanox Connected
“Yellowstone” system
72,288 processor cores, 4,518 nodes (IBM)
Mellanox end-to-end FDR InfiniBand, full fat tree, single plane (see the fat-tree sketch below)
NCAR (National Center for Atmospheric Research) - #17
Petaflop Mellanox Connected
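On the “full fat tree” note above: a nonblocking two-level fat tree built from n-port switches connects at most n²/2 hosts, since each leaf devotes half its ports to hosts and half to uplinks. A quick check with a 36-port radix, which is an assumption for illustration rather than a detail of the Yellowstone fabric:

```c
/* Nonblocking two-level fat tree from n-port switches: each leaf
 * uses n/2 ports for hosts and n/2 uplinks, so host count = n*n/2.
 * The 36-port radix is illustrative, not a Yellowstone detail. */
#include <stdio.h>

int main(void)
{
    int n = 36;                  /* switch radix (assumed) */
    int hosts_per_leaf = n / 2;  /* 18 host ports per leaf */
    int leaves = n;              /* each spine has n ports, one per leaf */
    printf("max hosts: %d\n", hosts_per_leaf * leaves);  /* 648 */
    return 0;
}
```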
SGI Altix ICE X/8200EX/8400EX
Around 12K nodes, 125,980 cores
1.24 Petaflops sustained performance
Mellanox end-to-end FDR and QDR InfiniBand
Supports a variety of scientific and engineering projects
• Coupled atmosphere-ocean models
• Future space vehicle design
• Large-scale dark matter halos and galaxy evolution
NASA Ames Research Center Pleiades - #19
Petaflop Mellanox Connected
Bull Bullx B510, Intel Sandy Bridge
70,560 cores
1.24 sustained Petaflop performance
Mellanox end-to-end InfiniBand solutions
International Fusion Energy Research Centre (IFERC), EU(F4E) - Japan Broader Approach collaboration - #20
Petaflop Mellanox Connected
Tokyo Institute of Technology - #21
TSUBAME 2.0, first Petaflop system in Japan
1.2 PF performance
1,400 HP ProLiant SL390s G7 servers
4,200 NVIDIA Tesla M2050 Fermi-based GPUs
Mellanox 40Gb/s InfiniBand
Petaflop Mellanox Connected
Commissariat a l'Energie Atomique (CEA) - #25
Tera 100, first Petaflop system in Europe - 1.05 PF performance
4,300 Bull S Series servers
140,000 Intel® Xeon® 7500 processing cores
300TB of central memory, 20PB of storage
Mellanox 40Gb/s InfiniBand
Petaflop Mellanox Connected
Thank You