Achieving Higher Productivity on Abaqus Simulations for Small and Mid-size Enterprises
with HPC and Clustering Technologies
Pak Lui
Application Performance Manager
SIMULIA West Regional User Meeting
October 30, 2014
Hayes Mansion - San Jose, California
The HPC Advisory Council Update
• World-wide HPC non-profit organization (390+ members)
• Bridges the gap between HPC usage and its potential
• Provides best practices and a support/development center
• Explores future technologies and developments
• Leading-edge solutions and technology demonstrations
HPC Advisory Council Members
Centers of Excellence
Special Interest Subgroups
• HPC|Scale Subgroup
– Explore usage of commodity HPC as a replacement for multi-million-dollar mainframes and proprietary supercomputers
• HPC|Cloud Subgroup
– Explore usage of HPC components as part of
the creation of external/public/internal/private
cloud computing environments.
• HPC|Works Subgroup
– Provide best practices for building balanced
and scalable HPC systems, performance
tuning and application guidelines.
• HPC|Storage Subgroup
– Demonstrate how to build high-performance storage solutions and their effect on application performance and productivity
• HPC|GPU Subgroup
– Explore usage models of GPU components
as part of next generation compute
environments and potential optimizations for
GPU based computing
• HPC|FSI Subgroup
– Explore the usage of high-performance computing solutions for low-latency trading, productive simulations and overall more efficient financial services
• HPC|Music
– To enable HPC in music production and to
develop HPC cluster solutions that further
enable the future of music production
HPC Advisory Council HPC Center
• Dell™ PowerEdge™ R720xd/R720 33-node cluster
• HP Cluster Platform 3000SL 16-node cluster
• HP ProLiant SL230s Gen8 4-node cluster
• Dell™ PowerEdge™ R815 11-node cluster
• Dell™ PowerEdge™ C6145 6-node cluster
• Dell™ PowerEdge™ M610 38-node cluster
• Dell™ PowerEdge™ C6100 4-node cluster
• Dell™ PowerVault MD3420 / MD3460 InfiniBand-based Lustre Storage
130 Applications Best Practices Published
• Abaqus, AcuSolve, Amber, AMG, AMR, ABySS, ANSYS CFX, ANSYS FLUENT, ANSYS Mechanics, BQCD, CCSM, CESM, COSMO, CP2K, CPMD, Dacapo, Desmond, DL-POLY, Eclipse, FLOW-3D, GADGET-2, GROMACS, Himeno, HOOMD-blue, HYCOM, ICON, Lattice QCD, LAMMPS
• LS-DYNA, miniFE, MILC, MSC Nastran, MrBayes, MM5, MPQC, NAMD, Nekbone, NEMO, NWChem, Octopus, OpenAtom, OpenFOAM, OpenMX, PARATEC, PFA, PFLOTRAN, Quantum ESPRESSO, RADIOSS, SPECFEM3D, WRF
University Award Program
• University award program
– Universities are encouraged to submit proposals for advanced research
– Once or twice a year, the HPC Advisory Council will select a few proposals
• Selected proposals will be provided with:
– Exclusive computation time on the HPC Advisory Council’s Compute Center
– An invitation to present at one of the HPC Advisory Council’s worldwide workshops
– Publication of the research results on the HPC Advisory Council website
• 2010 award winner: Dr. Xiangqian Hu, Duke University
– Topic: “Massively Parallel Quantum Mechanical Simulations for Liquid Water”
• 2011 award winner: Dr. Marco Aldinucci, University of Torino
– Topic: “Effective Streaming on Multi-core by Way of the FastFlow Framework”
• 2012 award winner: Jacob Nelson, University of Washington
– Topic: “Runtime Support for Sparse Graph Applications”
• 2013 award winner: Antonis Karalis
– Topic: “Music Production using HPC”
• To submit a proposal, please check the HPC Advisory Council web site
ISC’14 – Student Cluster Competition
• University-based teams compete and demonstrate the incredible capabilities of
state-of-the-art HPC systems and applications on the ISC’14 show floor
• The Student Cluster Challenge is designed to introduce the next
generation of students to the high performance computing world and
community
ISC’14 – Student Cluster Competition Teams
HPCAC – ISC’14 Student Cluster Competition Teams
2014 HPC Advisory Council Conferences
• HPC Advisory Council (HPCAC)
– 370+ members, http://www.hpcadvisorycouncil.com/
– Application best practices, case studies
– Benchmarking center with remote access for users
– World-wide workshops
– Value add for your customers to stay up to date and in tune with the HPC market
• 2014 Conferences
– USA (Stanford University) – February 3, 2014
– Switzerland (CSCS) – March 31, 2014
– Brazil (University of São Paulo) – May 26, 2014
– Germany (ISC’14) – June 22, 2014
– Spain (BSC) – September 24, 2014
– Singapore (A*STAR) – October 7, 2014
– China (HPC China) – November 5, 2014
– South Africa – December 3, 2014
• For more information
– www.hpcadvisorycouncil.com
HPCAC Conferences 2014
Note
• The following research was performed under the HPC
Advisory Council activities
– Special thanks to HP and Mellanox
• For more information on the supporting vendors’ solutions, please refer to:
– www.mellanox.com, http://www.hp.com/go/hpc
• For more information on the application:
– http://www.simulia.com
Abaqus by SIMULIA
• The Abaqus Unified FEA product suite offers powerful and complete solutions for both
routine and sophisticated engineering problems, covering a vast spectrum of industrial
applications
• The Abaqus analysis products listed below focus on:
– Nonlinear finite element analysis (FEA)
– Advanced linear and dynamics application problems
• Abaqus/Standard
– General-purpose FEA that includes a broad range of analysis capabilities
• Abaqus/Explicit
– Nonlinear, transient, dynamic analysis of solids and structures using
explicit time integration
Objectives
• The presented research was done to provide best practices
– Abaqus performance benchmarking
– Interconnect performance comparisons
– CPU performance comparisons
– Understanding Abaqus communication patterns
• The presented results will demonstrate
– The ability of the compute environment to deliver nearly linear application scalability
Test Cluster Configuration
• HP ProLiant SL230s Gen8 4-node “Athena” cluster
– Processors: Dual-socket 10-core Intel Xeon E5-2680v2 @ 2.8 GHz CPUs
– Memory: 32GB per node, 1600MHz DDR3 dual-ranked DIMMs
– OS: RHEL 6 Update 2, OFED 2.2-1.0.1 InfiniBand SW stack
• Mellanox Connect-IB FDR InfiniBand adapters
• Mellanox ConnectX-3 VPI Ethernet adapters
• Mellanox SwitchX SX6036 56Gb/s FDR InfiniBand and Ethernet VPI switch
• MPI: Platform MPI 8.3 (vendor provided)
• Application: Abaqus 6.13-2 (unless otherwise stated)
• Benchmark workloads (a minimal job-launch sketch follows below):
– Abaqus/Explicit benchmark: E6 Concentric Spheres (244,124 elements)
– Abaqus/Standard benchmark: S2A Flywheel with centrifugal load (474,744 DoF)
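As a rough illustration of how such a DMP benchmark run can be driven, here is a minimal launch sketch. It assumes the `abaqus` driver is on the PATH with its documented `cpus=`, `mp_mode=mpi`, and `interactive` options; the job name, node count, and core count below are hypothetical, and the actual scripts used for these measurements were not published (host allocation is normally configured separately, e.g. via mp_host_list in abaqus_v6.env).

```python
#!/usr/bin/env python
# Minimal sketch: launching and timing an Abaqus DMP benchmark run.
# Assumptions: the `abaqus` driver is on PATH; `cpus=`, `mp_mode=mpi`, and
# `interactive` are the standard command-line options; the job name, node
# count, and cores per node below are hypothetical.
import subprocess
import time

job = "e6"              # e.g. the E6 Concentric Spheres input deck
nodes = 4               # hypothetical 4-node cluster
cores_per_node = 20     # dual-socket 10-core E5-2680v2

cmd = [
    "abaqus",
    "job=%s" % job,
    "cpus=%d" % (nodes * cores_per_node),
    "mp_mode=mpi",      # distributed-memory parallel execution over MPI
    "interactive",      # block until the analysis finishes
]

start = time.time()
subprocess.check_call(cmd)
print("Total runtime: %.1f s" % (time.time() - start))
```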
About HP ProLiant SL230s Gen8
• Processor: Two Intel® Xeon® E5-2600 v2 Series (4/6/8/10/12 cores)
• Chipset: Intel® Xeon E5-2600 v2 product family
• Memory: 16 DIMM slots, DDR3 up to 1600MHz, ECC (up to 256 GB)
• Max Memory: 256 GB
• Internal Storage: Two LFF non-hot-plug SAS/SATA bays, or four SFF non-hot-plug SAS/SATA/SSD bays; two hot-plug SFF drives (option)
• Max Internal Storage: 8TB
• Networking: Dual-port 1GbE NIC / single 10GbE NIC
• I/O Slots: One PCIe Gen3 x16 LP slot; 1Gb and 10Gb Ethernet, IB, and FlexFabric options
• Ports: Front: (1) management, (2) 1GbE, (1) serial, (1) S.U.V port, (2) PCIe; internal MicroSD card & Active Health
• Power Supplies: 750W, 1200W (92% or 94%), high-power chassis
• Integrated Management: iLO4; hardware-based power capping via SL Advanced Power Manager
• Additional Features: Shared power & cooling, up to 8 nodes per 4U chassis, single GPU support, Fusion I/O support
• Form Factor: 16P / 8 GPUs / 4U chassis
Abaqus/Explicit Performance – CPU Generation
(All measurements based on v6.12-2; performance rating based on total runtime)
• Intel E5-2680v2 (Ivy Bridge) cluster outperforms prior generations
– Performs 73% higher than the Westmere (X5670) cluster at 4 nodes
– Performs 11% higher than the Sandy Bridge (E5-2680) cluster at 4 nodes
• Hardware components used:
– Athena-IVB: 2-socket 10-core Intel E5-2680v2 @ 2.8GHz, 1600MHz DIMMs, FDR IB
– Athena-SNB: 2-socket 8-core Intel E5-2680 @ 2.7GHz, 1600MHz DIMMs, FDR IB
– Plutus-WSM: 2-socket 6-core Intel X5670 @ 2.93GHz, 1333MHz DIMMs, QDR IB
Abaqus/Explicit Performance – CPU Generation
(All measurements based on v6.12-2; performance rating based on total runtime)
• The latest cluster with Haswell CPUs outperforms prior generations
– Performs 22% higher than the Ivy Bridge (E5-2680v2) cluster at 4 nodes
• Hardware components used:
– “Haswell”: 2-socket 14-core Intel E5-2697v3 @ 2.6GHz, 2133MHz DIMMs, FDR IB
– “Ivy Bridge”: 2-socket 10-core Intel E5-2680v2 @ 2.8GHz, 1600MHz DIMMs, FDR IB
– “Sandy Bridge”: 2-socket 8-core Intel E5-2680 @ 2.7GHz, 1600MHz DIMMs, FDR IB
– “Westmere”: 2-socket 6-core Intel X5670 @ 2.93GHz, 1333MHz DIMMs, QDR IB
Abaqus/Explicit Performance vs Power
• Modest dependency on CPU frequency for the Explicit case
– Approximately 23% performance gain from 2000MHz to 2800MHz
– Tradeoff: about 51% more power is needed going from 2.0GHz to 2.8GHz
• Enabling Turbo Mode makes little performance difference for Explicit
– Turbo Mode provides ~4% performance gain (for CPU cores running at 2800MHz)
– It also results in 18% higher power usage
(Performance rating, higher is better; FDR InfiniBand)
Abaqus/Standard Performance vs Power
• Similar behavior regarding CPU frequency and Turbo Mode is seen with Standard
– Up to 25% improvement at 51% higher power utilization
– Small performance gain (5%) when using Turbo Mode, at 18% higher power usage
• The “msr-tools” kernel utilities were used to adjust Turbo Mode dynamically (see the sketch below)
– Allows Turbo Mode to be turned on/off dynamically at the OS level
– Turbo Mode boosts the core frequency and consequently results in higher power consumption
(Performance rating, higher is better; FDR InfiniBand)
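A minimal sketch of that approach is shown below. It assumes root privileges, the msr kernel module loaded, and the rdmsr/wrmsr utilities from msr-tools; the register used is Intel's documented IA32_MISC_ENABLE MSR (0x1A0), whose bit 38 is the "Turbo Mode Disable" bit. This illustrates the technique mentioned on the slide, not the exact script used in these tests.

```python
#!/usr/bin/env python
# Sketch of toggling Intel Turbo Mode from the OS with msr-tools (rdmsr/wrmsr),
# the approach referenced on the slide. Assumptions: root access, the `msr`
# kernel module is loaded, and IA32_MISC_ENABLE (MSR 0x1A0) bit 38 is the
# "Turbo Mode Disable" bit per Intel's documentation. Not the exact test script.
import multiprocessing
import subprocess

MSR_IA32_MISC_ENABLE = "0x1a0"
TURBO_DISABLE_BIT = 1 << 38

def set_turbo(cpu, enable):
    """Turn Turbo Mode on or off for one logical CPU via rdmsr/wrmsr."""
    raw = subprocess.check_output(
        ["rdmsr", "-p", str(cpu), MSR_IA32_MISC_ENABLE]).decode().strip()
    val = int(raw, 16)
    if enable:
        val &= ~TURBO_DISABLE_BIT   # clear the disable bit -> Turbo on
    else:
        val |= TURBO_DISABLE_BIT    # set the disable bit  -> Turbo off
    subprocess.check_call(
        ["wrmsr", "-p", str(cpu), MSR_IA32_MISC_ENABLE, hex(val)])

if __name__ == "__main__":
    # Example: disable Turbo on every logical CPU before a power measurement.
    for cpu in range(multiprocessing.cpu_count()):
        set_turbo(cpu, enable=False)
```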
Abaqus/Standard Performance – Interconnects
• InfiniBand enables higher cluster productivity than the other interconnects tested
– Up to 183% higher performance versus 1GbE
– Up to 28% higher performance versus 10GbE
– Up to 18% higher performance versus 40GbE
(Performance rating, higher is better)
Abaqus/Explicit Performance – Interconnect
• FDR InfiniBand is the most efficient network interconnect for Abaqus/Explicit
– FDR IB outperforms 1GbE by 142%, 10GbE by 34%, and 40GbE by 30% at 4 nodes
– InfiniBand reduces communication time, leaving more time for computation
– With RDMA, InfiniBand offloads communication from the CPU, so the CPU can focus on computation
(Performance rating, higher is better)
Abaqus/Explicit Profiling – MPI Time Ratio
• FDR InfiniBand reduces the MPI communication time
– MPI time is about 37% of total runtime with FDR InfiniBand at 4 nodes
– With the Ethernet solutions, MPI time accounts for 53% to 72% of total runtime at 4 nodes
Abaqus/Explicit Profiling – Time Spent in MPI
• Abaqus/Explicit: More time spent on MPI collective operations:
– InfiniBand FDR: MPI_Gather(54%), MPI_Allreduce(9%), MPI_Scatterv(9%)
– 1GbE: MPI_Gather(43%), MPI_Irecv(13%), MPI_Scatterv(14%)
Abaqus Profiling – MPI Calls
• Abaqus/Standard uses a wide range of MPI APIs
– MPI_Test dominates the MPI function calls (over 97%)
• Abaqus/Explicit shows high usage of calls that test for non-blocking messages (illustrated below)
– MPI_Iprobe (95%), MPI_Test (3%)
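As an illustration of why these calls dominate, the sketch below shows the generic polling pattern for non-blocking messages, written with mpi4py. It is not Abaqus source code; it only demonstrates how repeated MPI_Test / MPI_Iprobe calls arise when a solver overlaps computation with communication.

```python
# Illustration (mpi4py) of the non-blocking polling pattern behind the high
# MPI_Test / MPI_Iprobe call counts above. Not Abaqus code; it only shows why
# such calls dominate: ranks repeatedly test or probe for pending messages
# while overlapping communication with local computation.
# Run with e.g.: mpirun -np 2 python polling_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    buf = bytearray(1 << 20)                       # 1 MB receive buffer
    req = comm.Irecv([buf, MPI.BYTE], source=1, tag=0)
    while not req.Test():                          # maps to MPI_Test
        pass                                       # ...local computation here...
    print("rank 0: message received")
elif rank == 1:
    # A rank that does not know the sender/size in advance would instead poll
    # with comm.Iprobe(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)   (MPI_Iprobe)
    comm.Send([bytearray(1 << 20), MPI.BYTE], dest=0, tag=0)
```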
Abaqus Profiling – Time Spent in MPI
• Abaqus/Standard spends a large share of its MPI time testing for non-blocking messages
• MPI_Gather dominates the MPI communication time in Abaqus/Explicit
Abaqus Profiling – Message Sizes
• Abaqus/Standard uses small and medium MPI message sizes
– Most messages fall in the 0B–64B and 65B–256B ranges
– Some concentration of medium-sized messages in the 64KB–256KB range
• Abaqus/Explicit shows a wide distribution of small message sizes
– Small messages peak in the range from 65B to 256B
Abaqus Performance – Software Versions
• Abaqus/Explicit performs faster in the newer version
– 6.13-2 outperforms 6.12-2 by 8% at 4 nodes / 80 cores
• The difference for Abaqus/Standard with the newer version is less clear
– Only a slight gain can be seen on the dataset tested
(Performance rating, higher is better)
Abaqus Performance – GPU
• GPU Support for Abaqus/Standard since 2011
• Current supported version: Abaqus 6.14
– Direct Sparse solvers – symmetric & unsymmetric
– Multi-GPU/node; multi-node DMP clusters
– Flexibility to run jobs on specific GPUs
• Customer adoption increasing across industry segments
• Abaqus GPU licensing based on tokens
– Same token scheme for CPU core & GPU
• Performance gains vary: 2-3x on average is common
Abaqus Summary
• HPC clustering technology allows Abaqus simulations to achieve higher productivity
– Minimize time-to-solution by deploying high-performance systems to run simulations in parallel
• CPU (generations, frequencies, Turbo Mode)
– The Intel E5-2680v2 (Ivy Bridge) cluster outperforms prior generations
• Ivy Bridge provides up to 11% higher performance than Sandy Bridge and 73% higher than Westmere
– Modest gain from higher core frequencies: 23% gain from 2.0GHz to 2.8GHz, at a cost of about 51% more power
– Limited performance gain (5%) when using Turbo Mode, at 18% higher power usage
– The Haswell cluster outperforms the Ivy Bridge (E5-2680v2) cluster by 22% at 4 nodes
• InfiniBand is the most efficient cluster interconnect for Abaqus
– Abaqus/Explicit: FDR IB outperforms 1GbE by 142%, 10GbE by 34%, and 40GbE by 30% at 4 nodes
– Abaqus/Standard: FDR IB delivers 183% higher performance than 1GbE, 28% higher than 10GbE, and 18% higher than 40GbE
– InfiniBand reduces communication time, leaving more time for computation
– MPI time is 37% of total runtime with InfiniBand, compared to 53-72% with the Ethernet solutions
• Abaqus/Explicit performs faster in the newer version
– Version 6.13-2 outperforms version 6.12-2 by 8% at 4 nodes / 80 cores
Thank You HPC Advisory Council
All trademarks are property of their respective owners. All information is provided “As-Is” without any kind of warranty. The HPC Advisory Council makes no representation as to the accuracy and completeness of the information contained herein, and undertakes no duty and assumes no obligation to update or correct any information presented herein.