
Designing High-Performance MPI Libraries for Multi-/Many-core Era

Talk at the IXPUG Fall Conference (September '18)

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. K. Panda

The Ohio State University
E-mail: {Hashmi.29, chakraborty.52, bayatpour.1, subramoni.1, panda.2}@osu.edu


Increasing Usage of HPC, Big Data and Deep Learning

• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)

Convergence of HPC, Big Data, and Deep Learning!

Increasing need to run these applications on the Cloud!!


Parallel Programming Models Overview

• Shared Memory Model (e.g., SHMEM, DSM): processes P1, P2, P3 operate directly on one shared memory.
• Distributed Memory Model (e.g., MPI, the Message Passing Interface): each process has its own memory and data moves via explicit messages.
• Partitioned Global Address Space (PGAS) Model (e.g., Global Arrays, UPC, Chapel, X10, CAF, …): each process has its own memory, presented as a logical shared memory.

• Programming models provide abstract machine models

• Models can be mapped onto different types of systems, e.g., Distributed Shared Memory (DSM), MPI within a node, etc.

• PGAS models and hybrid MPI+PGAS models are gradually gaining importance
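To make the message-passing model concrete, here is a minimal point-to-point sketch (an illustration, not part of the talk); rank 0 owns a value in its private memory and explicitly sends it to rank 1:

/* Minimal illustration of the distributed-memory (message passing) model:
 * each rank owns its memory and data moves only via explicit messages.
 * Compile with an MPI wrapper (e.g., mpicc) and run with at least 2 ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                /* data lives in rank 0's private memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}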


Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges

Application Kernels/Applications

Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.

Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance

Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, and Accelerators (GPU and FPGA)

Middleware co-design offers opportunities and challenges across these layers for performance, scalability, and resilience.


Broad Challenges in Designing Runtimes for (MPI+X) at Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
  – Low memory footprint
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for accelerators (GPGPUs and FPGAs)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-awareness


Additional Challenges for Designing Exascale Software Libraries

• Extreme low memory footprint
  – Memory per core continues to decrease
• D-L-A Framework
  – Discover
    • Overall network topology (fat-tree, 3D, …), network topology of the processes for a given job
    • Node architecture, health of network and node
  – Learn
    • Impact on performance and scalability
    • Potential for failure
  – Adapt
    • Internal protocols and algorithms
    • Process mapping
    • Fault-tolerance solutions
  – Low-overhead techniques while delivering performance, scalability and fault-tolerance


Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002

– MVAPICH2-X (MPI + PGAS), Available since 2011

– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014

– Support for Virtualization (MVAPICH2-Virt), Available since 2015

– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015

– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015

– Used by more than 2,950 organizations in 86 countries

– More than 494,000 (> 0.49 million) downloads from the OSU site directly

– Empowering many TOP500 clusters (Jul ‘18 ranking)

• 2nd ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China

• 12th, 556,104 cores (Oakforest-PACS) in Japan

• 15th, 367,024 cores (Stampede2) at TACC

• 24th, 241,108-core (Pleiades) at NASA and many others

– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)

– http://mvapich.cse.ohio-state.edu

• Empowering Top500 systems for over a decade

• Partner in the upcoming Frontera system


Architecture of MVAPICH2 Software Family

• High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)

• High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, and introspection & analysis

• Support for modern networking technologies (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)

• Transport protocols: RC, XRC, UD, DC; transport mechanisms: shared memory, CMA, XPMEM, IVSHMEM; modern features: UMR, ODP, SR-IOV, multi-rail, MCDRAM*, NVLink*, CAPI* (* upcoming)


MVAPICH2 Software Family

Requirements and the corresponding library:
• MPI with IB, iWARP, Omni-Path, and RoCE: MVAPICH2
• Advanced MPI features/support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE: MVAPICH2-X
• MPI with IB, RoCE & GPU, and support for Deep Learning: MVAPICH2-GDR
• HPC Cloud with MPI & IB: MVAPICH2-Virt
• Energy-aware MPI with IB, iWARP and RoCE: MVAPICH2-EA
• MPI energy monitoring tool: OEMT
• InfiniBand network analysis and monitoring: OSU INAM
• Microbenchmarks for measuring MPI and PGAS performance: OMB


Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication
  – Scalable start-up
  – Optimized collectives using SHArP and multi-leaders
  – Optimized CMA-based collectives
  – Optimized XPMEM-based collectives
  – SALaR: Scalable and Adaptive Designs for Large Message Reduction Collectives
  – Asynchronous progress
  – MPI-T support
• Integrated support for GPGPUs and Deep Learning
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices
• High-performance MPI library for Cloud


One-way Latency: MPI over IB with MVAPICH2

[Figures: Small Message Latency and Large Message Latency, latency (us) vs. message size (bytes); small-message latencies are roughly 0.98-1.19 us (callouts: 0.98, 1.04, 1.11, 1.15, 1.19 us) across the five fabrics.]

Test platforms:
• TrueScale-QDR: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch
• ConnectX-3-FDR: 2.8 GHz deca-core (IvyBridge) Intel, PCI Gen3, IB switch
• ConnectIB-Dual FDR: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch
• ConnectX-5-EDR: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, IB switch
• Omni-Path: 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, Omni-Path switch


Bandwidth: MPI over IB with MVAPICH2

[Figures: Unidirectional and Bidirectional Bandwidth, bandwidth (MBytes/sec) vs. message size (bytes). Unidirectional bandwidth callouts: 3,373; 6,356; 12,358; 12,366; 12,590 MB/s. Bidirectional bandwidth callouts: 6,228; 12,161; 21,983; 22,564; 24,136 MB/s.]

Test platforms: same TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path configurations as above.


Startup Performance on KNL + Omni-Path

[Figures: MPI_Init and Hello World time (seconds) vs. number of processes on TACC Stampede2 (64 to 230K processes) and Oakforest-PACS (64 to 64K processes); callouts of 5.8 s, 21 s, 22 s, and 57 s.]

• MPI_Init takes 22 seconds on 231,936 processes on 3,624 KNL nodes (Stampede2 at full scale)

• At 64K processes, MPI_Init and Hello World take 5.8 s and 21 s, respectively (Oakforest-PACS)

• All numbers reported with 64 processes per node, MVAPICH2-2.3a
• Designs integrated with mpirun_rsh and also available for srun (the SLURM launcher)
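The measurement above is essentially the time to complete MPI_Init plus a first "Hello World". A minimal way to reproduce such a measurement (a sketch, not the actual benchmark used on Stampede2 or Oakforest-PACS):

/* Sketch of timing MPI_Init and "Hello World" completion at scale.
 * Wall-clock time is taken with gettimeofday() since MPI_Wtime() may
 * not be usable before MPI_Init in all implementations. */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double t0 = now_sec();
    MPI_Init(&argc, &argv);
    double t_init = now_sec() - t0;

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello from rank %d\n", rank);
    MPI_Barrier(MPI_COMM_WORLD);           /* everyone has said hello */
    double t_hello = now_sec() - t0;

    if (rank == 0)
        printf("MPI_Init: %.2f s, Hello World: %.2f s\n", t_init, t_hello);

    MPI_Finalize();
    return 0;
}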


Benefits of SHARP at Application Level

[Figure: average DDOT Allreduce time of HPCG (seconds) for (number of nodes, PPN) = (4,28), (8,28), (16,28) with MVAPICH2; SHARP yields a 12% benefit.]

SHARP support available since MVAPICH2 2.3a

Runtime and configure parameters (both disabled by default):
• MV2_ENABLE_SHARP=1 : enables SHARP-based collectives
• --enable-sharp : configure flag to enable SHARP

• Refer to the "Running Collectives with Hardware-based SHARP support" section of the MVAPICH2 user guide for more information

• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3b-userguide.html#x1-990006.26
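As a usage illustration (a sketch, not MVAPICH2 code): a small, latency-sensitive MPI_Allreduce such as the DDOT step in HPCG is exactly the kind of collective that can be offloaded when the library is built with --enable-sharp and the job is run with MV2_ENABLE_SHARP=1; the application code itself does not change.

/* Sketch of a latency-sensitive small Allreduce (like HPCG's DDOT step).
 * With MVAPICH2 built using --enable-sharp and run with MV2_ENABLE_SHARP=1,
 * such reductions can be offloaded to the switch via SHARP. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local_dot = 1.0, global_dot = 0.0;   /* per-rank partial dot product */
    MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("global dot = %f\n", global_dot);

    MPI_Finalize();
    return 0;
}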


MPI_Allreduce on KNL + Omni-Path (10,240 Processes)

[Figures: OSU Micro Benchmark MPI_Allreduce latency (us) vs. message size, 64 PPN, comparing MVAPICH2, MVAPICH2-OPT, and IMPI for small messages (4-4096 bytes) and large messages (8K-256K bytes); MVAPICH2-OPT shows up to a 2.4X advantage.]

• For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X

M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.

Available since MVAPICH2-X 2.3b
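For reference, the kind of measurement behind these plots can be reproduced with a simple timing loop in the spirit of osu_allreduce (a sketch, not the actual OSU Micro-Benchmark code):

/* Sketch of an osu_allreduce-style latency loop: average time per
 * MPI_Allreduce for increasing message sizes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 100;
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (size_t bytes = 4; bytes <= 256 * 1024; bytes *= 2) {
        size_t count = bytes / sizeof(float);
        float *sbuf = malloc(bytes), *rbuf = malloc(bytes);
        for (size_t i = 0; i < count; i++) sbuf[i] = 1.0f;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++)
            MPI_Allreduce(sbuf, rbuf, (int)count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
        double avg_us = (MPI_Wtime() - t0) * 1e6 / iters;

        if (rank == 0)
            printf("%8zu bytes: %10.2f us\n", bytes, avg_us);
        free(sbuf);
        free(rbuf);
    }

    MPI_Finalize();
    return 0;
}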


Optimized CMA-based Collectives for Large Messages

[Figures: Performance of MPI_Gather on KNL nodes (64 PPN), latency (us) vs. message size (1K-4M) on 2 nodes/128 processes, 4 nodes/256 processes, and 8 nodes/512 processes, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and Tuned CMA; callouts show the Tuned CMA design being roughly 2.5x, 3.2x, 4x, and up to 17x better.]

• Significant improvement over existing implementations for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER); a sketch of the underlying CMA primitive follows below
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)

S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist

Available since MVAPICH2-X 2.3b
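CMA (Cross Memory Attach) lets one process copy another intra-node process's memory with a single system call, which is what the kernel-assisted collectives above build on. A minimal sketch of the primitive for a gather-like pattern (Linux-specific, all ranks on one node; not the MVAPICH2 implementation):

/* Sketch of the CMA primitive behind kernel-assisted collectives:
 * the root copies each peer's buffer directly with process_vm_readv(),
 * avoiding an intermediate shared-memory copy. Linux-only; assumes all
 * ranks run on the same node and ptrace permission allows the copy. */
#define _GNU_SOURCE
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const size_t len = 1 << 20;                  /* 1 MB contribution per rank */
    char *mine = malloc(len);
    memset(mine, 'A' + rank, len);

    /* Exchange (pid, address) pairs so the root can attach to peers. */
    struct { long pid; void *addr; } info = { getpid(), mine }, *all = NULL;
    if (rank == 0) all = malloc(size * sizeof(info));
    MPI_Gather(&info, sizeof(info), MPI_BYTE, all, sizeof(info), MPI_BYTE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        char *gathered = malloc((size_t)size * len);
        memcpy(gathered, mine, len);             /* root's own piece */
        for (int r = 1; r < size; r++) {
            struct iovec local  = { gathered + (size_t)r * len, len };
            struct iovec remote = { all[r].addr, len };
            if (process_vm_readv((pid_t)all[r].pid, &local, 1, &remote, 1, 0) < 0)
                perror("process_vm_readv");
        }
        printf("gathered %zu bytes from %d ranks\n", (size_t)size * len, size);
        free(gathered);
    }

    /* Peers must keep their buffers alive until the root is done. */
    MPI_Barrier(MPI_COMM_WORLD);
    free(mine);
    free(all);
    MPI_Finalize();
    return 0;
}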


Shared Address Space (XPMEM)-based Collectives Design

• "Shared Address Space"-based true zero-copy reduction collective designs in MVAPICH2 (a sketch of the XPMEM primitive follows below)

• Offloads computation/communication to peer ranks in the reduction collective operation

• Up to 4X improvement for 4 MB Reduce and up to 1.8X improvement for 4 MB Allreduce

[Figures: OSU_Reduce and OSU_Allreduce latency (us) on Broadwell, 256 processes, message sizes 16K-4M, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X-2.3rc1; callouts show the 4X (Reduce) and 1.8X (Allreduce) advantages at 4 MB.]

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.

Available in MVAPICH2-X 2.3rc1
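XPMEM exports a region of one process's address space so that peers on the same node can attach it and load/store it directly, which is what enables the zero-copy reductions above. A minimal sketch of the primitive (assumes the xpmem kernel module, headers, and library are installed; this is not the MVAPICH2-X design itself):

/* Sketch of XPMEM-based zero-copy reduction: rank 0 exposes its buffer,
 * a peer attaches it and reduces directly against its own buffer without
 * any intermediate copy. All ranks must be on the same node. */
#include <mpi.h>
#include <xpmem.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)                /* floats per rank */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *buf = malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) buf[i] = (float)rank;

    xpmem_segid_t segid = -1;
    if (rank == 0)                 /* export rank 0's buffer to peers */
        segid = xpmem_make(buf, N * sizeof(float), XPMEM_PERMIT_MODE, (void *)0666);
    MPI_Bcast(&segid, sizeof(segid), MPI_BYTE, 0, MPI_COMM_WORLD);

    if (rank == 1) {
        xpmem_apid_t apid = xpmem_get(segid, XPMEM_RDWR, XPMEM_PERMIT_MODE, (void *)0666);
        struct xpmem_addr addr = { apid, 0 };
        float *remote = xpmem_attach(addr, N * sizeof(float), NULL);

        for (int i = 0; i < N; i++)           /* direct loads from rank 0's memory */
            buf[i] += remote[i];

        xpmem_detach(remote);
        xpmem_release(apid);
        printf("rank 1 reduced directly against rank 0's buffer\n");
    }

    MPI_Barrier(MPI_COMM_WORLD);   /* keep the exported segment alive until peers finish */
    if (rank == 0) xpmem_remove(segid);
    free(buf);
    MPI_Finalize();
    return 0;
}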


Application-Level Benefits of XPMEM-Based Collectives

• Up to 20% benefit over IMPI for CNTK DNN training using Allreduce
• Up to 27% benefit over IMPI and up to 15% improvement over MVAPICH2 for the MiniAMR application kernel

[Figures: execution time (s) vs. number of processes for CNTK AlexNet training (Broadwell, default batch size, 50 iterations, 28 PPN; 28-224 processes) and MiniAMR (Broadwell, 16 PPN; 16-256 processes), comparing Intel MPI, MVAPICH2, and MVAPICH2-XPMEM; callouts of 9%, 15%, 20%, and 27%.]


Impact of SALaR (Scalable Large Message Collectives) Designs on CNTK

• CPU-based training of the AlexNet neural network using the ImageNet ILSVRC2012 dataset

• SALaR designs show up to 46% improved performance over MVAPICH2 at 896 processes

• The proposed designs show good scalability with increasing system size

[Figure: CNTK samples per second (higher is better); 46% callout.]

M. Bayatpour, J. Hashmi, S. Chakraborty, P. Kousha, H. Subramoni, and D. K. Panda, SALaR: Scalable and Adaptive Designs for Large Message Reduction Collectives, IEEE Cluster '18 (Best Paper, Architecture). Will be available in a future MVAPICH2 release.


Benefits of A New Asynchronous Progress Design: SPEC MPI 2008

• Up to 25% performance improvement for SPEC MPI applications on 384 processes with KNL + Omni-Path
• Up to 38% performance improvement for SPEC MPI applications on 384 processes with Skylake + Omni-Path
(A minimal sketch of the overlap pattern this design targets follows below.)

[Figures: SPEC MPI on KNL + Omni-Path (384 processes: 8 nodes, 48 PPN) and on Skylake + Omni-Path (384 processes: 6 nodes, 64 PPN); per-application callouts of 10%, 13%, 18%, 25%, 29%, and 38%.]

Available in MVAPICH2-X 2.3rc1

A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D. K. Panda, Efficient Asynchronous Communication Progress for MPI without Dedicated Resources, EuroMPI ‘18
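Asynchronous progress targets the common pattern in which communication posted with non-blocking calls should advance while the application computes; a minimal overlap sketch (illustrative only, not the EuroMPI '18 design):

/* Sketch of the communication/computation overlap pattern that
 * asynchronous progress is meant to accelerate: a non-blocking halo
 * exchange proceeds in the background while the rank computes. */
#include <mpi.h>
#include <stdio.h>

#define N 4096

static void compute_interior(double *u, int n)
{
    for (int i = 1; i < n - 1; i++)        /* work that does not need the halo */
        u[i] = 0.5 * (u[i - 1] + u[i + 1]);
}

int main(int argc, char **argv)
{
    int rank, size;
    double u[N], halo_left, halo_right;
    MPI_Request req[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    for (int i = 0; i < N; i++) u[i] = rank;

    int left = (rank - 1 + size) % size, right = (rank + 1) % size;

    /* Post the exchange, then compute while it progresses. */
    MPI_Irecv(&halo_left,  1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&halo_right, 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(&u[0],       1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(&u[N - 1],   1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

    compute_interior(u, N);                /* overlapped with the transfers */

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    if (rank == 0) printf("halo exchange overlapped with compute\n");

    MPI_Finalize();
    return 0;
}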


Benefits of the New Asynchronous Progress Design: P3DFFT

Up to 44% performance improvement with the P3DFFT application with 448 processes

Broadwell + InfiniBand

[Figure: P3DFFT execution time at 28 PPN; callouts of 6%, 12%, 33%, and 44% improvement.]


● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables

● Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU

● Control the runtime's behavior via MPI Control Variables (CVARs)

● Introduced support for new MPI_T-based CVARs in MVAPICH2 (an MPI_T usage sketch follows below)

○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE

● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications

● S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, EuroMPI/USA ‘17, Best Paper Finalist

Performance Engineering Applications using MVAPICH2 and TAU

[Figures: VBUF usage without CVAR-based tuning and with CVAR-based tuning, as displayed by ParaProf.]
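The MPI_T interface these designs build on can also be exercised directly from an application; a minimal sketch that enumerates control variables and reads one of the MVAPICH2 CVARs named above (hedged: which CVARs exist, and their types, depend on the MVAPICH2 build; the sketch assumes an int-typed variable):

/* Sketch: enumerate MPI_T control variables (CVARs) and read the
 * MVAPICH2 VBUF pool size CVAR if the library exposes it. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int provided, ncvar;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvar);
    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);

        if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) {
            MPI_T_cvar_handle handle;
            int count, value;
            MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
            MPI_T_cvar_read(handle, &value);       /* assumes an int-typed CVAR */
            printf("%s = %d\n", name, value);
            MPI_T_cvar_handle_free(&handle);
        }
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}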


Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Integrated support for GPGPUs and Deep Learning
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices
• High-performance MPI library for Cloud


GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

At Sender: MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(data movement handled inside MVAPICH2)

• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers

High Performance and High Productivity


CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases

• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory


Optimized MVAPICH2-GDR Design

[Figures: GPU-GPU inter-node latency (us), bandwidth (MB/s), and bi-directional bandwidth (MB/s) vs. message size (bytes), comparing MV2-(NO-GDR) and MV2-GDR-2.3rc1; 1.88 us small-message latency and roughly 9x-11x improvements with GDR.]

Platform: MVAPICH2-GDR-2.3, Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPUDirect RDMA


• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

Application-Level Evaluation (HOOMD-blue)

[Figures: average time steps per second (TPS) vs. number of processes (4-32) for 64K and 256K particles, comparing MV2 and MV2+GDR; about 2X improvement in both cases.]


Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland

[Figures: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs), comparing Default, Callback-based, and Event-based designs.]

• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16

On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application

Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/


• MVAPICH2-GDR offers excellent performance via advanced designs for MPI_Allreduce.

• Up to 22% better performance on Wilkes2 cluster (16 GPUs)

Exploiting CUDA-Aware MPI for TensorFlow (Horovod)

[Figure: images/sec (higher is better) vs. number of GPUs (1-16, 4 GPUs/node), comparing MVAPICH2 and MVAPICH2-GDR.]

MVAPICH2-GDR is up to 22% faster than MVAPICH2


MVAPICH2-GDR: Allreduce Comparison with Baidu and OpenMPI

• 16 GPUs (4 nodes): MVAPICH2-GDR vs. Baidu-Allreduce and OpenMPI 3.0

[Figures: Allreduce latency (us) vs. message size (bytes) for small, medium (512 KB-4 MB), and large (8 MB-512 MB) messages, comparing MVAPICH2, BAIDU, and OPENMPI. Callouts: MV2 is ~2X better than Baidu, with ~4X, ~10X, and ~30X advantages in other ranges; OpenMPI is ~5X slower than Baidu.]

*Available since MVAPICH2-GDR 2.3a


MVAPICH2-GDR vs. NCCL2 – Allreduce Operation

• Optimized designs in MVAPICH2-GDR 2.3rc1 offer better/comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs

[Figures: Allreduce latency (us) vs. message size (bytes) for small (4 B-64 KB) and large messages, comparing MVAPICH2-GDR and NCCL2; callouts of ~1.2X and ~3X in favor of MVAPICH2-GDR.]

Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, one K-80 GPU, and EDR InfiniBand interconnect


• To address the limitations of Caffe and existing MPI runtimes, we propose the OSU-Caffe (S-Caffe) framework

• At the application (DL framework) level

– Develop a fine-grain workflow – i.e. layer-wise communication instead of communicating the entire model

• At the runtime (MPI) level

– Develop support to perform reduction of very-large GPU buffers

– Perform reduction using GPU kernels

OSU-Caffe: Proposed Co-Design Overview

OSU-Caffe is available from the HiDL project page: http://hidl.cse.ohio-state.edu
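The layer-wise communication idea can be illustrated with non-blocking reductions: each layer's gradients are reduced as soon as they are produced instead of waiting for the whole model (a sketch of the concept only, not the OSU-Caffe implementation):

/* Sketch of layer-wise gradient reduction: one MPI_Iallreduce per layer
 * so communication of earlier layers overlaps with the rest of the
 * backward pass, instead of a single reduction over the entire model. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_LAYERS 8
#define LAYER_SIZE (1 << 18)            /* gradient elements per layer (floats) */

int main(int argc, char **argv)
{
    MPI_Request req[NUM_LAYERS];
    float *grad[NUM_LAYERS];

    MPI_Init(&argc, &argv);

    for (int l = 0; l < NUM_LAYERS; l++) {
        grad[l] = calloc(LAYER_SIZE, sizeof(float));

        /* ... backward pass would produce grad[l] here ... */

        /* Reduce this layer immediately; later layers keep computing. */
        MPI_Iallreduce(MPI_IN_PLACE, grad[l], LAYER_SIZE, MPI_FLOAT,
                       MPI_SUM, MPI_COMM_WORLD, &req[l]);
    }

    MPI_Waitall(NUM_LAYERS, req, MPI_STATUSES_IGNORE);   /* all layers reduced */

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) printf("layer-wise allreduce of %d layers complete\n", NUM_LAYERS);

    for (int l = 0; l < NUM_LAYERS; l++) free(grad[l]);
    MPI_Finalize();
    return 0;
}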


S-Caffe vs. Inspur-Caffe and Microsoft CNTK

• AlexNet: notoriously hard to scale out on multiple nodes due to communication overhead
  – Large number of parameters, ~64 million (comm. buffer size = 256 MB)
• GoogLeNet is a popular DNN
  – 13 million parameters (comm. buffer size = ~50 MB)

S-Caffe delivers better or comparable performance with other multi-node capable DL frameworks

[Charts: up to 14% improvement (scale-up); impact of HR.]


Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Integrated support for GPGPUs and Deep Learning
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices
• High-performance MPI library for Cloud


Intra-node Point-to-Point Performance on OpenPOWER

[Figures: intra-socket small-message latency (0.30 us), large-message latency, bandwidth, and bi-directional bandwidth vs. message size, comparing MVAPICH2-2.3, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0.]

Platform: two OpenPOWER (Power8-ppc64le) nodes with Mellanox EDR (MT4115) HCA


Inter-node Point-to-Point Performance on OpenPOWER

[Figures: small-message latency, large-message latency, bandwidth, and bi-directional bandwidth vs. message size, comparing MVAPICH2-2.3, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0.]

Platform: two OpenPOWER (Power8-ppc64le) nodes with Mellanox EDR (MT4115) HCA


MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)

[Figures: intra-node latency (small and large messages) and bandwidth for intra-socket (NVLink) and inter-socket paths, plus inter-node latency (small and large messages) and bandwidth, vs. message size.]

• Intra-node bandwidth: 33.2 GB/sec (NVLink); intra-node latency: 13.8 us (without GPUDirect RDMA)
• Inter-node latency: 23 us (without GPUDirect RDMA); inter-node bandwidth: 6 GB/sec (FDR)

Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and 4X-FDR InfiniBand interconnect

Available since MVAPICH2-GDR 2.3a


Scalable Host-based Collectives with CMA on OpenPOWER (Intra-node Reduce & Alltoall)

• Up to 5X and 3X performance improvement by MVAPICH2 for small and large messages, respectively

[Figures: Reduce and Alltoall latency (us) on one node, 20 PPN, for small (4 B-4 KB) and large (8K-1M) messages, comparing MVAPICH2-X-2.3rc1, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0; callouts range from 1.2X to 5.2X.]


Optimized All-Reduce with XPMEM on OpenPOWER

• Optimized MPI All-Reduce design in MVAPICH2
  – Up to 2X performance improvement over Spectrum MPI and 4X over OpenMPI for intra-node

[Figures: Allreduce latency (us) for 16K-2M messages on one node and two nodes, 20 PPN, comparing MVAPICH2-X-2.3rc1, SpectrumMPI-10.1.0, and OpenMPI-3.0.0; callouts include 2X, 3X, 3.3X, 4X, 34%, and 48%.]

Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch


Intra-node Point-to-point Performance on ARM Cortex-A72

[Figures: small-message latency (0.27 us at 1 byte), large-message latency, bandwidth, and bi-directional bandwidth for MVAPICH2-2.3.]

Platform: ARM Cortex-A72 (aarch64) dual-socket CPU with 64 cores (32 cores per socket).


Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Integrated support for GPGPUs and Deep Learning
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices
• High-performance MPI library for Cloud


SPEC MPI 2007 Benchmarks: Broadwell + InfiniBand

MVAPICH2-X outperforms Intel MPI by up to 31%

[Figure: execution time (s) for MILC, Leslie3D, POP2, LAMMPS, WRF2, and LU, comparing Intel MPI 18.1.163 and MVAPICH2-X-2.3rc1; per-benchmark differences of 31%, 29%, 5%, -12%, 1%, and 11%.]

Configuration: 448 processes on 16 Intel E5-2680v4 (Broadwell) nodes with 28 PPN, interconnected with 100 Gbps Mellanox MT4115 EDR ConnectX-4 HCAs


SPEC MPI 2007 Benchmarks: KNL + Omni-Path

MVAPICH2-X outperforms Intel MPI by up to 22%

[Figure: execution time (s) for MILC, Leslie3D, POP2, LAMMPS, WRF2, GAP, and LU, comparing Intel MPI 18.0.2 and MVAPICH2-X-2.3rc1; per-benchmark differences of 22%, 15%, 10%, 10%, 4%, -2%, and -7%.]

Configuration: 384 processes on 8 Intel Xeon Phi 7250 (KNL) nodes with 48 processes per node; each KNL has 68 cores on a single socket, and the nodes are interconnected with a 100 Gb/sec Intel Omni-Path network.


Application Scalability on Skylake and KNL (Stampede2)

[Figures: execution time (s) vs. number of processes for three applications with MVAPICH2:
• MiniFE (1300x1300x1300, ~910 GB) on KNL (64 PPN, 2048-8192 processes) and Skylake (48 PPN, 2048-8192 processes)
• NEURON (YuEtAl2012) on KNL (64 PPN, 64-4096 processes) and Skylake (48 PPN, 48-768 processes)
• Cloverleaf (bm64, MPI+OpenMP, NUM_OMP_THREADS = 2) on KNL (68 PPN, 68-4352 processes) and Skylake (48 PPN, 48-3072 processes)]

Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K

Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC. Testbed: TACC Stampede2 using MVAPICH2-2.3b


• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• A list of such contributions is compiled on the MVAPICH website
  – http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
  – Amber
  – HoomDBlue
  – HPCG
  – Lulesh
  – MILC
  – Neuron
  – SMG2000
  – Cloverleaf
  – SPEC (LAMMPS, POP2, TERA_TF, WRF2)
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu
• We will link these results with credit to you

Applications-Level Tuning: Compilation of Best Practices


Overview of a Few Challenges being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
• Integrated support for GPGPUs and Deep Learning
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices
• High-performance MPI library for Cloud


• Virtualization has many benefits
  – Fault-tolerance
  – Job migration
  – Compaction

• It has not been very popular in HPC due to the overhead associated with virtualization

• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field

• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
  – OpenStack, Docker, and Singularity

Can HPC and Virtualization be Combined?

J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters?, EuroPar '14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid '15


Application-Level Performance on Chameleon

• 32 VMs, 6 cores/VM
• Compared to native, 2-5% overhead for Graph500 with 128 processes
• Compared to native, 1-9.5% overhead for SPEC MPI 2007 with 128 processes

[Figures: SPEC MPI 2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, and Graph500 execution time (ms) for problem sizes (scale, edgefactor) from (22,20) to (26,16), comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native.]

A release for Azure is coming soon


Application-Level Performance on Singularity with MVAPICH2

• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively

[Figures: NPB Class D execution time (s) for CG, EP, FT, IS, LU, and MG, and Graph500 BFS execution time (ms) for problem sizes (scale, edgefactor) from (22,16) to (26,20), comparing Singularity and native runs.]

J. Zhang, X. Lu and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC '17, Best Student Paper Award


MVAPICH2-GDR on Container with Negligible Overhead

[Figures: GPU-GPU inter-node latency (us), bandwidth (MB/s), and bi-directional bandwidth (MB/s) vs. message size (bytes), comparing Docker and native runs.]

Platform: MVAPICH2-GDR-2.3a, Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPUDirect RDMA

Works with NVIDIA HPC Container Maker: https://github.com/NVIDIA/hpc-container-maker/blob/master/recipes/hpcbase-pgi-mvapich2.py


MVAPICH2 – Plans for Exascale

• Performance and memory scalability toward 1M-10M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF, …)
  – MPI + Task*
• Enhanced optimization for GPUs and FPGAs*
• Taking advantage of advanced features of Mellanox InfiniBand
  – Tag Matching*
  – Adapter Memory*
• Enhanced communication schemes for upcoming architectures
  – NVLINK*
  – CAPI*
• Extended topology-aware collectives
• Extended energy-aware designs and virtualization support
• Extended support for the MPI Tools Interface (as in MPI 3.0)
• Extended FT support
• Support for * features will be available in future MVAPICH2 releases


Funding Acknowledgments

Funding Support by

Equipment Support by


Personnel Acknowledgments

Current Students (Graduate)

– A. Awan (Ph.D.)

– M. Bayatpour (Ph.D.)

– S. Chakraborthy (Ph.D.)

– C.-H. Chu (Ph.D.)

– S. Guganani (Ph.D.)

Past Students

– A. Augustine (M.S.)

– P. Balaji (Ph.D.)

– R. Biswas (M.S.)

– S. Bhagvat (M.S.)

– A. Bhat (M.S.)

– D. Buntinas (Ph.D.)

– L. Chai (Ph.D.)

– B. Chandrasekharan (M.S.)

– N. Dandapanthula (M.S.)

– V. Dhanraj (M.S.)

– T. Gangadharappa (M.S.)

– K. Gopalakrishnan (M.S.)

– R. Rajachandrasekar (Ph.D.)

– G. Santhanaraman (Ph.D.)

– A. Singh (Ph.D.)

– J. Sridhar (M.S.)

– S. Sur (Ph.D.)

– H. Subramoni (Ph.D.)

– K. Vaidyanathan (Ph.D.)

– A. Vishnu (Ph.D.)

– J. Wu (Ph.D.)

– W. Yu (Ph.D.)

– J. Zhang (Ph.D.)

Past Research Scientist

– K. Hamidouche

– S. Sur

Past Post-Docs

– D. Banerjee

– X. Besseron

– H.-W. Jin

– W. Huang (Ph.D.)

– W. Jiang (M.S.)

– J. Jose (Ph.D.)

– S. Kini (M.S.)

– M. Koop (Ph.D.)

– K. Kulkarni (M.S.)

– R. Kumar (M.S.)

– S. Krishnamoorthy (M.S.)

– K. Kandalla (Ph.D.)

– M. Li (Ph.D.)

– P. Lai (M.S.)

– J. Liu (Ph.D.)

– M. Luo (Ph.D.)

– A. Mamidala (Ph.D.)

– G. Marsh (M.S.)

– V. Meshram (M.S.)

– A. Moody (M.S.)

– S. Naravula (Ph.D.)

– R. Noronha (Ph.D.)

– X. Ouyang (Ph.D.)

– S. Pai (M.S.)

– S. Potluri (Ph.D.)

– J. Hashmi (Ph.D.)

– H. Javed (Ph.D.)

– P. Kousha (Ph.D.)

– D. Shankar (Ph.D.)

– H. Shi (Ph.D.)

– J. Lin

– M. Luo

– E. Mancini

Current Research Scientists

– X. Lu

– H. Subramoni

Past Programmers

– D. Bureddy

– J. Perkins

Current Research Specialist

– J. Smith

– M. Arnold

– S. Marcarelli

– J. Vienne

– H. Wang

Current Post-doc

– A. Ruhela

– K. Manian

Current Students (Undergraduate)

– V. Gangal (B.S.)

– M. Haupt (B.S.)

– N. Sarkauskas (B.S.)


Thank You!

Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/

[email protected]

The High-Performance MPI/PGAS Project
http://mvapich.cse.ohio-state.edu/

The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/

The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/