HPE Deep Learning Cookbook: Recipes to Run Deep Learning Workloads
Natalia Vassilieva, Sergey Serebryakov


Page 1

HPE Deep Learning Cookbook: Recipes to Run Deep Learning Workloads
Natalia Vassilieva, Sergey Serebryakov

Page 2

Deep learning ecosystem today

Hardware

Software

Page 3

HPE’s portfolio for deep learning

Compute ideal for training models in data center
• HPE Apollo 6500: the enterprise bridge to accelerated computing
• HPE SGI 8600: petaflop scale for deep learning and HPC
• HPE Apollo sx40: maximize GPU capacity and performance with lower TCO

Edge analytics and inference engine
• HPE Edgeline EL4000: unprecedented deep edge compute and high-capacity storage; open standards

Compute for both training models and inference at the edge
• HPE Apollo 2000: the bridge to enterprise scale-out architecture

HPC Storage
• HPE Apollo 4520
• HPC Data Management Framework Software: large-scale storage virtualization and tiered data management platform

Choice of Fabrics
• Intel® Omni-Path Architecture
• Mellanox InfiniBand
• HPE FlexFabric Network
• Arista Networking

AI Software Framework
• Easy setup and flexible OS using Bright Computing's distribution of deep learning software development components and workload management tool integration

Industries
• Government, academia and industries
• Financial services
• Life Sciences, Health
• Government and academia
• Autonomous vehicles / Mfg.

Services
• Advisory, professional and operational services, HPE Flexible Capacity, HPE Datacenter Care for Hyperscale

Page 4


How to pick the right hardware/software stack?

Page 5

One size does NOT fit all


Data type

Data size

Application

Model (topology of the artificial neural network):
− How many layers
− How many neurons per layer
− Connections between neurons (types of layers)
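
These topology choices are what drive the parameter counts and compute costs shown on the next slide. As a hedged illustration only (the layer widths below are arbitrary examples, not taken from any model in this deck), a fully connected network's parameter count follows directly from its layer widths:

```python
# Illustrative only: count parameters of a small fully connected network
# from its topology (layer widths). The widths are arbitrary examples.
def dense_params(layer_widths):
    """Weights + biases for consecutive fully connected layers."""
    total = 0
    for n_in, n_out in zip(layer_widths, layer_widths[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

widths = [784, 512, 512, 10]          # input, two hidden layers, output
n_params = dense_params(widths)
print(f"{n_params:,} parameters, ~{n_params * 4 / 2**20:.1f} MB at FP32")
```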

Page 6

Popular models


Name                 Type   Model size (# params)   Model size (MB)   GFLOPs (forward pass)
AlexNet              CNN    60,965,224              233 MB            0.7
GoogleNet            CNN    6,998,552               27 MB             1.6
VGG-16               CNN    138,357,544             528 MB            15.5
VGG-19               CNN    143,667,240             548 MB            19.6
ResNet50             CNN    25,610,269              98 MB             3.9
ResNet101            CNN    44,654,608              170 MB            7.6
ResNet152            CNN    60,344,387              230 MB            11.3
Eng Acoustic Model   RNN    34,678,784              132 MB            0.035
TextCNN              CNN    151,690                 0.6 MB            0.009
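
The two size columns are consistent with 32-bit (4-byte) parameters; the short sketch below reproduces the "Model size (MB)" column from the parameter counts for a few rows of the table (values copied from the table above):

```python
# Reproduce the "Model size (MB)" column: parameters stored as 32-bit floats,
# so size in MB is roughly params * 4 bytes / 2**20.
models = {
    "AlexNet": 60_965_224,        # ~233 MB
    "GoogleNet": 6_998_552,       # ~27 MB
    "VGG-16": 138_357_544,        # ~528 MB
    "ResNet50": 25_610_269,       # ~98 MB
    "Eng Acoustic Model": 34_678_784,  # ~132 MB
}
for name, params in models.items():
    print(f"{name:20s} {params * 4 / 2**20:7.1f} MB")
```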


Page 8

Distributed training with data parallelism

Fetch data
• Mini-batch size
• I/O bandwidth & latency
• Pre-processing on the fly

Compute gradients
• Replica batch size
• Forward/backward computational complexity
• Computational power of a node

Aggregate model updates
• Model size
• Number of workers/nodes
• Interconnect bandwidth & latency
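
As a minimal sketch of this synchronous data-parallel pattern, assuming a toy linear model and synthetic data (nothing here comes from the cookbook itself), each worker computes gradients on its replica batch and the updates are averaged before being applied:

```python
# Synchronous data parallelism in miniature: each worker computes gradients on
# its replica batch, then the updates are averaged into one model update.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(8)                          # shared model parameters
X = rng.normal(size=(64, 8))             # global mini-batch (size 64)
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=64)

n_workers = 4
for step in range(100):
    grads = []
    for Xr, yr in zip(np.array_split(X, n_workers), np.array_split(y, n_workers)):
        err = Xr @ w - yr                        # forward pass on the replica batch
        grads.append(2 * Xr.T @ err / len(yr))   # backward pass (MSE gradient)
    w -= 0.05 * np.mean(grads, axis=0)           # aggregate model updates
print("final training loss:", float(np.mean((X @ w - y) ** 2)))
```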

Page 9

HPE Deep Learning Cookbook: Overview


A comprehensive set of tools to guide the choice of the best hardware and software environment for different deep learning workloads.

‒ Eliminate the "guesswork" with the Deep Learning Performance Guide: tap into a massive pool of performance data to reason about the optimal hardware configuration for your workload

‒ Validate the configuration of your hardware and software environment with the Deep Learning Benchmarking Suite

‒ Get started fast with one of our Reference Designs as a default technology recipe

https://developer.hpe.com/platform/hpe-deep-learning-cookbook/home

Page 10

HPE Deep Learning Cookbook: Main components

HPE Deep Learning Benchmarking Suite
Automated benchmarking tool to collect performance of different deep learning workloads on various hardware and software configurations.
Available on GitHub.

HPE Deep Learning Performance Guide
A web-based tool to guide the choice of an optimal hardware and software configuration via analysis of collected performance data and application of performance models.
To be released in 2018; will be hosted by HPE.

Reference Designs
Reference hardware/software stacks for particular classes of deep learning workloads.
Image Classification Reference Design released.

Page 11

HPE Deep Learning Benchmarking Suite


Page 12

HPE Deep Learning Benchmarking Suite


A tool to benchmark various DL frameworks and models. To be open sourced in November 2017.
• 8 frameworks, 1 inference runtime: TensorFlow, BVLC/NVIDIA/Intel Caffe, Caffe2, MXNet, PyTorch, TensorRT
• 18 models (AlexNet, GoogleNet, ResNets, VGGs, …)
• Host / Docker benchmarks
• Docker files for all supported framework / SW combinations

Resource monitoring:
• GPU/CPU/memory utilization

Report builders:
• Weak/strong scaling
• Benchmark statistics and charts

More information:
• GitHub: https://github.com/HewlettPackard/dlcookbook-dlbs
• Documentation: https://hewlettpackard.github.io/dlcookbook-dlbs
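
For orientation, a single benchmark is typically driven through the suite's experimenter script (shown in the architecture on the next slide). The snippet below is a rough, non-authoritative sketch: it assumes the repository above has been cloned and the command is run from its root, and the parameter names (exp.framework, exp.model, exp.gpus, exp.log_file) follow the GitHub documentation linked above; check the docs for the exact interface.

```python
# Rough sketch of launching one benchmark with the Deep Learning Benchmarking
# Suite. Script path and parameter names are taken from the project docs and
# should be verified there; run from the root of the cloned repository.
import subprocess

cmd = (
    "python ./python/dlbs/experimenter.py run "
    "-Pexp.framework='\"tensorflow\"' "      # framework to benchmark
    "-Pexp.model='\"resnet50\"' "            # one of the supported models
    "-Pexp.gpus='\"0\"' "                    # GPU(s) to use
    "-Pexp.log_file='\"./logs/tf_resnet50.log\"'"
)
subprocess.run(cmd, shell=True, check=True)
```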

Page 13

Benchmarking Suite Architecture


GitHub: https://github.com/HewlettPackard/dlcookbook-dlbs
Documentation: https://hewlettpackard.github.io/dlcookbook-dlbs

Flow of a benchmark run (diagram):
• Benchmarks specification and default parameters define which benchmarks to run.
• Experimenter: configures benchmarks and runs them one at a time.
• Framework launchers (Caffe, Caffe2, TensorRT, MXNet, TensorFlow): mediators between the experimenter and the frameworks.
• Benchmark backends (Caffe itself, Caffe2 Benchmarks, TensorRT Benchmarks, MXNet Benchmarks, TF CNN Benchmarks, PyTorch Benchmarks): run inference and training.
• Frameworks (Caffe, Caffe2, TensorRT, MXNet, TensorFlow, PyTorch): standard or custom builds, running in Docker and/or on the host.

Page 14

How to expand


GitHub: https://github.com/HewlettPackard/dlcookbook-dlbs
Documentation: https://hewlettpackard.github.io/dlcookbook-dlbs

The same flow, extended with user-supplied components:
• A custom workload for an already supported framework, e.g. "My TF launcher" driving "My TF workload" alongside the stock TF CNN Benchmarks.
• A new framework integrated end to end, e.g. a CNTK launcher, CNTK Benchmarks and the CNTK framework itself.
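
At a minimum, a custom workload is a script the new launcher can execute that runs training or inference and reports timings. The sketch below is purely illustrative: NumPy stands in for a real framework, and the printed metric names are invented examples rather than the suite's actual log format.

```python
# Illustrative custom workload: run N timed "batches" and report throughput.
# A real workload would call into TensorFlow/PyTorch/etc.; NumPy stands in here.
import time
import numpy as np

rng = np.random.default_rng(0)
batch_size, num_batches = 16, 20
batch = rng.random((batch_size, 4096), dtype=np.float32)   # synthetic input
weights = rng.random((4096, 4096), dtype=np.float32)       # stand-in "model"

def run_batch(x):
    """Stand-in for one forward/backward pass of a real framework."""
    return x @ weights

times = []
for _ in range(num_batches):
    start = time.perf_counter()
    run_batch(batch)
    times.append(time.perf_counter() - start)

# Example metric names only; not the benchmarking suite's real output format.
print(f"batch_time_ms={1000 * sum(times) / len(times):.3f}")
print(f"throughput_instances_per_sec={batch_size * len(times) / sum(times):.1f}")
```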

Page 15

Quick Start


Page 16

HPE Deep Learning Performance Guide


Page 17


Page 18

Demo


Page 19

Performance Models (TensorFlow)


Performance and scalability models for deep learning workloads in different hardware and software environments.

• Computation models are based on machine learning (Support Vector Regression)
• Communication models are based on analytical models

[Chart: actual vs. predicted performance for GoogleNet, ResNet101, ResNet152, ResNet50, VGG16 and VGG19 at batch size 64]
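
As an illustration of the computation-model idea (Support Vector Regression over workload descriptors), the sketch below fits an SVR on synthetic data; the features (forward-pass GFLOPs, replica batch size) and the generated timings are invented for this example and are not the cookbook's actual feature set or training data.

```python
# Illustrative computation model: fit Support Vector Regression to predict
# per-batch time from simple workload descriptors. All data here is synthetic;
# the cookbook's real models are trained on measured benchmark results.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
gflops = rng.uniform(0.5, 20.0, size=200)        # forward-pass GFLOPs
batch = rng.choice([16, 32, 64, 128], size=200)  # replica batch size
X = np.column_stack([gflops, batch])
y = 0.8 * gflops * batch / 64 + rng.normal(0, 0.2, size=200)  # synthetic batch time

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print("predicted batch time:", model.predict([[3.9, 64]]))  # ResNet50-like point
```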

Page 20

Reference Designs


Page 21

Reference Designs


A reference hardware/software stack for a particular class of workloads:
• Image classification
• Natural Language Processing
• Speech Recognition
• Video Analytics

Image Classification Reference Design:

Page 22

Thank you
Natalia Vassilieva, [email protected]
Sergey Serebryakov, [email protected]


HPE Deep Learning Cookbook
https://developer.hpe.com/platform/hpe-deep-learning-cookbook/home