
HIGH PERFORMANCE DISTRIBUTED TENSORFLOW IN PRODUCTION WITH GPUS (AND KUBERNETES)

NIPS CONFERENCE / LOS ANGELES BIG DATA MEETUP / SO-CAL PYDATA MEETUP / DECEMBER 2017

CHRIS FREGLY, FOUNDER @ PIPELINE.AI

INTRODUCTIONS: ME
§ Chris Fregly, Founder @ PipelineAI

§ Formerly Netflix, Databricks, IBM Spark Tech

§ Advanced Spark and TensorFlow Meetup
  § Please Join Our 60,000+ Global Members!!

Contact Me: chris@pipeline.ai

@cfregly

Global Locations: San Francisco, Chicago, Austin, Washington DC, Dusseldorf, London

INTRODUCTIONS: YOU
§ Software Engineer, Data Scientist, Data Engineer, Data Analyst

§ Interested in Optimizing and Deploying TF Models to Production

§ Nice to Have a Working Knowledge of TensorFlow (Not Required)

AGENDA

Part 0: Introductions and Setup

Part 1: Optimize TensorFlow Training

Part 2: Optimize TensorFlow Serving

PIPELINE.AI IS 100% OPEN SOURCE

§ https://github.com/PipelineAI/pipeline/

§ Please Star 🌟 this GitHub Repo!

§ All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml

PIPELINE.AI OVERVIEW
§ 450,000 Docker Downloads
§ 60,000 Users Registered for GA
§ 60,000 Meetup Members
§ 50,000 LinkedIn Followers
§ 2,200 GitHub Stars
§ 12 Enterprise Beta Users

WHY HEAVY FOCUS ON MODEL SERVING?

Model Training
§ Batch & Boring
§ Offline in Research Lab
§ Pipeline Ends at Training
§ No Insight into Live Production
§ Small Number of Data Scientists
§ Optimizations Very Well-Known
§ 100’s of Training Jobs per Day

Model Serving
§ Real-Time & Exciting!!
§ Online in Live Production
§ Pipeline Extends into Production
§ Continuous Insight into Live Production
§ Huuuuuuge Number of Application Users
§ Many Optimizations Not Yet Utilized
§ 1,000,000’s of Predictions per Sec

COMPARE MODELS OFFLINE & ONLINE
§ Offline, Batch Metrics
  § Validation + Training Accuracy
  § CPU + GPU Utilization
§ Online, Real-Time Metrics
  § Live Prediction Values
  § Compare Relative Precision
  § Newly-Seen, Streaming Data
  § Response Time, Throughput
  § Cost ($) Per Prediction

EVERYBODY GETS A GPU!

SETUP ENVIRONMENT

§ Step 1: Browse to the following: http://allocator.community.pipeline.ai/allocate

§ Step 2: Browse to the following: http://<ip-address>

§ Step 3: Browse around. I will provide a Jupyter Username/Password soon.

Need Help? Use the Chat!

VERIFY SETUP

http://<ip-address>

Any username, any password!

HANDS-ON EXERCISES
§ Combo of Jupyter Notebooks and Command Line
§ Command Line through Jupyter Terminal

§ Some Exercises Based on Experimental Features

You May See Errors. Stay Calm. It’s OK!!

LET’S EXPLORE OUR ENVIRONMENT
§ Navigate to the following notebook:

01_Explore_Environment

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

PULSE CHECK

BREAK Need Help? Use the Chat!

§ Please Star 🌟 this GitHub Repo!

§ All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml

AGENDA

Part 1: Optimize TensorFlow Training

§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler

SETTING UP TENSORFLOW WITH GPUS

§ Very Painful!

§ Especially inside Docker
  § Use nvidia-docker

§ Especially on Kubernetes!
  § Use the Latest Kubernetes (with Init Script Support)

§ http://pipeline.ai for GitHub + DockerHub Links

TENSORFLOW + CUDA + NVIDIA GPU

GPU HALF-PRECISION SUPPORT
§ FP32 is “Full Precision”, FP16 is “Half Precision”
§ Two (2) FP16’s in Each FP32 GPU Core for 2x Throughput!
§ Lower Precision is OK for Approx. Deep Learning Use Cases
§ The Network Matters Most – Not Individual Neuron Accuracy
§ Supported by Pascal P100 (2016) and Volta V100 (2017)

You Can Set TF_FP16_MATMUL_USE_FP32_COMPUTE=0 on GPUs with Compute Capability (CC) 5.3+

VOLTA V100 (2017) VS. PASCAL P100 (2016)
§ 84 Streaming Multiprocessors (SM’s)
§ 5,376 GPU Cores
§ 672 Tensor Cores (ie. Google TPU)
§ Mixed FP16/FP32 Precision
  § Matrix Dims Should be Multiples of 8
§ More Shared Memory
§ New L0 Instruction Cache
§ Faster L1 Data Cache
§ V100 vs. P100 Performance
  § 12x Training, 6x Inference

FP32 VS. FP16 ON AWS GPU INSTANCES

FP16 Half Precision
  87.2 T ops/second for p3 Volta V100
  4.1 T ops/second for g3 Tesla M60
  1.6 T ops/second for p2 Tesla K80

FP32 Full Precision
  15.4 T ops/second for p3 Volta V100
  4.0 T ops/second for g3 Tesla M60
  3.3 T ops/second for p2 Tesla K80

WHAT ABOUT GOOGLE CLOUD?
§ Currently Supports the Following:
  § Tesla K80
  § Pascal P100
  § Volta V100 Coming Soon?
  § TPUs (Only in Google Cloud)
§ Attach GPUs to CPU Instances
  § Similar to AWS Elastic GPU, except less confusing

V100 AND CUDA 9
§ Independent Thread Scheduling - Finally!!
  § Similar to CPU fine-grained thread synchronization semantics
  § Allows GPU to yield execution of any thread
§ Still Optimized for SIMT (Same Instruction Multi-Thread)
  § SIMT units automatically scheduled together
§ Explicit Synchronization

[Diagram: thread scheduling on P100 vs. V100]

GPU CUDA PROGRAMMING

§ Barbaric, But Fun

§ Must Know Hardware Very Well

§ Hardware Changes are Painful

§ Use the Profilers & Debuggers

CUDA STREAMS

§ Asynchronous I/O Transfer

§ Overlap Compute and I/O

§ Keep GPUs Saturated!

§ Used Heavily by TensorFlow

[Diagram: bad vs. good overlap of compute and I/O transfers with CUDA streams]

CUDA SHARED AND UNIFIED MEMORY

LET’S SEE WHAT THIS THING CAN DO!
§ Navigate to the following notebooks:

01a_Explore_GPU
01b_Explore_Numba

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

AGENDA

Part 1: Optimize TensorFlow Training

§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler

TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays
  § ie. Scalar, Vector, Matrix
§ Operations: MatMul, Add, SummaryLog, …
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed Inputs into Placeholder
§ Fetches: Fetch Output from Operation
§ Variables: What We Learn Through Training
  § aka “Weights”, “Parameters”
§ Devices: Hardware Device (GPU, CPU, TPU, ...)

[Diagram: the user feeds Inputs and fetches Outputs; TensorFlow performs Operations, flows Tensors, and trains Variables]

with tf.device("/cpu:0,/gpu:15"):

TENSORFLOW SESSION

[Diagram: a Session holds the graph (GraphDef) and Variables, eg. "W": 0.328, "b": -1.407]

Variables are Randomly Initialized, then Periodically Checkpointed

GraphDef is Created During Training, then Frozen for Inference

TENSORFLOW GRAPH EXECUTION

§ Lazy Execution by Default
  § Similar to Spark
§ Eager Execution Now Supported (TensorFlow 1.4+)
  § Similar to PyTorch
§ "Linearize" Execution to Minimize RAM Usage
  § Useful on Single GPU with Limited RAM

OPERATION PARALLELISM
§ Inter-Op (Between-Op) Parallelism
  § By default, TensorFlow runs multiple ops in parallel
  § Useful for low core and small memory/cache envs
  § Set to one (1)
§ Intra-Op (Within-Op) Parallelism
  § Different threads can use same set of data in RAM
  § Useful for compute-bound workloads (CNNs)
  § Set to # of cores (>= 2)

(Both settings are configured via tf.ConfigProto – see the sketch below.)
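A minimal sketch of how these thread-pool settings might be applied; the thread counts below are purely illustrative, not recommendations:

import tensorflow as tf

# Illustrative values only – tune per workload and hardware.
config = tf.ConfigProto(
    inter_op_parallelism_threads=1,   # how many independent ops may run in parallel
    intra_op_parallelism_threads=8)   # threads available inside a single op (eg. MatMul)

sess = tf.Session(config=config)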

TENSORFLOW MODEL
§ MetaGraph
  § Combines GraphDef and Metadata
§ GraphDef
  § Architecture of your model (nodes, edges)
§ Metadata
  § Asset: Accompanying assets to your model
  § SignatureDef: Maps external to internal tensors
§ Variables
  § Stored separately during training (checkpoint)
  § Allows training to continue from any checkpoint
  § Variables are “frozen” into Constants when preparing for inference

[Diagram: MetaGraph = GraphDef (x, W, b, mul, add nodes) + Metadata (Assets, SignatureDef, Tags, Version) + Variables ("W": 0.328, "b": -1.407)]

SAVED MODEL FORMAT
§ Different Format than Traditional Exporter
§ Contains Checkpoints, 1..* MetaGraph’s, and Assets
§ Export Manually with SavedModelBuilder (see the sketch below)
§ Estimator.export_savedmodel()
§ Hooks to Generate SignatureDef
§ Use saved_model_cli to Verify
§ Used by TensorFlow Serving
§ New Standard Export Format? (Catching on Slowly…)
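A minimal sketch of a manual export with SavedModelBuilder; export_dir, sess, x_observed, and y_pred are placeholders from your own training code:

import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants

# export_dir, sess, x_observed, y_pred are assumed to exist already.
builder = saved_model_builder.SavedModelBuilder(export_dir)

signature = signature_def_utils.predict_signature_def(
    inputs={'inputs': x_observed},
    outputs={'outputs': y_pred})

builder.add_meta_graph_and_variables(
    sess,
    tags=[tag_constants.SERVING],
    signature_def_map={'predict': signature})

builder.save()

Verify the export afterwards with: saved_model_cli show --dir <export_dir> --all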

BATCH NORMALIZATION (2015)
§ Each Mini-Batch May Have Wildly Different Distributions
§ Normalize per Batch (and Layer)
§ Faster Training, Learns Quicker
§ Final Model is More Accurate
§ TensorFlow is already on 2nd Generation Batch Algorithm
§ First-Class Support for Fusing Batch Norm Layers
§ Final mean + variance Are Folded Into Graph Later

-- (Almost) Always Use Batch Normalization! --

z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)

# Per-batch mean and variance across the batch dimension
a_mean, a_var = tf.nn.moments(a, [0])

# depth = number of channels/units in this layer
scale = tf.Variable(tf.ones([depth]))
beta  = tf.Variable(tf.zeros([depth]))

bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)

DROPOUT (2014)
§ Training Technique
§ Prevents Overfitting
§ Helps Avoid Local Minima
§ Inherent Ensembling Technique
  § Creates and Combines Different Neural Architectures
§ Expressed as Probability Percentage (ie. 50%)
§ Boost Other Weights During Validation & Prediction (see the sketch below)

[Diagram: Perform Dropout (Training Phase, eg. 50% Dropout) vs. Boost for Dropout (Validation & Prediction Phase, 0% Dropout)]
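A minimal sketch of dropout in TensorFlow. Note that tf.nn.dropout uses “inverted” dropout: it scales the kept activations by 1/keep_prob at training time, so at validation/prediction time you simply feed keep_prob=1.0 instead of boosting weights. The tensor names below are hypothetical.

import tensorflow as tf

# 'hidden' is a hypothetical activation tensor from an earlier layer.
keep_prob = tf.placeholder(tf.float32)        # probability of keeping each unit
dropped = tf.nn.dropout(hidden, keep_prob)    # kept units are scaled by 1/keep_prob

# Training phase: drop 50% of units
#   sess.run(train_op, feed_dict={keep_prob: 0.5, ...})
# Validation & prediction phase: keep everything
#   sess.run(predictions, feed_dict={keep_prob: 1.0, ...})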

EXTEND EXISTING DATA PIPELINES
§ Data Processing
  § HDFS/Hadoop
  § Spark
§ Containers
  § Docker
§ Schedulers
  § Kubernetes
  § Mesos

<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>tensorflow-hadoop</artifactId>
</dependency>

https://github.com/tensorflow/ecosystem

FEED TENSORFLOW TRAINING PIPELINE

§ Training is Limited by the Ingestion Pipeline

§ THE Number One Problem We See Today

§ Scaling GPUs Up / Out Doesn’t Help

§ GPUs are Heavily Under-Utilized

[Chart: GPU utilization, Tesla K80 vs. Volta V100]

DON’T USE FEED_DICT!!
§ feed_dict Requires Python <-> C++ Serialization
§ Not Optimized for Production Ingestion Pipelines
§ Retrieves Next Batch After Current Batch is Done
§ Single-Threaded, Synchronous
§ CPUs/GPUs Not Fully Utilized!
§ Use Queue or Dataset APIs
  § Queues are old & complex

sess.run(train_step, feed_dict={…})

DETECT UNDERUTILIZED CPUS, GPUS

§ Instrument training code to generate “timelines”

§ Analyze with Google Web Tracing Framework (WTF)

§ Monitor CPU with top, GPU with nvidia-smi

http://google.github.io/tracing-framework/

from tensorflow.python.client import timeline

trace = timeline.Timeline(step_stats=run_metadata.step_stats)

with open('timeline.json', 'w') as trace_file:
    trace_file.write(trace.generate_chrome_trace_format(show_memory=True))

QUEUES

§ More than traditional Queue
§ Uses CUDA Streams
§ Perform I/O, pre-processing, cropping, shuffling, …
§ Pull from HDFS, S3, Google Storage, Kafka, ...
§ Combine many small files into large TFRecord files
§ Use CPUs to free GPUs for compute
§ Helps saturate CPUs and GPUs

QUEUE CAPACITY PLANNING
§ batch_size
  § # examples / batch (ie. 64 jpg)
  § Limited by GPU RAM
§ num_processing_threads
  § CPU threads pull and pre-process batches of data
  § Limited by CPU Cores
§ queue_capacity
  § Limited by CPU RAM (ie. 5 * batch_size)

(see the shuffle_batch sketch below)
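A minimal sketch of a queue-based input pipeline built around tf.train.shuffle_batch; filenames and parse_example() are placeholders for your own TFRecord files and pre-processing:

import tensorflow as tf

# filenames and parse_example() are placeholders for your own data pipeline.
filename_queue = tf.train.string_input_producer(filenames)
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
image, label = parse_example(serialized_example)   # hypothetical pre-processing fn

images, labels = tf.train.shuffle_batch(
    [image, label],
    batch_size=64,             # limited by GPU RAM
    num_threads=4,             # CPU threads pulling and pre-processing batches
    capacity=5 * 64,           # queue_capacity, limited by CPU RAM
    min_after_dequeue=2 * 64)  # minimum left in queue to keep shuffling useful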

DATASET API

tf.Tensor => tf.data.Dataset:
  Dataset.from_tensors((features, labels))
  Dataset.from_tensor_slices((features, labels))
  TextLineDataset(filenames)

Functional Transformations:
  dataset.map(lambda x: tf.decode_jpeg(x))
  dataset.repeat(NUM_EPOCHS)
  dataset.batch(BATCH_SIZE)

Python Generator => tf.data.Dataset:
  def generator():
      while True:
          yield ...
  dataset = tf.data.Dataset.from_generator(generator, tf.int32)

Dataset => One-Shot Iterator:
  iter = dataset.make_one_shot_iterator()
  next_element = iter.get_next()
  while …:
      sess.run(next_element)

Dataset => Initializable Iterator:
  iter = dataset.make_initializable_iterator()
  sess.run(iter.initializer, feed_dict=PARAMS)
  next_element = iter.get_next()
  while …:
      sess.run(next_element)

TIP: Use Dataset.prefetch() and the parallel version of Dataset.map() (sketch below)
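Putting the tip into practice, a minimal end-to-end sketch; filenames, parse_fn, NUM_EPOCHS, and BATCH_SIZE are placeholders, and num_parallel_calls on Dataset.map() is available in roughly TensorFlow 1.4+:

import tensorflow as tf

# filenames and parse_fn are placeholders for your own data and pre-processing.
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(parse_fn, num_parallel_calls=4)   # parallel pre-processing on CPU
dataset = dataset.repeat(NUM_EPOCHS)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(1)                           # overlap ingestion with training

iterator = dataset.make_one_shot_iterator()
images, labels = iterator.get_next()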

FUTURE OF DATASET API
§ Replace Queue

§ More Functional Operators

§ Automatic GPU Data Staging

§ Under-utilized GPUs Assisting with Data Ingestion

§ Advanced, RL-based Device Placement Strategies

LET’S FEED DATA WITH A QUEUE
§ Navigate to the following notebook:

02_Feed_Queue_HDFS

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

PULSE CHECK

BREAK Need Help? Use the Chat!

§ Please Star 🌟 this GitHub Repo!

§ All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml

LET’S TRAIN A MODEL (CPU)
§ Navigate to the following notebook:

03_Train_Model_CPU

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

LET’S TRAIN A MODEL (GPU)
§ Navigate to the following notebook:

03a_Train_Model_GPU

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

TENSORFLOW DEBUGGER
§ Step through Operations
§ Inspect Inputs and Outputs
§ Wrap Session in Debug Session

from tensorflow.python import debug as tf_debug

sess = tf.Session(config=config)
sess = tf_debug.LocalCLIDebugWrapperSession(sess)

LET’S DEBUG A MODEL
§ Navigate to the following notebook:

04_Debug_Model

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

AGENDA

Part 1: Optimize TensorFlow Training

§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler

SINGLE NODE, MULTI-GPU TRAINING
§ cpu:0
  § By default, all CPUs
  § Requires extra config to target a CPU
§ gpu:0..n
  § Each GPU has a unique id
  § TF usually prefers a single GPU
§ xla_cpu:0, xla_gpu:0..n
  § “JIT Compiler Device”
  § Hints TensorFlow to attempt JIT Compile

with tf.device("/cpu:0"):
with tf.device("/gpu:0"):
with tf.device("/gpu:1"):

(see the device-placement sketch below)
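A minimal sketch of explicit device placement; the layer sizes and tensor names are illustrative, and allow_soft_placement lets TensorFlow fall back to another device if the requested one is unavailable:

import tensorflow as tf

with tf.device("/cpu:0"):
    x = tf.placeholder(tf.float32, shape=[None, 1024])    # input pipeline stays on CPU

with tf.device("/gpu:0"):
    w0 = tf.Variable(tf.random_normal([1024, 512]))
    h0 = tf.nn.relu(tf.matmul(x, w0))                     # first layer on GPU 0

with tf.device("/gpu:1"):
    w1 = tf.Variable(tf.random_normal([512, 10]))
    logits = tf.matmul(h0, w1)                            # second layer on GPU 1

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                        log_device_placement=True))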

DISTRIBUTED, MULTI-NODE TRAINING
§ TensorFlow Automatically Inserts Send and Receive Ops into Graph
§ Parameter Server Synchronously Aggregates Updates to Variables
§ Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS

[Diagram: scaling from a single node with one or more GPUs to multiple nodes (Worker0, Worker1, Worker2), each with gpu0–gpu3]

DATA PARALLEL VS. MODEL PARALLEL
§ Data Parallel (“Between-Graph Replication”)
  § Send exact same model to each device
  § Each device operates on its partition of data
    § ie. Spark sends same function to many workers
    § Each worker operates on their partition of data
§ Model Parallel (“In-Graph Replication”)
  § Send different partition of model to each device
  § Each device operates on all data
  § Difficult, but required for larger models with lower-memory GPUs

SYNCHRONOUS VS. ASYNCHRONOUS
§ Synchronous
  § Nodes compute gradients
  § Nodes update Parameter Server (PS)
  § Nodes sync on PS for latest gradients
§ Asynchronous
  § Some nodes delay in computing gradients
  § Nodes don’t update PS
  § Nodes get stale gradients from PS
  § May not converge due to stale reads!

(see the SyncReplicasOptimizer sketch below)
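A hedged sketch of synchronous data-parallel training with tf.train.SyncReplicasOptimizer; loss, global_step, num_workers, is_chief, and server are placeholders from your own cluster setup:

import tensorflow as tf

# loss, global_step, num_workers, is_chief, and server are assumed to exist.
base_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

sync_optimizer = tf.train.SyncReplicasOptimizer(
    base_optimizer,
    replicas_to_aggregate=num_workers,   # gradients aggregated before each update
    total_num_replicas=num_workers)

train_op = sync_optimizer.minimize(loss, global_step=global_step)

# Every worker needs this hook; the chief performs extra initialization.
sync_hook = sync_optimizer.make_session_run_hook(is_chief)

with tf.train.MonitoredTrainingSession(master=server.target,
                                       is_chief=is_chief,
                                       hooks=[sync_hook]) as sess:
    while not sess.should_stop():
        sess.run(train_op)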

CHIEF WORKER
§ Chief Defaults to Worker Task 0
  § Task 0 is guaranteed to exist
§ Performs Maintenance Tasks
  § Writes log summaries
  § Instructs PS to checkpoint vars
  § Performs PS health checks
  § (Re-)Initializes variables at (re-)start of training

NODE AND PROCESS FAILURES
§ Checkpoint to Persistent Storage (HDFS, S3)
§ Use MonitoredTrainingSession and Hooks
§ Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos)
§ Understand Failure Modes and Recovery States

Stateless, Not Bad: Training Continues. Stateful, Bad: Training Must Stop. Dios Mio! Long Night Ahead…

ESTIMATOR + EXPERIMENT API
§ Supports Keras!
§ Unified API for Local + Distributed
§ Provide Clear Path to Production
§ Enable Rapid Model Experiments
§ Provide Flexible Parameter Tuning
§ Enable Downstream Optimizing & Serving Infra
§ Nudge Users to Best Practices Through Opinions
§ Provide Hooks/Callbacks to Override Opinions

ESTIMATOR API
§ “Train-to-Serve” Design
§ Create Custom Estimator or Re-Use Canned Estimator
§ Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict)
§ Hooks for All Phases of Model Training and Evaluation
  § Load Input: input_fn()
  § Train: model_fn() and train()
  § Evaluate: eval_fn() and evaluate()
  § Performance Metrics: Loss, Accuracy, …
  § Save and Export: export_savedmodel()
  § Predict: predict() – uses the slow sess.run()

https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/

EXPERIMENT API

§ Easier-to-Use Distributed TensorFlow
§ Same API for Local and Distributed (*Theoretically)
§ Combines Estimator with input_fn()
§ Used for Training, Evaluation, & Hyper-Parameter Tuning
§ Distributed Training Defaults to Data-Parallel & Async
§ Cluster Configuration is Fixed at Start of Training Job
§ No Auto-Scaling Allowed, but That’s OK for Training

ESTIMATOR & EXPERIMENT CONFIGS
§ TF_CONFIG
  § Special environment variable for config
  § Defines ClusterSpec in JSON incl. master, workers, PS’s
  § Distributed mode: '{"environment": "cloud"}'
  § Local mode: '{"environment": "local", "task": {"type": "worker"}}'
§ RunConfig: Defines checkpoint interval, output directory, …
§ HParams: Hyper-parameter tuning parameters and ranges
§ learn_runner creates RunConfig before calling run() & tune()
§ schedule is set based on {"task": {"type": …}}

TF_CONFIG='{"environment": "cloud",
            "cluster": {"master": ["worker0:2222"],
                        "worker": ["worker1:2222"],
                        "ps":     ["ps0:2222"]},
            "task": {"type": "ps", "index": "0"}}'

ESTIMATOR + KERAS
§ Distributed TensorFlow (Estimator) + Easy to Use (Keras)
§ tf.keras.estimator.model_to_estimator()

# Instantiate a Keras InceptionV3 model.
keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None)

# Compile model with the optimizer, loss, and metrics you'd like to train with.
keras_inception_v3.compile(
    optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9),
    loss='categorical_crossentropy',
    metrics=['accuracy'])

# Create an Estimator from the compiled Keras model.
est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3)

# Treat the derived Estimator as you would any other Estimator. For example,
# the following derived Estimator calls the train method:
est_inception_v3.train(input_fn=my_training_set, steps=2000)

“CANNED” ESTIMATORS
§ Commonly-Used Estimators
  § Pre-Tested and Pre-Tuned
  § DNNClassifier, TensorForestEstimator
§ Always Use Canned Estimators If Possible
  § Reduce Lines of Code, Complexity, and Bugs
§ Use FeatureColumn to Define & Create Features

[Chart: Custom vs. Canned Estimator usage @ Google, August 2017]

ESTIMATOR + DATASET

def input_fn():
    def generator():
        while True:
            yield ...

    my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)

    # A one-shot iterator automatically initializes itself on first use.
    iter = my_dataset.make_one_shot_iterator()

    # The return value of get_next() matches the dataset element type.
    images, labels = iter.get_next()
    return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)

estimator.train(input_fn=input_fn, …)

OPTIMIZER, ESTIMATOR API + TPU’S

run_config = tpu_config.RunConfig()

estimator = tpu_estimator.TpuEstimator(model_fn=model_fn, config=run_config)

estimator.train(input_fn=input_fn, num_epochs=10, …)

optimizer = tpu_optimizer.CrossShardOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=…))

train_op = optimizer.minimize(loss)

estimator_spec = tf.estimator.EstimatorSpec(train_op=train_op, loss=…)

MULTIPLE HEADS (OBJECTIVES)
§ Single-Objective Estimator
  § Single classification prediction
§ Multi-Objective Estimator
  § One (1) classification prediction
  § One (1) final layer to feed into next model
§ Multiple Heads Used to Ensemble Models
  § Treats neural network as a feature engineering step
  § Supported by TensorFlow Serving

LAYERS API
§ Standalone Layer or Entire Sub-Graphs
§ Functions of Tensor Inputs & Outputs
§ Mix and Match with Operations
§ Assumes 1st Dimension is Batch Size
§ Handles One (1) to Many (*) Inputs
§ Metrics are Layers
  § Loss Metric (Per Mini-Batch)
  § Accuracy and MSE (Across Mini-Batches)

FEATURE_COLUMN API
§ Used by Canned Estimator
§ Declaratively Specify Training Inputs
§ Converts Sparse to Dense Tensors
  § Sparse Features: Query Keyword, ProductID
  § Dense Features: One-Hot, Multi-Hot
§ Wide/Linear: Use Feature-Crossing
§ Deep: Use Embeddings

FEATURE CROSSING

§ Create New Features by Combining Existing Features
§ Limitation: Combinations Must Exist in Training Dataset

base_columns = [
    education, marital_status, relationship, workclass, occupation, age_buckets
]

crossed_columns = [
    tf.feature_column.crossed_column(
        ['education', 'occupation'], hash_bucket_size=1000),
    tf.feature_column.crossed_column(
        ['age_buckets', 'education', 'occupation'], hash_bucket_size=1000)
]

FEATURE_COLUMN EXAMPLES

§ Continuous + One-Hot + Embedding

deep_columns = [
    age,
    education_num,
    capital_gain,
    capital_loss,
    hours_per_week,
    tf.feature_column.indicator_column(workclass),
    tf.feature_column.indicator_column(education),
    tf.feature_column.indicator_column(marital_status),
    tf.feature_column.indicator_column(relationship),
    # To show an example of embedding
    tf.feature_column.embedding_column(occupation, dimension=8),
]

SEPARATE TRAINING + EVALUATION

§ Separate Training and Evaluation Clusters

§ Evaluate Upon Checkpoint

§ Avoid Resource Contention

§ Training Continues in Parallel with Evaluation

[Diagram: separate Training Cluster, Evaluation Cluster, and Parameter Server Cluster]

LET’S TRAIN DISTRIBUTED TENSORFLOW
§ Navigate to the following notebook:

05_Train_Model_Distributed_CPU or 05a_Train_Model_Distributed_GPU

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

PULSE CHECK

BREAK Need Help? Use the Chat!

§ Please Star 🌟 this GitHub Repo!

§ All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml

AGENDA

Part 1: Optimize TensorFlow Training

§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler

XLA FRAMEWORK

§ XLA: “Accelerated Linear Algebra”
§ Reduce Reliance on Custom Operators
§ Improve Execution Speed
§ Improve Memory Usage
§ Reduce Mobile Footprint
§ Improve Portability

Helps TensorFlow Stay Flexible, Yet Still Performant

XLA HIGH LEVEL OPTIMIZER (HLO)

§ HLO: “High Level Optimizer”
§ Compiler Intermediate Representation (IR)
  § Independent of source and target language
§ XLA Step 1 Emits Target-Independent HLO
§ XLA Step 2 Emits Target-Dependent LLVM
§ LLVM Emits Native Code Specific to Target
§ Supports x86-64, ARM64 (CPU), and NVPTX (GPU)

JIT COMPILER
§ JIT: “Just-In-Time” Compiler
§ Built on XLA Framework
§ Reduce Memory Movement – Especially with GPUs
§ Reduce Overhead of Multiple Function Calls
§ Similar to Spark Operator Fusing in Spark 2.0
§ Unroll Loops, Fuse Operators, Fold Constants, …
§ Scopes: session, device, with jit_scope(): (see the sketch below)
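A hedged sketch of the two common ways to turn on JIT in TensorFlow 1.x; the contrib jit_scope API was experimental at the time, and x, w, b are placeholders from your own graph:

import tensorflow as tf
from tensorflow.contrib.compiler import jit

# Session-level: let XLA JIT-compile eligible subgraphs.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# Scope-level: hint that ops built inside this scope should be JIT-compiled.
with jit.experimental_jit_scope():
    y = tf.matmul(x, w) + b   # x, w, b are placeholders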

VISUALIZING JIT COMPILER IN ACTION

[Timeline screenshots: Before JIT vs. After JIT]

Google Web Tracing Framework:http://google.github.io/tracing-framework/

from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

sess.run(…, options=run_options, run_metadata=run_metadata)

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(trace.generate_chrome_trace_format(show_memory=True))

VISUALIZING FUSING OPERATORS

pip install graphviz

dot -Tpng \
  /tmp/hlo_graph_1.w5LcGs.dot \
  -o hlo_graph_1.png

GraphViz:http://www.graphviz.org

hlo_*.dot files generated by XLA

LET’S TRAIN WITH XLA CPU
§ Navigate to the following notebook:

06_Train_Model_XLA_CPU

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

LET’S TRAIN WITH XLA GPU
§ Navigate to the following notebook:

06a_Train_Model_XLA_GPU

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

AGENDA

Part 0: Introductions and Setup

Part 1: Optimize TensorFlow Training

Part 2: Optimize TensorFlow Serving

WE ARE NOW…

…OPTIMIZING Models AFTER Model Training

TO IMPROVE Model Serving

AGENDA

Part 2: Optimize TensorFlow Serving

§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime

AOT COMPILER
§ Standalone, Ahead-Of-Time (AOT) Compiler
§ Built on XLA framework
§ tfcompile
  § Creates executable with minimal TensorFlow Runtime needed
  § Includes only dependencies needed by subgraph computation
  § Creates functions with feeds (inputs) and fetches (outputs)
  § Packaged as cc_library header and object files to link into your app
§ Commonly used for mobile device inference graph
§ Currently, only CPU x86-64 and ARM are supported - no GPU

GRAPH TRANSFORM TOOL (GTT)

§ Post-Training Optimization to Prepare for Inference
§ Remove Training-only Ops (checkpoint, drop out, logs)
§ Remove Unreachable Nodes between Given feed -> fetch
§ Fuse Adjacent Operators to Improve Memory Bandwidth
§ Fold Final Batch Norm mean and variance into Variables
§ Round Weights/Variables to improve compression (ie. 70%)
§ Quantize (FP32 -> INT8) to Speed Up Math Operations

AFTER TRAINING, BEFORE OPTIMIZATION

[Diagram: the trained graph still contains training-only feeds, fetches, variable-update ops, and tensors that inference does not need]

POST-TRAINING GRAPH TRANSFORMS

transform_graph \
  --in_graph=unoptimized_cpu_graph.pb \    <-- Original Graph
  --out_graph=optimized_cpu_graph.pb \     <-- Transformed Graph
  --inputs='x_observed:0' \                <-- Feed (Input)
  --outputs='Add:0' \                      <-- Fetch (Output)
  --transforms='                           <-- List of Transforms
    strip_unused_nodes
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms
    quantize_weights
    quantize_nodes'

AFTER STRIPPING UNUSED NODES
§ Optimizations
  § strip_unused_nodes
§ Results
  § Graph much simpler
  § File size much smaller

AFTER REMOVING UNUSED NODES
§ Optimizations
  § strip_unused_nodes
  § remove_nodes
§ Results
  § Pesky nodes removed
  § File size a bit smaller

AFTER FOLDING CONSTANTS
§ Optimizations
  § strip_unused_nodes
  § remove_nodes
  § fold_constants
§ Results
  § Placeholders (feeds) -> Variables*
    (*Why Variables and not Constants?)

AFTER FOLDING BATCH NORMS
§ Optimizations
  § strip_unused_nodes
  § remove_nodes
  § fold_constants
  § fold_batch_norms
§ Results
  § Graph remains the same
  § File size approximately the same

AFTER QUANTIZING WEIGHTS
§ Optimizations
  § strip_unused_nodes
  § remove_nodes
  § fold_constants
  § fold_batch_norms
  § quantize_weights
§ Results
  § Graph is same, file size is smaller, compute is faster

WEIGHT QUANTIZATION
§ FP16 and INT8 Are Smaller and Computationally Simpler
§ Weights/Variables are Constants
§ Easy to Linearly Quantize

LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:

07_Optimize_Model

**Why just the CPU version? Why not GPU?

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

BUT WAIT, THERE’S MORE!

ACTIVATION QUANTIZATION
§ Activations Not Known Ahead of Time
  § Depends on input, not easy to quantize
§ Requires Additional Calibration Step
  § Use a “representative” dataset
  § Per Neural Network Layer…
    § Collect histogram of activation values
    § Generate many quantized distributions with different saturation thresholds
    § Choose threshold to minimize KL_divergence(ref_distribution, quant_distribution)
§ Not Much Time or Data is Required (Minutes on Commodity Hardware)

AFTER ACTIVATION QUANTIZATION
§ Optimizations
  § strip_unused_nodes
  § remove_nodes
  § fold_constants
  § fold_batch_norms
  § quantize_weights
  § quantize_nodes (activations)
§ Results
  § Larger graph, needs calibration! Requires an additional freeze_requantization_ranges step

LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:

08_Optimize_Model_Activations

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

FREEZING MODEL FOR DEPLOYMENT
§ Optimizations
  § strip_unused_nodes
  § remove_nodes
  § fold_constants
  § fold_batch_norms
  § quantize_weights
  § quantize_nodes
  § freeze_graph (see the sketch below)
§ Results
  § Variables -> Constants

Finally! We’re Ready to Deploy!!
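For reference, a minimal in-process sketch of the freeze step using graph_util.convert_variables_to_constants; the output node name and file path are placeholders, and the freeze_graph command-line tool performs the same conversion:

import tensorflow as tf
from tensorflow.python.framework import graph_util

# 'Add' is a placeholder for your real output node name(s).
frozen_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ['Add'])

with tf.gfile.GFile('frozen_graph.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())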

AGENDA

Part 2: Optimize TensorFlow Serving

§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime

MODEL SERVING TERMINOLOGY
§ Inference
  § Only Forward Propagation through Network
  § Predict, Classify, Regress, …
§ Bundle
  § GraphDef, Variables, Metadata, …
§ Assets
  § ie. Map of ClassificationID -> String
  § {9283: “penguin”, 9284: “bridge”}
§ Version
  § Every Model Has a Version Number (Integer)
§ Version Policy
  § ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …

TENSORFLOW SERVING FEATURES
§ Supports Auto-Scaling
§ Custom Loaders beyond File-based
§ Tune for Low-latency or High-throughput
§ Serve Diff Models/Versions in Same Process
§ Customize Model Types beyond HashMap and TensorFlow
§ Customize Version Policies for A/B and Bandit Tests
§ Support Request Draining for Graceful Model Updates
§ Enable Request Batching for Diff Use Cases and HW
§ Supports Optimized Transport with GRPC and Protocol Buffers

PREDICTION SERVICE
§ Predict (Original, Generic)
  § Input: List of Tensor
  § Output: List of Tensor
§ Classify
  § Input: List of tf.Example (key, value) pairs
  § Output: List of (class_label: String, score: float)
§ Regress
  § Input: List of tf.Example (key, value) pairs
  § Output: List of (label: String, score: float)

(see the gRPC client sketch below)
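A hedged sketch of a Predict call over gRPC using the 2017-era beta bindings; the host, port, model name, signature name, and tensor keys are placeholders that must match your deployed model:

from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

# Host, port, model name, and tensor names below are placeholders.
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'linear'
request.model_spec.signature_name = 'predict'
request.inputs['inputs'].CopyFrom(
    tf.contrib.util.make_tensor_proto([1.5], dtype=tf.float32))

response = stub.Predict(request, 10.0)   # 10-second timeout
print(response.outputs['outputs'])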

PREDICTION INPUTS + OUTPUTS
§ SignatureDef
  § Defines inputs and outputs
  § Maps external (logical) to internal (physical) tensor names
  § Allows internal (physical) tensor names to change

from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils

graph = tf.get_default_graph()
x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')

inputs_map = {'inputs': x_observed}
outputs_map = {'outputs': y_pred}

predict_signature = signature_def_utils.predict_signature_def(
    inputs=inputs_map,
    outputs=outputs_map)

MULTI-HEADED INFERENCE

§ Inputs Pass Through Model One Time
§ Model Returns Multiple Predictions:
  1. Human-readable prediction (ie. “penguin”, “church”, …)
  2. Final layer of scores (float vector)
§ Final Layer of floats Passes to the Next Model in Ensemble
§ Optimizes Bandwidth, CPU/GPU, Latency, Memory
§ Enables Complex Model Composing and Ensembling

BUILD YOUR OWN MODEL SERVER
§ Adapt GRPC (Google) <-> HTTP (REST of the World)
§ Perform Batch Inference vs. Request/Response
§ Handle Requests Asynchronously
§ Support Mobile, Embedded Inference
§ Customize Request Batching
§ Add Circuit Breakers, Fallbacks
§ Control Latency Requirements
§ Reduce Number of Moving Parts

#include "tensorflow_serving/model_servers/server_core.h"

using tensorflow::serving::ServerCore;

class MyTensorFlowModelServer {
 public:
  void Init() {
    ServerCore::Options options;
    // set options (model name, path, etc)

    std::unique_ptr<ServerCore> core;
    TF_CHECK_OK(ServerCore::Create(std::move(options), &core));
  }
};

Compile and Link with libtensorflow.so

RUNTIME OPTION: NVIDIA TENSORRT
§ Post-Training Model Optimizations
  § Specific to Nvidia GPU
  § Similar to TF Graph Transform Tool
§ GPU-Optimized Prediction Runtime
  § Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!

AGENDA

Part 2: Optimize TensorFlow Serving

§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime

SAVED MODEL FORMAT
§ Navigate to the following notebook:

09_Deploy_Optimized_Model

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

AGENDA

Part 2: Optimize TensorFlow Serving

§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime

REQUEST BATCH TUNING
§ max_batch_size
  § Enables throughput/latency tradeoff
  § Bounded by RAM
§ batch_timeout_micros
  § Defines batch time window, latency upper-bound
  § Bounded by RAM
§ num_batch_threads
  § Defines parallelism
  § Bounded by CPU cores
§ max_enqueued_batches
  § Defines queue upper bound, throttling
  § Bounded by RAM

Reaching either threshold (max_batch_size or batch_timeout_micros) will trigger a batch. (See the config sketch below.)
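These knobs map to the batching parameters file that can be passed to the TensorFlow Serving model server (--enable_batching plus --batching_parameters_file); the values below are purely illustrative, not recommendations:

max_batch_size { value: 128 }
batch_timeout_micros { value: 1000 }
num_batch_threads { value: 8 }
max_enqueued_batches { value: 64 }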

ADVANCED BATCHING & SERVING TIPS
§ Batch Just the GPU/TPU Portions of the Computation Graph
§ Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops
§ Distribute Large Models Into Shards Across TensorFlow Model Servers
§ Batch RNNs Used for Sequential and Time-Series Data
§ Find Best Batching Strategy For Your Data Through Experimentation
  § BasicBatchScheduler: Homogeneous requests (ie. Regress or Classify)
  § SharedBatchScheduler: Mixed requests, multi-step, ensemble predict
  § StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads
§ Serve Only One (1) Model Inside One (1) TensorFlow Serving Process
  § Much Easier to Debug, Tune, Scale, and Manage Models in Production

LET’S DEPLOY OPTIMIZED MODEL
§ Navigate to the following notebook:

10_Optimize_Model_Server

§ https://github.com/PipelineAI/pipeline/tree/master/gpu.ml/notebooks

AGENDA

Part 0: Introductions and Setup

Part 1: Optimize TensorFlow Training

Part 2: Optimize TensorFlow Serving

THANK YOU! QUESTIONS?

§ https://github.com/PipelineAI/pipeline/

§ Please Star 🌟 this GitHub Repo!

§ All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml

Contact Me: chris@pipeline.ai

@cfregly