
Hao Chen (陈浩), Vision Lab

Nanjing University
[email protected]

http://vision.nju.edu.cn

Learning for Networking

2021/4/7

Last Time

• Media system standards
  • HTTP Live Streaming
  • Dynamic Adaptive Streaming over HTTP
  • MPEG Media Transport
• Adaptive streaming algorithms

This Time

• Last time: media system standards
• This time: learning for networking
  • Learning-based caching
  • Learning-based routing
  • Learning-based bitrate adaptation

Materials

• Hung-yi Lee's Machine Learning 2017 course (李宏毅机器学习2017)
  • https://www.bilibili.com/video/av10590361?from=search&seid=4930602846193099298
  • https://datawhalechina.github.io/leeml-notes/#/?id=%e6%9d%8e%e5%ae%8f%e6%af%85%e6%9c%ba%e5%99%a8%e5%ad%a6%e4%b9%a0%e7%ac%94%e8%ae%b0leeml-notes
• MIT Pensieve
  • http://web.mit.edu/pensieve/
• Xavier Amatriain, lecture at the Machine Learning Summer School 2014, Carnegie Mellon University
  • https://cirocavani.wordpress.com/2014/08/06/recommender-systems-machine-learning-summer-school-2014-cmu/

Introduction to Learning Methods


Why do we need machine learning?

• Data-driven
  • Big data
• Excellent performance
  • CNN for computer vision (CV)
  • DRL for robotics
  • RNN for NLP

How to Learn?

• Problem formulation
  • Prediction
  • Regression
  • Clustering
  • Decision making
  • ...

Problem Formulation

• Classification

Problem Formulation

• Regression
  • In a regression problem, we aim to predict a continuous value, such as a price or a probability.
  • Contrast this with a classification problem, where we aim to predict a discrete label (for example, whether a picture contains an apple or an orange).

Problem Formulation

• Clustering

Demo: https://www.naftaliharris.com/blog/visualizing-k-means-clustering/
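To complement the demo, here is a minimal k-means sketch in Python/NumPy; the two-blob toy data and k = 2 are made up purely for illustration.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids[None], axis=2), axis=1)
        # move each centroid to the mean of its assigned points (keep it if the cluster is empty)
        centroids = np.array([points[labels == c].mean(axis=0) if np.any(labels == c)
                              else centroids[c] for c in range(k)])
    return labels, centroids

# toy data: two well-separated blobs
points = np.vstack([np.random.randn(50, 2) + [0, 0], np.random.randn(50, 2) + [5, 5]])
labels, centroids = kmeans(points, k=2)
```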

Brief Introduction of NN

Ref: https://www.slideshare.net/tw_dsconf/ss-62245351

Brief Introduction of NN

• Different connections lead to different network structures

Brief Introduction of NN

• Fully Connected Feedforward Network (a minimal forward pass is sketched below)
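To make the fully connected feedforward picture concrete, here is a minimal NumPy forward pass; the layer sizes and the ReLU/softmax choices are illustrative assumptions, not taken from the slide figures.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(x, weights, biases):
    """Fully connected feedforward pass: each layer computes activation(W x + b)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)                            # hidden layers
    return softmax(weights[-1] @ x + biases[-1])       # output layer as class probabilities

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                                   # input, two hidden layers, output
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.normal(size=4), weights, biases))    # three probabilities summing to 1
```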


Brief Introduction of NN

• Learning is not only deep fully connected networks!

Brief Introduction of NN

• Reinforcement Learning

Learning-based Caching


Learning-based Caching

• Basic knowledge about caching
  • Purpose
    • Service provider
    • User
  • Policy
    • Admission policy
    • Eviction policy
  • Methods
    • Recency-based
    • Frequency-based
    • Size-based
    • Hybrid

Purpose

• For the service provider
  • Traffic consumption
• For the user
  • Access time
  • QoE (Quality of Experience)

Policy

• What to admit?
  • Most approaches ignore what to admit
  • Size-aware
  • Frequency-based
• What to evict?
  • Recency
  • Frequency

Methods Review

• Three typical approaches (a minimal LRU sketch follows below)
  • FIFO (First In, First Out)
  • LRU (Least Recently Used)
  • LFU (Least Frequently Used)
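As referenced above, here is a minimal LRU sketch in Python (FIFO would instead evict in insertion order, and LFU would evict the least-frequently-accessed key); the capacity and keys are arbitrary.

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: evict the entry that was used longest ago."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)                  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)           # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
print(list(cache.store))                             # ['a', 'c'] -- 'b' was evicted
```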

Methods Review

Ref: D. S. Berger, R. K. Sitaraman, and M. Harchol-Balter, “AdaptSize: Orchestrating the hot object memory cache in a content delivery network,” in NSDI, 2017, pp. 483–498.

Overview of DeepCache


DeepCache

• Learning method: sequence-to-sequence (seq2seq)
  • NLP (Natural Language Processing)
  • Machine translation

DeepCache

• Predictor (a minimal sketch follows below)
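As referenced above, here is a minimal PyTorch sketch in the spirit of DeepCache's seq2seq predictor: an LSTM over past object requests predicting the next request. The embedding, layer sizes, and output head are assumptions; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SeqPopularityPredictor(nn.Module):
    """Sketch of a sequence model over past object requests (hypothetical architecture)."""
    def __init__(self, num_objects: int, embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_objects, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_objects)   # a score for every object

    def forward(self, request_ids):                      # request_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(request_ids))
        return self.head(h[:, -1])                       # logits for the next request

# toy usage: predict the next requested object from a window of 20 past requests
model = SeqPopularityPredictor(num_objects=1000)
past = torch.randint(0, 1000, (1, 20))
probs = torch.softmax(model(past), dim=-1)               # predicted request probabilities
```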

DeepCache

• Deployment
  • How to deploy?

Recommender-System-Based Caching

• Caching on an edge server is much more challenging than caching on a core server
  • The local request distribution is highly fluctuating and unreliable because of the lower degree of multiplexing
  • The temporal content request patterns are more sensitive to individual users' content consumption behaviors
  • Edge servers have much less caching and computation resources

Recommender System


The Recommender Problem

• Estimate a utility function that automatically predicts how much a user will like an item
• Based on:
  • Past behavior
  • Relations to other users
  • Item similarity
  • Context
  • ...

Approaches to Recommendation

• Collaborative filtering: recommend items based only on the user's past behavior
  • User-based: find users similar to me and recommend what they liked
  • Item-based: find items similar to those that I have previously liked
• Content-based: recommend based on item features
• Personalized learning to rank: treat recommendation as a ranking problem
• Demographic: recommend based on user features
• Social recommendations (trust-based)
• Hybrid: combine any of the above

Methods


Recommendation Algorithm

• Matrix Factorization

MF-Based Caching

1. Calculate the group–content rating matrix R (sparse)
2. Run MF on R (see the sketch below)
3. Reconstruct the rating matrix R (dense)
4. Update the cache list
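Step 2 can be sketched as a plain NumPy SGD matrix factorization over the observed entries of R. The rank, learning rate, regularization, and the toy rating matrix are assumptions for illustration.

```python
import numpy as np

def matrix_factorization(R, k=8, steps=200, lr=0.01, reg=0.02, seed=0):
    """Factor a sparse group-content rating matrix R (0 = unobserved) as U @ V.T."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(R.shape[0], k))
    V = rng.normal(scale=0.1, size=(R.shape[1], k))
    rows, cols = np.nonzero(R)                           # observed entries only
    for _ in range(steps):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            ui = U[i].copy()
            U[i] += lr * (err * V[j] - reg * U[i])       # SGD step on the group factors
            V[j] += lr * (err * ui - reg * V[j])         # SGD step on the content factors
    return U @ V.T                                       # dense reconstructed ratings

# hypothetical usage: cache the top-10 contents by reconstructed rating for group 0
R = np.zeros((4, 20)); R[0, :5] = [5, 3, 0, 1, 4]
dense = matrix_factorization(R)
cache_list = np.argsort(-dense[0])[:10]
```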

Collaborative Filtering


Example of Collaborative Filtering

• Collaborative filtering method
  • KNN-based dynamic caching strategy
  • KNN with K = 1
• Similarity measures
  • Euclidean distance
  • Pearson correlation
  • Cosine similarity
  • Jaccard similarity

CF-based Caching

1. Generate the group similarity matrix
2. Get the recent request matrix
3. Score the contents in the cache (see the sketch below)
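A minimal sketch of the KNN-based scoring, under the assumption that each row of the recent request matrix holds one group's request counts; cosine similarity and K = 1 follow the slide, everything else is illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def knn_cache_scores(requests, target_group, k=1):
    """Score contents for one group from its most similar groups' recent requests."""
    sims = np.array([cosine_sim(requests[target_group], requests[g])
                     for g in range(requests.shape[0])])
    sims[target_group] = -np.inf                      # exclude the group itself
    neighbors = np.argsort(-sims)[:k]                 # K most similar groups (K = 1 on the slide)
    return requests[neighbors].sum(axis=0)            # higher score -> more likely to be requested

# keep the top-10 scored contents in the edge cache (toy request counts)
scores = knn_cache_scores(np.random.poisson(1.0, size=(5, 100)), target_group=0, k=1)
cache = np.argsort(-scores)[:10]
```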

Summary

• DeepCache vs. the recommendation algorithm (KnnDyn)
• Similarities
  • Given all the information obtained globally or locally in all previous time slots, estimate the probability that certain content is requested by a certain user/group in the next time slot
  • Determine the priority of each file
  • Always evict the file with the lowest priority
• Differences
  • DeepCache focuses on the history of a single edge server
  • Recommendation approaches focus on global information

Learning-based Routing


Routing

• Routing table
  • The Network Destination and Netmask columns together describe the network ID. For example, destination 192.168.0.0 and netmask 255.255.255.0 can be written as the network ID 192.168.0.0/24.
  • The Gateway column contains the same information as the next hop, i.e. it points to the gateway through which the network can be reached.
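A small worked example of the network-ID and lookup logic described above, using Python's standard ipaddress module; the two-entry routing table is hypothetical.

```python
import ipaddress

# The network ID is the destination ANDed with the netmask; /24 means a 24-bit mask.
net = ipaddress.ip_network("192.168.0.0/255.255.255.0")
print(net)                                   # 192.168.0.0/24

# Longest-prefix match: pick the most specific route whose network contains the address.
routes = {"0.0.0.0/0": "192.168.0.1", "192.168.0.0/24": "on-link"}   # hypothetical table
dst = ipaddress.ip_address("192.168.0.42")
match = max((ipaddress.ip_network(p) for p in routes if dst in ipaddress.ip_network(p)),
            key=lambda n: n.prefixlen)
print(routes[str(match)])                    # on-link
```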

Routing

• Routing table
  • The Interface indicates which locally available interface is responsible for reaching the gateway. In this example, gateway 192.168.0.1 (the internet router) can be reached through the local network card with address 192.168.0.100.
  • Finally, the Metric indicates the associated cost of using the indicated route.

Methods of Routing

• Fixed routing
  • A route is selected for each source–destination pair of nodes in the network
  • No difference between routing for datagrams and virtual circuits
  • Simplicity, but lack of flexibility
  • Refinement: supply the nodes with an alternate next node for each destination

Methods of Routing

• Flooding
  • A packet is sent by a source node to every one of its neighbors
  • At each node, an incoming packet is retransmitted on all outgoing links except the link on which it arrived
  • A hop-count field deals with duplicate copies of a packet
  • Properties
    • All possible routes between source and destination are tried
    • At least one copy of the packet arriving at the destination will have taken a minimum-hop route
    • All nodes connected to the source node are visited

Methods of Routing

• Random routing
  • Selects only one outgoing path for retransmission of an incoming packet
  • Assign a probability to each outgoing link and select the link based on that probability
• Adaptive routing
  • Routing decisions change as conditions on the network change
    • Failure
    • Congestion

Methods of Routing

• Adaptive routing
  • State of the network must be exchanged among the nodes
  • The routing decision is more complex
  • Introduces state-information traffic into the network
  • Reacting too quickly will cause congestion-producing oscillation
  • Reacting too slowly makes the strategy irrelevant

Methods of Routing

• Dijkstra's Algorithm (a sketch follows below)
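A compact Dijkstra sketch in Python using a binary heap; the toy topology and link costs are made up for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps node -> [(neighbor, cost), ...]."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                                  # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# toy topology with hypothetical link costs
graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A"))                           # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```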

Methods of Routing

• Reduced graph


Learning Based Routing Structure

• Inputs: network metrics
  • Delay
  • Bandwidth/throughput
  • In-stack packets
• Models
  • FCN
  • LSTM/GRU
  • DRL
• Outputs
  • Whole path
  • Next hop (a minimal next-hop model is sketched below)
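As referenced above, a hypothetical sketch of the supervised variant: a small fully connected model mapping per-link metrics (delay, throughput, queued packets) to a next-hop choice. The feature layout and layer sizes are assumptions, not taken from any specific paper.

```python
import torch
import torch.nn as nn

class NextHopNet(nn.Module):
    """Fully connected model: per-link metrics in, one logit per candidate next hop out."""
    def __init__(self, num_links: int, num_neighbors: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * num_links, 64), nn.ReLU(),
            nn.Linear(64, num_neighbors),
        )

    def forward(self, metrics):
        return self.net(metrics)

model = NextHopNet(num_links=8, num_neighbors=4)
metrics = torch.rand(1, 24)                          # [delay | throughput | queue] per link
next_hop = model(metrics).argmax(dim=-1)             # pick the highest-scoring neighbor
```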

Methods of Routing

• Learning-based routing system

Learning-Based Routing Workflow

• Initial phase: obtain the relevant data for training
  1. Simulate with OSPF
  2. Use available datasets
• Training phase
  • Greedy layer-wise training
  • Backpropagation to fine-tune
• Running phase

Simple example


Learning-based Rate Adaptation


Users start leaving if video doesn't play in 2 seconds

Source: https://gigaom.com/2012/11/09/online-viewers-start-leaving-if-video-doesnt-play-in-2-seconds-says-study/ (video: La Luna, Pixar 2011)

Networked Video

[Figure: a video client requests the next video chunk at bitrate r from a video server and receives the video content; a throughput-vs-time trace shows the varying network conditions the client observes]

Animation borrowed from Te-Yuan Huang (SIGCOMM '14): http://conferences.sigcomm.org/sigcomm/2014/doc/slides/38.pdf

Adaptive Bitrate (ABR) Algorithms

• In Dynamic Adaptive Streaming over HTTP (DASH), the ABR algorithm picks the bitrate of each chunk (1 sec of video content)

Previous Fixed ABR Algorithms

• Rate-based: pick bitrate based on predicted throughput
  • FESTIVE [CoNEXT '12], PANDA [JSAC '14], CS2P [SIGCOMM '16]
• Buffer-based: pick bitrate based on buffer occupancy
  • BBA [SIGCOMM '14], BOLA [INFOCOM '16]
• Hybrid: use both throughput prediction and buffer occupancy
  • PBA [HotMobile '15], MPC [SIGCOMM '15]

Simplified, inaccurate models lead to suboptimal performance.
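To make the rate-based and buffer-based families concrete, here are two minimal pickers over an assumed bitrate ladder; the safety factor, reservoir, and cushion values are illustrative, not those of any cited algorithm.

```python
BITRATES_KBPS = [300, 750, 1200, 1850, 2850, 4300]    # hypothetical ladder

def rate_based(predicted_throughput_kbps, safety=0.9):
    # pick the highest bitrate safely below the predicted throughput
    feasible = [b for b in BITRATES_KBPS if b <= safety * predicted_throughput_kbps]
    return max(feasible) if feasible else BITRATES_KBPS[0]

def buffer_based(buffer_sec, reservoir=5.0, cushion=10.0):
    # map buffer occupancy linearly onto the ladder between reservoir and reservoir + cushion
    frac = min(max((buffer_sec - reservoir) / cushion, 0.0), 1.0)
    return BITRATES_KBPS[int(frac * (len(BITRATES_KBPS) - 1))]

print(rate_based(2000), buffer_based(12.0))           # 1200 1850
```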

Example: Model Predictive Control

• Over a look-ahead horizon from t to t + T: maximize QoE(t, t + T) subject to system dynamics, given the predicted throughput and the candidate video bitrates
• Problem: needs an accurate throughput model; in practice, a conservative throughput prediction
• Solution: learn from video streaming sessions in actual network conditions

[Figure: throughput (Mbps), bitrate (Mbps), and buffer size (sec) traces]
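A brute-force sketch of the receding-horizon idea: enumerate bitrate plans over a short horizon under a single (assumed constant) throughput prediction, score them with a simple QoE function (bitrate minus rebuffering minus smoothness terms, with assumed weights), and apply only the first choice. This is not the MPC paper's implementation.

```python
from itertools import product

BITRATES = [300, 750, 1200, 1850, 2850, 4300]         # kbps, hypothetical ladder
CHUNK_SEC = 1.0

def qoe(bitrate, rebuffer, last_bitrate, mu=4.3, lam=1.0):
    # QoE = bitrate utility - rebuffering penalty - smoothness penalty (assumed weights)
    return bitrate / 1000 - mu * rebuffer - lam * abs(bitrate - last_bitrate) / 1000

def mpc_choose(buffer_sec, last_bitrate, predicted_kbps, horizon=3):
    """Try every bitrate sequence over the horizon; keep the first action of the best plan."""
    best, best_action = float("-inf"), BITRATES[0]
    for plan in product(BITRATES, repeat=horizon):
        buf, prev, total = buffer_sec, last_bitrate, 0.0
        for b in plan:
            download = b * CHUNK_SEC / predicted_kbps          # time to fetch the chunk
            rebuffer = max(download - buf, 0.0)
            buf = max(buf - download, 0.0) + CHUNK_SEC
            total += qoe(b, rebuffer, prev)
            prev = b
        if total > best:
            best, best_action = total, plan[0]
    return best_action

print(mpc_choose(buffer_sec=8.0, last_bitrate=1200, predicted_kbps=2000))
```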

Reinforcement Learning

• Goal: maximize the cumulative reward

[Figure: the agent observes the state of the environment, takes an action, and receives a reward]
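The agent–environment loop can be written down in a few lines; the toy environment and random agent below are stand-ins just to make the state/action/reward cycle concrete.

```python
import random

class ToyEnv:
    """Toy environment: ten steps, reward 1 whenever action 1 is taken."""
    def reset(self):
        self.steps = 0
        return 0.0                                    # initial state

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == 1 else 0.0
        return float(self.steps), reward, self.steps >= 10

class RandomAgent:
    def act(self, state):
        return random.choice([0, 1])                  # observe state, take action

def run_episode(env, agent):
    state, total, done = env.reset(), 0.0, False
    while not done:
        action = agent.act(state)
        state, reward, done = env.step(action)        # environment returns reward + next state
        total += reward                               # goal: maximize cumulative reward
    return total

print(run_episode(ToyEnv(), RandomAgent()))
```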

Pensieve Design

• State s_t (the agent's observation of the environment):
  • Past chunk throughput: x_t, x_{t-1}, ..., x_{t-k+1}
  • Past chunk download time: τ_t, τ_{t-1}, ..., τ_{t-k+1}
  • Next chunk sizes: n_1, n_2, ..., n_m
  • Current buffer size: b_t
  • Remaining chunks: c_t
  • Last chunk bitrate: l_t
• Agent: 1D-CNN layers extract features from the state and feed the policy
• Action a_t: the bitrate of the next chunk (e.g. 240P, 360P, 720P, 1080P)
• Reward r_t = q(b_t) − μ·T_t − λ·|q(b_t) − q(b_{t−1})|, i.e. + (bitrate) − (rebuffering) − (smoothness), where b_t here denotes the bitrate of chunk t and T_t its rebuffering time
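A minimal PyTorch sketch of a Pensieve-style actor: 1D-CNNs over the sequence inputs (past throughputs, past download times, next chunk sizes) plus the scalar inputs, ending in a softmax over bitrates. The kernel sizes, channel counts, and layer widths are assumptions; see the Pensieve paper and code for the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ABRActor(nn.Module):
    def __init__(self, k=8, m=6, num_bitrates=6):
        super().__init__()
        self.conv_tp = nn.Conv1d(1, 16, kernel_size=4)   # past chunk throughputs (length k)
        self.conv_dl = nn.Conv1d(1, 16, kernel_size=4)   # past chunk download times (length k)
        self.conv_sz = nn.Conv1d(1, 16, kernel_size=4)   # next chunk sizes (length m)
        flat = 16 * (k - 3) * 2 + 16 * (m - 3) + 3       # + buffer, remaining chunks, last bitrate
        self.fc = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, num_bitrates))

    def forward(self, tp, dl, sz, scalars):              # scalars: (batch, 3)
        feats = [torch.relu(self.conv_tp(tp)).flatten(1),
                 torch.relu(self.conv_dl(dl)).flatten(1),
                 torch.relu(self.conv_sz(sz)).flatten(1),
                 scalars]
        return torch.softmax(self.fc(torch.cat(feats, dim=1)), dim=-1)   # policy over bitrates

actor = ABRActor()
probs = actor(torch.rand(1, 1, 8), torch.rand(1, 1, 8), torch.rand(1, 1, 6), torch.rand(1, 3))
```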

How to Train the ABR Agent

• The ABR agent is a neural network with parameters θ: it observes state s and takes action a (the next bitrate, e.g. 240P, 480P, 720P, 1080P) according to the policy πθ(s, a)
• The parameters θ are estimated from empirical data
• Training: collect experience data, i.e. trajectories of [state, action, reward]
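A simplified policy-gradient (REINFORCE-style) update from one collected [state, action, reward] trajectory; Pensieve itself trains with A3C (an actor-critic method), so this only illustrates the basic idea, and the toy linear policy and shapes are made up.

```python
import torch
import torch.nn as nn

def policy_gradient_step(policy, optimizer, states, actions, rewards, gamma=0.99):
    """One gradient step that increases the log-probability of high-return actions."""
    returns, g = [], 0.0
    for r in reversed(rewards):                       # discounted cumulative reward
        g = r + gamma * g
        returns.insert(0, g)
    loss = torch.tensor(0.0)
    for s, a, ret in zip(states, actions, returns):
        probs = torch.softmax(policy(s), dim=-1)      # action distribution for state s
        loss = loss - torch.log(probs[a]) * ret       # ascend the expected return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# toy usage: a linear policy over a 5-dimensional state and 4 bitrates
policy = nn.Linear(5, 4)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
states = [torch.rand(5) for _ in range(3)]
policy_gradient_step(policy, opt, states, actions=[0, 2, 1], rewards=[1.0, -0.5, 2.0])
```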

What Pensieve is good at

• Learn the dynamics directly from experience
• Optimize the high-level QoE objective end-to-end
• Extract control rules from raw high-dimensional signals

Pensieve Training System

[Figure: multiple Pensieve workers run video playback in a fast chunk-level simulator over a large corpus of network traces (cellular, broadband, synthetic); the workers send {state, action, reward} experiences to a Pensieve master, which computes model updates in TensorFlow and returns updated neural network parameters]

Demo

[Figure: demo comparing Pensieve and MPC on the same throughput (Mbps) trace, plotting each algorithm's buffer (sec) over time; annotations mark MPC rebuffering and its chances of outage]

Lessons We Learned

1. Build a fast experimentation/simulation platform (Pensieve agent + coarse-grain chunk simulator)
2. Data diversity is more important than "accuracy"
3. Think carefully about the controller state space (observation signals)
   • Too large a state space → slow and difficult learning
   • Too small a state space → loss of information
   • When in doubt, include rather than cut the signal

References

• D. S. Berger, R. K. Sitaraman, and M. Harchol-Balter, “AdaptSize: Orchestrating the hot object memory cache in a content delivery network,” in NSDI, 2017, pp. 483–498.
• A. Narayanan, S. Verma, E. Ramadan, et al., “DeepCache: A deep learning based framework for content caching,” in Proceedings of the 2018 Workshop on Network Meets AI & ML, ACM, 2018, pp. 48–53.
• G. Li, Q. Shen, Y. Liu, et al., “Data-driven approaches to edge caching,” in Proceedings of the 2018 Workshop on Networking for Emerging Applications and Technologies, ACM, 2018, pp. 8–14.
• B. Mao, Z. M. Fadlullah, F. Tang, et al., “Routing or computing? The paradigm shift towards intelligent computer network packet transmission based on deep learning,” IEEE Transactions on Computers, vol. 66, no. 11, pp. 1946–1960, 2017.
• H. Mao, R. Netravali, and M. Alizadeh, “Neural adaptive video streaming with Pensieve,” in Proceedings of the Conference of the ACM Special Interest Group on Data Communication (SIGCOMM), ACM, 2017, pp. 197–210.


Thank you! Q & A!