
Jon Turner
Applied Research Lab, Computer Science & Engineering, Washington University
www.arl.wustl.edu

Supercharged Planetlab Platform
GENI Experimenters' Workshop – 6/2010

2

Shameless Plug

This talk is just a teaser. To really learn how to use the SPPs, come to our tutorial at GEC-8 (Thursday, 7/22)
» more detailed presentations
» live demonstrations
» supervised hands-on session

If you can't do that
» read the online tutorial at wiki.arl.wustl.edu/spp
» get an account at spp_plc.arl.wustl.edu
» start playing

3

SPP Nodes

SPP is a high-performance overlay hosting platform
Designed to be largely compatible with PlanetLab

How it's different from PlanetLab
» multiple processors, including an NP-blade for slice fastpaths
» multiple 1 GbE interfaces
» support for advance reservation of interface bandwidth and NP resources

[Figure: SPP node – a chassis switch connects the Line Card (10x1 GbE), CP, GPEs, NPE, netFPGA, and an external switch]

4

SPP Deployment in Internet2

[Figure: map of SPP node locations in the Internet2 backbone]

Two more nodes (Houston and Atlanta) later this year

5

Washington DC Installation

6

Details

[Figure: deployment details – SPP nodes SALT, KANS, and WASH connect to I2 routers via I2 optical connections, alongside ProtoGENI equipment; SPP_PLC (spp_plc.arl.wustl.edu) is reached over the I2 internet service]

7

Hosting Platform Details

[Figure: hosting platform details – the chassis switch connects Line Card, GPEs, NPE, netFPGA, and CP; the General Purpose Processing Engine runs PLOS hosting multiple VMs, with filters; the Network Processing Engine and Line Card implement a parse/lookup/header-format pipeline with queues]

8

Key Control Software Components

System Resource Manager (SRM)
» runs on Control Processor
» retrieves slice definitions from SPP-PLC
» manages all system-level resources and reservations

Resource Management Proxy (RMP)
» runs on GPEs (in root vServer)
» provides API for user slices to access resources

Slice Configuration tool (scfg)
» command-line interface to RMP

Substrate Control Daemon (SCD)
» runs on NPE management processor
» supports NPE resource configuration, statistics reporting

9

Application Framework

Fastpath/slowpath
» fastpath mapped onto NPE
» control daemon in vServer on GPE

Configurable elements
» code option – determines how packets are processed by parse, header format
» fastpath interfaces
  • map to physical interface
  • provisioned bandwidth
» TCAM filters
» queues
  • length, bandwidth

Control daemon can configure fastpath through RMP
» or users can configure manually with scfg

[Figure: fastpath structure – packets flow from input interfaces through parse, lookup (filters), header format, and queue manager to output interfaces; exception packets & in-band control pass between the fastpath and the control daemon (in a vServer on the GPE), which exposes a control interface and a remote login interface]

10

Working with SPPs

Define new slice using SPP-PLC
» just like PlanetLab

Log in to SPPs in slice to install application code

Reserve resources needed for your experiment
» includes interface bandwidth on external ports and NPE fastpath resources

To run experiment during reserved period
» "claim" reserved resources
» set up slowpath endpoints
» configure fastpath (if applicable)
» set up real-time monitoring
» run application and start traffic generators
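As a rough end-to-end sketch, the shell fragment below strings these steps together using commands that appear later in this deck (slides 26 and 27); the addresses, port numbers, and bandwidth values are the ones from the demo scripts, not required values.

#! /bin/bash
# Sketch: one experiment's lifecycle on an SPP, using commands from slides 26-27
scfg --cmd make_resrv --xfile res_file.xml   # reserve interface bw / NPE resources
# ... later, during the reserved period ...
scfg --cmd claim_resources                   # claim the reserved resources
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123
sliced --ip 64.57.23.182 &                   # export statistics for SPPmon
# now start the application and traffic generators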

11

Web Site and SPP_PLC
http://wiki.arl.wustl.edu/index.php/Internet_Scale_Overlay_Hosting
http://wiki.arl.wustl.edu/index.php/The_SPP_Tutorial

spp_plc.arl.wustl.edu

12

Creating a Slice

[Figure: SPP_PLC and the chassis diagram – the SRM on the node's Control Processor retrieves the slice definition from SPP_PLC]

13

Preparing a Slice

[Figure: chassis diagram with SLM on a GPE]

Logging into slice
» SFTP connection for downloading code
» requires Network Address Translation: datapath detects the new connection, control processor adds LC filters

14

Configuring a Slowpath Endpoint

[Figure: chassis diagram – RMP on a GPE, SRM on the CP, filters on the LC]

● Request endpoint with desired port number on a specific interface through the Resource Manager Proxy (RMP)
● RMP relays request to the System Resource Manager (SRM)
● SRM configures LC filters for the interface
● Arriving packets are directed to the slice, which is listening on a socket
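In scfg terms, a slowpath endpoint request is a single command; the line below uses the option spellings and values from the setup script on slide 27 (a UDP endpoint, protocol 17), so treat the specific numbers as demo values rather than requirements.

# bind a slowpath endpoint (UDP, proto 17) on one interface
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123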

15

Setting Up a Fast Path

● Request fastpath through RMP
● SRM allocates fastpath
● Specify logical interfaces and interface bandwidths
● Specify # of filters, queues, binding of queues to interfaces, queue lengths and bandwidths
● Configure fastpath filters
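A plausible scfg sequence for these steps is sketched below; setup_fp_tunnel is a command name from the summary on slide 17, but the option names here are assumptions, and in practice filter and queue details are set with ip_fpc (for the IPv4 fastpath) or through the control daemon.

# Hypothetical sketch: setup_fp_tunnel is a real command name (slide 17),
# but these option names and values are assumed, not documented syntax
scfg --cmd setup_fp_tunnel --bw 10000 --ipaddr 64.57.23.186 --port 30130
# filters, queue lengths, and queue-to-interface bindings are then
# configured (e.g. with ip_fpc for an IPv4 fastpath)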

[Figure: chassis diagram – RMP on a GPE, SRM on the CP; the NPE fastpath pipeline (parse, lookup, header format, queues)]

16

Displaying Real-Time Data

Fastpath maintains traffic counters and queue lengths

To display traffic data
» configure an external TCP port
» run sliced within your vServer, using configured port
  • sliced --ip 64.57.23.194 --port 3552 &
» on remote machine, run SPPmon.jar
  • use provided GUI to set up monitoring displays
  • SPPmon configures sliced, which polls the NPE running the fastpath
  • SCD-N reads and returns counter values to sliced
  • can also display data written to file within your vServer
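Concretely, a monitoring session comes down to two commands: the sliced invocation from this slide, and launching the GUI on the remote machine. The java line below is an assumed way to start SPPmon.jar, not documented syntax.

# on the SPP, inside your vServer (from this slide):
sliced --ip 64.57.23.194 --port 3552 &
# on the remote machine (invocation assumed for the SPPmon.jar GUI):
java -jar SPPmon.jar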

[Figure: monitoring path – SPPmon on a remote machine talks to sliced in the slice's vServer on a GPE; sliced polls the SCD on the NPE for counter values]

17

Command Line Tools

scfg – general slice configuration tool
» scfg --cmd make_resrv (cancel_resrv, get_resrvs, ...)
» scfg --cmd claim_resources
» scfg --info get_ifaces (...)
» scfg --cmd setup_sp_endpoint (setup_fp_tunnel, ...)
» same features (and more) available through C/C++ API

sliced – monitor traffic and display remotely

ip_fpd – fastpath daemon for use with the IPv4 fastpath
» handles exception packets (e.g. ICMP ping)
» can extend to add features (e.g. bw reservation option)

ip_fpc – fastpath configuration tool for IPv4 fastpath
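For example, managing a reservation from the command line might look like the lines below; make_resrv with --xfile matches the script on slide 26, while get_resrvs and cancel_resrv are listed above but their argument syntax (if any) is an assumption here.

scfg --cmd make_resrv --xfile res_file.xml   # create reservation from XML spec
scfg --cmd get_resrvs                        # list current reservations
scfg --cmd cancel_resrv                      # cancel (argument syntax assumed)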

18

Forest Overlay Network

Overlay for real-time distributed applications
» large online virtual worlds
» distributed cyber-physical systems

Large distributed sessions
» endpoints issue periodic status reports
» and subscribe to dynamically changing sets of reports
» requires real-time, non-stop data delivery, even as communication pattern changes

Per-session provisioned channels (comtrees)
» unicast data delivery with route learning
» dynamic multicast subscriptions using multicast core

[Figure: Forest overlay – clients and servers attach to overlay routers via access connections; a comtree spans the overlay routers]

19

Unicast Addressing and Forwarding

Every comtree has its own topology and routes

Two-level unicast addresses
» zip code and endpoint number

Nodes with same zip code form subtree
» all nodes in "foreign" zips reached through same branch

Unicast routing
» table entry for each foreign zip code and local endpoint
» if no route table entry, broadcast and set route-request flag
» first router with a route responds

[Figure: example comtree – subtrees for zip codes 1 through 5, with endpoint 3.1 in zip 3]

20

Multicast Routing

Flat addresses
» on per-comtree basis

Hosts subscribe to multicast addresses to receive packets

Each comtree has "core" subtree
» multicast packets go to all core routers
» subscriptions propagate packets outside core

Subscription processing
» propagate requests towards first core router
» stop if intermediate router already subscribed
» can subscribe/unsubscribe to many multicasts at once

Core size can be configured to suit application

[Figure: core subtree of a comtree; subscriptions propagate towards the core]

21

Forest Packet Format

Encapsulated in IP/UDP packet
» IP (addr, port#) identify logical interface to Forest router

Types include
» user data packets
» multicast subscribe/unsubscribe
» route reply

Flags include
» unicast route request

[Figure: Forest packet format – IP/UDP header (addr, port# identify logical interface), followed by Forest header fields: ver, length, type, flags; comtree; src adr; dst adr; header err check; payload (≤1400 bytes); payload err check]

22

Basic Demo Setup

[Figure: basic demo setup – SPP nodes WASH, KANS, and SALT linked by internal subnets 10.1.1.*, 10.1.3.*, and 10.1.7.*; Forest routers 1.100, 2.100, and 3.100 with public addresses on 64.57.23.* (.218, .186, .204); Forest hosts 1.1–1.3, 2.1–2.3, and 3.1–3.3 on PlanetLab nodes; a laptop for control and monitoring]

23

Simple Unicast Demo

Uses one host at each router
Single comtree with root at KANS

Host 1.1 sends to 2.1 and 3.1
» packet to 3.1 flagged with route-request and "flooded"
» router 2.100 responds with route-reply

Host 2.1 sends to 1.1 and 3.1
» already has routes to both, so no routing updates needed

Host 3.1 sends to 1.1 and 2.1
» packet to 1.1 flagged with route-request and "flooded"
» router 2.100 responds with route-reply

[Figure: three routers serving zips 1.*, 2.*, and 3.*]

24

[Screenshots: unicast demo output – a script launches the Forest routers and hosts; router logs show 2.100 receiving the flagged packet from 1.100 to 3.1 and sending a route-reply, routers forwarding to 3.1 and sending route-replies, a new route from 1.100 (at SALT) to 3.0 (at WASH), and a new route from 3.100 (at SALT) to 1.0 (at WASH); UserData charts display data from the stats files on the GPEs]

25

Setting up & Running the Demo

Prepare Forest router configuration files
» config info for Forest links, comtrees, routes, statistics

Prepare/save SPPmon config – specify charts

Reserve SPP resources
» bandwidth on four external interfaces (one for sliced)

Start session
» claim reserved resources
» set up communication endpoints for router logical interfaces and for sliced to report statistics
» start sliced on SPP and SPPmon on laptop

Start Forest routers & hosts, then observe traffic
» done remotely from laptop, using shell script

26

Reservation Script

On spp, execute: reservation 03151000 03152200

#! /bin/bash
cat >res_file.xml <<foobar
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<spp>
  <rsvRecord>
    <rDate start="2010${1}00" end="2010${2}00" />
    <plRSpec>
      <ifParams>
        <ifRec bw="10000" ip="64.57.23.186" />
        ...
      </ifParams>
    </plRSpec>
  </rsvRecord>
</spp>
foobar
scfg --cmd make_resrv --xfile res_file.xml

Notes
» reservation start and end times are mmddhhmm, GMT; e.g. "reservation 03151000 03152200" expands to start="20100315100000" and end="20100315220000"
» the here-document copies the reservation spec to res_file.xml
» one ifRec per interface reserves its bandwidth (4 of these)
» the final line invokes scfg on the reservation file

27

Setup Script

On spp, execute: setup

#! /bin/bash
# claim reserved resources
scfg --cmd claim_resources
# configure interfaces, binding port numbers
# interfaces to other SPPs
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.3.1 --proto 17 --port 30123
# "public" interface
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 64.57.23.186 --proto 17 --port 30123
# interface for traffic monitoring
scfg --cmd setup_sp_endpoint --bw 2000 --ipaddr 64.57.23.182 --proto 6 --port 3551
# run monitoring daemon
cat </dev/null >stats
sliced --ip 64.57.23.182 &

28

Run Demo

On laptop, run script fdemo1

#! /bin/sh
tlim=50      # time limit for hosts and routers (sec)
dir=fdemo1   # directory in which code is executed
...
# public ip addresses used by forest routers
r1ip=64.57.23.218
...
# names and addresses of planetlab nodes used as forest hosts
h11host=planetlab6.flux.utah.edu
h11ip=155.98.35.7
...
ssh ${sppSlice}@${salt} fRscript ${dir} 1.100 ${tlim} &
...
sleep 2
ssh ${plabSlice}@${h11host} fHscript ${dir} ${h11ip} ${r1ip} 1.1 ${rflg} ${minp} ${tlim} &
...

29

Basic Multicast Demo

Uses four hosts per router
» one source, three receivers

Comtree centered at each Forest router
» each source sends to a multicast group on each comtree

Phase 1 – uses comtree 1
» each receiver subscribes to multicast 1, then 2 and 3; then unsubscribes – 3 second delay between changes
» receivers all offset from each other

Phases 2 and 3 are similar
» use comtrees 2 and 3, so different topology

[Figure: three routers serving zips 1.*, 2.*, and 3.*]

30

[Screenshots: multicast demo output – a script launches the Forest routers and hosts; all hosts subscribe in turn to three multicasts on comtree 1; charts show salt-to-kans and salt-to-wash traffic]