
Page 1

SDN Seminar, ETH Zurich | Anwar Hithnawi | June 2016

FlowVisor: A Network Virtualization Layer

SDN Seminar

Page 2

§ What is Network Virtualization? Representation of one or more logical network topologies on the same infrastructure
§ Arbitrary virtual topologies
§ Each of the players can create their own view of the network
§ Should not interfere with other logical topologies
§ Requires resource isolation

2

Network Virtualization

Page 3

§ Applications of Virtual Networking
§ Multi-tenancy
§ Public/private cloud service providers
§ Dynamic scaling of resources
§ Can allocate from a pool of resources
§ VM migration/provisioning
§ Experimentation on production networks
§ Can run (virtual) experimental infrastructure in parallel with production
§ Rapid deployment and development
§ Can deploy services independently from underlying vendor hardware
§ Instantiations of Virtual Networks
§ VLAN, Andromeda (SDN based), Switchless, VINI, Cabo ...

3

Network Virtualization

Page 4

§ Realistically evaluating new network services is hard
§ Services that require changes in switches or routers
§ Many good ideas don't get deployed

4

FlowVisor
Can the Production Network Be the Testbed?

Rob Sherwood⇤, Glen Gibb†, Kok-Kiong Yap†, Guido Appenzeller‡, Martin Casado⇧, Nick McKeown†, Guru Parulkar†

⇤ Deutsche Telekom Inc. R&D Lab, Los Altos, CA; † Stanford University, Palo Alto, CA; ⇧ Nicira Networks, Palo Alto, CA; ‡ Big Switch Networks, Palo Alto, CA

Abstract

A persistent problem in computer network research is validation. When deciding how to evaluate a new feature or bug fix, a researcher or operator must trade off realism (in terms of scale, actual user traffic, real equipment) and cost (larger scale costs more money, real user traffic likely requires downtime, and real equipment requires vendor adoption which can take years). Building a realistic testbed is hard because "real" networking takes place on closed, commercial switches and routers with special purpose hardware. But if we build our testbed from software switches, they run several orders of magnitude slower. Even if we build a realistic network testbed, it is hard to scale, because it is special purpose and is in addition to the regular network. It needs its own location, support and dedicated links. For a testbed to have global reach takes investment beyond the reach of most researchers.

In this paper, we describe a way to build a testbed that is embedded in—and thus grows with—the network. The technique—embodied in our first prototype, FlowVisor—slices the network hardware by placing a layer between the control plane and the data plane. We demonstrate that FlowVisor slices our own production network, with legacy protocols running in their own protected slice, alongside experiments created by researchers. The basic idea is that if unmodified hardware supports some basic primitives (in our prototype, OpenFlow, but others are possible), then a worldwide testbed can ride on the coat-tails of deployments, at no extra expense. Further, we evaluate the performance impact and describe how FlowVisor is deployed at seven other campuses as part of a wider evaluation platform.

1 Introduction

For many years the networking research community has grappled with how best to evaluate new research ideas.

[Figure 1: Today's evaluation process is a continuum from controlled but synthetic to uncontrolled but realistic testing (design, simulate, test, deploy in slice, deploy), with no clear path to vendor adoption.]

Simulation [17, 19] and emulation [25] provide tightly controlled environments to run repeatable experiments, but lack scale and realism; they neither extend all the way to the end-user nor carry real user traffic. Special isolated testbeds [10, 22, 3] allow testing at scale, and can carry real user traffic, but are usually dedicated to a particular type of experiment and are beyond the budget of most researchers.

Without the means to realistically test a new idea there has been relatively little technology transfer from the research lab to real-world networks. Network vendors are understandably reluctant to incorporate new features before they have been thoroughly tested at scale, in realistic conditions with real user traffic. This slows the pace of innovation, and many good ideas never see the light of day.

Peeking over the wall to the distributed systems community, things are much better. PlanetLab has proved invaluable as a way to test new distributed applications at scale (over 1,000 nodes worldwide), realistically (it runs real services, and real users opt in), and offers a straightforward path to real deployment (services developed in a PlanetLab slice are easily ported to dedicated servers).

In the past few years, the networking research community has sought an equivalent platform, funded by pro-


A switch virtualization approach: the hardware forwarding plane can be shared among multiple logical networks

Page 5

5

Virtualization

[Diagram: computer virtualization vs. network virtualization. In computer virtualization, the hypervisor is the abstraction layer above the x86 instruction set: it shares the hardware resources (CPU, memory, PIC, I/O) among multiple guests (Windows, Linux, Mac OS). In network virtualization, FlowVisor is the abstraction layer above OpenFlow: it shares the network resources (bandwidth, CPU, topology, FIB) among multiple slices, each with its own NOX controller, so slices play the role that VMs play for machines.]

Page 6

§ Divide the network into logical slices
§ Each slice controls its own packet forwarding
§ Subset of the traffic (flowspace) defined by packet headers
§ Each slice corresponds to a subset of network resources
§ Defined by the slicing policy
§ Enforce strong isolation between slices
§ Each slice believes it owns the data path

6

Network Slicing

FlowVisor (policy virtualization): who sets the ACLs? Who decides the forwarding paths?

Page 7

§ Transparent slicing layer
§ Strong isolation between slices
§ Extensible slice definition

7

FlowVisor: Design Goals

Page 8

§ Data plane and controllers are unmodified
§ OpenFlow for northbound and southbound interfaces
§ Rewrites and drops rules to adhere to the slice policy
§ Forwards exceptions to the correct slice(s)
§ Decouples virtualization from control and lets them evolve independently

8

Transparent Slicing Layer

[Figure 4: The FlowVisor intercepts OpenFlow messages from guest controllers (1) and, using the user's slicing policy (2), transparently rewrites (3) the message to control only a slice of the network. Messages from switches (4) are forwarded to a guest only if they match its slice policy. The policy engine handles translation, isolation enforcement, and resource allocation, with one slice definition per guest (Alice, Bob, Cathy).]

number of forwarding rules (e.g., TCAM entries). Failure to isolate forwarding entries between slices might allow one slice to prevent another from forwarding packets.

3.2 Flowspace and Opt-In

A slice controls a subset of traffic in the network. The subset is defined by a collection of packet headers that form a well-defined (but not necessarily contiguous) subspace of the entire space of possible packet headers. Abstractly, if packet headers have n bits, then the set of all possible packet headers forms an n-dimensional space. An arriving packet is a single point in that space representing all packets with the same header. Similar to the geometric representation used to describe access control lists for packet classification [14], we use this abstraction to partition the space into regions (flowspace) and map those regions to slices.

The flowspace abstraction helps us manage users who opt in. To opt in to a new experiment or service, users signal to the network administrator that they would like to add a subset of their flows to a slice's flowspace. Users can precisely decide their level of involvement in an experiment. For example, one user might opt in all of their traffic to a single experiment, while another user might just opt in traffic for one application (e.g., port 80 for HTTP), or even just a specific flow (by exactly specifying all of the fields of a header). In our prototype the opt-in process is manual; but in an ideal system, the user would be authenticated and their request checked automatically against a policy.

For the purposes of a testbed we concluded flow-level opt-in is adequate—in fact, it seems quite powerful. Another approach might be to opt in individual packets, which would be more onerous.
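The geometric abstraction above can be made concrete with a small sketch. The following Python fragment is purely illustrative (it is not FlowVisor code): each header field plays the role of one dimension, a flowspace region fixes some fields and leaves the rest as wildcards, and a packet header is a point that either falls inside the region or not. The field names and the opted-in user IP are assumptions.

```python
# Illustrative sketch of the flowspace abstraction (not FlowVisor code).
# Each header field is one dimension; a region fixes some fields and wildcards the rest.

def in_region(header: dict, region: dict) -> bool:
    """True if the packet header (a point in header space) lies inside the region."""
    return all(header.get(field) == value for field, value in region.items())

# A user who opts in only their HTTP traffic contributes this region to Bob's slice
# (the IP address is hypothetical):
bob_http_region = {"ip_proto": 6, "tp_dst": 80, "ip_src": "10.0.0.42"}

pkt = {"in_port": 1, "ip_src": "10.0.0.42", "ip_dst": "10.0.0.7",
       "ip_proto": 6, "tp_src": 51512, "tp_dst": 80}

print(in_region(pkt, bob_http_region))  # True: this packet falls in Bob's flowspace
```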

3.3 Control Message Slicing

By design, FlowVisor is a slicing layer interposed between the data and control planes of each device in the network. In implementation, FlowVisor acts as a transparent proxy between OpenFlow-enabled network devices (acting as dumb data planes) and multiple OpenFlow slice controllers (acting as programmable control logic—Figure 4). All OpenFlow messages between the switch and the controller are sent through FlowVisor. FlowVisor uses the OpenFlow protocol to communicate upwards to the slice controllers and downwards to the OpenFlow switches. Because FlowVisor is transparent, the slice controllers require no modification and believe they are communicating directly with the switches.

We illustrate the FlowVisor's operation by extending the example from §2 (Figure 4). Recall that a researcher, Bob, has created a slice that is an HTTP proxy designed to spread all HTTP traffic over a set of web servers. While the controller will work on any HTTP traffic, Bob's FlowVisor policy slices the network so that he only sees traffic from users that have opted in to his slice. His slice controller doesn't know the network has been sliced, so doesn't realize it only sees a subset of the HTTP traffic. The slice controller thinks it can control, i.e., insert flow entries for, all HTTP traffic from any user. When Bob's controller sends a flow entry to the switches (e.g., to redirect HTTP traffic to a particular server), FlowVisor intercepts it (Figure 4-1), examines Bob's slice policy (Figure 4-2), and rewrites the entry to include only traffic from the allowed source (Figure 4-3). Hence the controller is controlling only the flows it is allowed to, without knowing that the FlowVisor is slicing the network underneath. Similarly, messages that are sourced from the switch (e.g., a new flow event—Figure 4-4) are only forwarded to guest controllers whose flowspace matches the message. That is, it will only be forwarded to Bob if the new flow is HTTP traffic from a user that has opted in to his slice.
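As a rough sketch of this rewriting step (illustrative Python, not the FlowVisor C implementation), a flow entry from Bob's controller that matches all HTTP traffic is narrowed into one entry per opted-in user, so the installed rules never exceed his flowspace; the user IP addresses and field names below are hypothetical.

```python
# Illustrative sketch: restrict a controller's flow entry to its slice's flowspace.
# Bob's slice flowspace: HTTP traffic of users who opted in (hypothetical IPs).
bob_flowspace = [{"tp_dst": 80, "ip_src": ip} for ip in ("10.0.0.42", "10.0.0.43")]

def rewrite_flow_mod(match: dict, flowspace: list) -> list:
    """Intersect the requested match with each allowed region; drop empty intersections."""
    rewritten = []
    for region in flowspace:
        conflict = any(field in match and match[field] != value
                       for field, value in region.items())
        if conflict:
            continue            # requested match lies outside this region
        merged = dict(match)
        merged.update(region)   # narrow the match down to the allowed region
        rewritten.append(merged)
    return rewritten

# Bob asks to redirect *all* HTTP traffic; FlowVisor installs per-user entries instead.
requested = {"tp_dst": 80}
print(rewrite_flow_mod(requested, bob_flowspace))
# [{'tp_dst': 80, 'ip_src': '10.0.0.42'}, {'tp_dst': 80, 'ip_src': '10.0.0.43'}]
```

A request that lies entirely outside the slice's flowspace would produce an empty list, which corresponds to bouncing the message back to the controller as an OpenFlow error.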

Thus, FlowVisor enforces transparency and isolation between slices by inspecting, rewriting, and policing OpenFlow messages as they pass. Depending on the resource allocation policy, message type, destination, and content, the FlowVisor will forward a given message unchanged, translate it to a suitable message and forward it, or "bounce" the message back to its sender in the form of an OpenFlow error message. For a message sent from a slice controller to a switch, FlowVisor ensures that the message acts only on traffic within the resources assigned to the slice. For a message in the opposite direction (switch to controller), the FlowVisor examines the message content to infer the corresponding slice(s) to which the message should be forwarded. Slice controllers only receive messages that are relevant to their


Page 9

§ The policy specifies a resource budget for each slice
§ Flexible, extensible, and modular
§ Pluggable module: one text configuration file per slice (illustrated in the sketch below)
§ Network resources
§ Link bandwidth: each slice gets a fraction of the link bandwidth
§ A slice is mapped to a single QoS class
§ Number of forwarding rules: e.g., finite TCAM entries – constant budget
§ Switch CPU: fraction of the computational resources – constant budget
§ Topology: each slice has its own view of the network's nodes
§ Specified as a list of network nodes and ports
§ FlowSpace: which set of packets does this slice control
§ Packet header

9

Resource Allocation: Slicing Policies
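As a rough illustration of the budget items listed above, the sketch below models one slice's policy as a Python dict; FlowVisor's actual configuration file syntax is not reproduced here, and every field name and value is an assumption made for the example.

```python
# Illustrative sketch of a per-slice resource budget (not FlowVisor's real config syntax).
bob_slice_policy = {
    "controller": "tcp:10.0.1.5:6633",   # where Bob's OpenFlow controller listens (hypothetical)
    "bandwidth_fraction": 0.30,          # fraction of each link, mapped to one QoS class
    "flow_entry_budget": 1000,           # max forwarding rules (e.g., TCAM entries) per switch
    "switch_cpu_fraction": 0.25,         # share of the switch's control-plane CPU
    "topology": {                        # nodes and ports visible to this slice
        "00:00:00:00:00:00:00:01": [1, 2, 4],
        "00:00:00:00:00:00:00:02": [3],
    },
    "flowspace": [                       # which packets the slice controls
        {"tp_dst": 80, "ip_src": "10.0.0.42"},
    ],
}
```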

Page 10

§ Which set of packets does this slice control?
§ Set of flows that make up a slice
§ Each slice has forwarding control of a specific set of packets (specified by packet header fields)
§ Ways to slice the network: switch port (L1), src/dst Ethernet address (L2), IP (L3), TCP/UDP port (L4)
§ Flow spaces are described using ordered ACL-like rules (see the sketch below)
§ fvctl addFlowSpace <dpid> <priority> <match> <actions>

10

FlowSpace
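The ordered, ACL-like lookup can be sketched as follows (illustrative Python; the exact fvctl match syntax is not reproduced, and the rules shown are hypothetical). Rules are evaluated in priority order and the first match decides which slice controls the packet.

```python
# Illustrative sketch: ordered ACL-like flowspace rules, first match wins.
# Rules may slice on switch port (L1), Ethernet addresses (L2), IP (L3), or TCP/UDP ports (L4).
rules = [
    {"match": {"tp_dst": 80},            "slice": "bob"},     # L4: HTTP goes to Bob's slice
    {"match": {"ip_dst": "10.0.5.0/24"}, "slice": "cathy"},   # L3 prefix (hypothetical)
    {"match": {},                        "slice": "alice"},   # catch-all: production slice
]

def matches(header: dict, match: dict) -> bool:
    # Simplified: exact comparison only; real flowspace matching also handles prefixes/wildcards.
    return all(header.get(k) == v for k, v in match.items())

def slice_for(header: dict) -> str:
    for rule in rules:                    # rules are evaluated in priority order
        if matches(header, rule["match"]):
            return rule["slice"]
    raise LookupError("no matching flowspace rule")

print(slice_for({"tp_dst": 80, "ip_dst": "10.0.9.9"}))  # 'bob'
print(slice_for({"tp_dst": 22, "ip_dst": "10.0.9.9"}))  # 'alice'
```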

Page 11

§ Isolation is critical for virtualization
§ Device CPU
§ Ensures no slice monopolizes the device CPU
§ Avoids CPU exhaustion
§ Limits rule insertion
§ Uses periodic drop-rules to throttle exceptions (see the sketch below)
§ Link bandwidth
§ Assigns a minimum data rate to the set of flows that make up a slice
§ OpenFlow does not expose QoS queues
§ Workaround: leverages the VLAN priority bits (Priority Code Point, PCP), 8 priority classes

11

Isolation Techniques
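The "periodic drop-rules to throttle exceptions" mechanism can be sketched roughly as follows (illustrative Python; the budget, the timeout, and the install_rule callback are assumptions, not FlowVisor's actual API). Once a slice exceeds its per-second exception budget, a short-lived drop rule keeps matching packets from reaching the switch's slow control CPU until the rule expires.

```python
# Illustrative sketch: throttle a slice's packet-in exceptions with short-lived drop rules.
import time

EXCEPTIONS_PER_SEC_BUDGET = 50   # hypothetical per-slice budget of new-flow exceptions
DROP_RULE_TIMEOUT_S = 1          # short timeout so the throttle is re-evaluated periodically

class CpuThrottle:
    def __init__(self):
        self.window_start = time.monotonic()
        self.count = 0

    def on_packet_in(self, header: dict, install_rule) -> bool:
        """Return True if the exception may be forwarded to the slice controller."""
        now = time.monotonic()
        if now - self.window_start >= 1.0:   # start a new accounting window
            self.window_start, self.count = now, 0
        self.count += 1
        if self.count > EXCEPTIONS_PER_SEC_BUDGET:
            # Push a short-lived drop rule for this header so the switch stops
            # escalating matching packets to its control CPU for a while.
            install_rule({"match": header, "action": "drop",
                          "hard_timeout": DROP_RULE_TIMEOUT_S})
            return False
        return True
```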

Page 12

§ FlowSpace
§ Rewrites rules to be more specific
§ Forwarding rules
§ Partitions the flow table in each switch by keeping track of which flow entries belong to each guest controller
§ Makes sure that each slice does not exceed its flow-entry limit
§ Maintains counters for each slice that are updated by insertions and expirations of rules (see the sketch below)

12

Isolation Techniques
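A minimal sketch of the per-slice flow-entry accounting described above (illustrative Python; the class and method names, and the way rejections are signalled, are assumptions):

```python
# Illustrative sketch: per-slice accounting of forwarding-table entries.
class FlowTableAccounting:
    def __init__(self, budgets: dict):
        self.budgets = budgets                   # e.g., {"bob": 1000, "alice": 4000}
        self.in_use = {s: 0 for s in budgets}

    def on_flow_insert(self, slice_name: str) -> bool:
        """Called when a slice's controller sends a flow-mod; False means reject it."""
        if self.in_use[slice_name] >= self.budgets[slice_name]:
            return False                         # bounce an OpenFlow error to this slice only
        self.in_use[slice_name] += 1
        return True

    def on_flow_expire(self, slice_name: str):
        """Called when the switch reports a flow removal owned by this slice."""
        self.in_use[slice_name] = max(0, self.in_use[slice_name] - 1)
```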

Page 13

FlowVisor Implemented on OpenFlow

[Diagram: OpenFlow separates a switch/router into a stub control plane (OpenFlow firmware) and a fast data path, controlled by an OpenFlow controller on a server via the OpenFlow protocol. With FlowVisor, the switches still speak plain OpenFlow, but FlowVisor sits between them and several OpenFlow controllers, proxying OpenFlow messages in both directions.]

13

Page 14

FlowVisor Message Handling

[Diagram: FlowVisor sits between the OpenFlow data path and the slice controllers (Alice, Bob, Cathy), speaking OpenFlow on both sides. A rule sent down by a controller passes the policy check "Is this rule allowed?" before being installed; a packet that raises an exception in the data path passes the policy check "Who controls this packet?" before being forwarded to the right controller. Packets matching installed rules are forwarded at full line rate in the data path.]

14

Page 15

§ FlowVisor can recursively slice an already sliced network, creating hierarchies of FlowVisors

15

Recursive Slicing

[Figure 5: FlowVisor can trivially recursively slice an already sliced network, creating hierarchies of FlowVisors. The figure shows switches sliced by FlowVisors stacked in a hierarchy, with Alice's, Bob's, Cathy's, and Eric's controllers attached over ordinary OpenFlow connections.]

network slice. Thus, from a slice controller's perspective, FlowVisor appears as a switch (or a network of switches); from a switch's perspective, FlowVisor appears as a controller.

FlowVisor does not require a 1-to-1 mapping between FlowVisor instances and physical switches. One FlowVisor instance can slice multiple physical switches, and even re-slice an already sliced network (Figure 5).

3.4 Slice Definition Policy

The slice policy defines the network resources, flowspace, and OpenFlow slice controller allocated to each slice. Each policy is described by a text configuration file—one file per slice. In terms of resources, the policy defines the fraction of total link bandwidth available to this slice (§4.3) and the budget for switch CPU and forwarding table entries. Network topology is specified as a list of network nodes and ports.

The flowspace for each slice is defined by an ordered list of tuples similar to firewall rules. Each rule description has an associated action, e.g., allow, read-only, or deny, and is parsed in the specified order, acting on the first matching rule. The rules define the flowspace a slice controls. Read-only rules allow slices to receive OpenFlow control messages and query switch statistics, but not to write entries into the forwarding table. Rules are allowed to overlap, as described in the example below.

Let's take a look at an example set of rules. Alice, the network administrator, wants to allow Bob to conduct an HTTP load-balancing experiment. Bob has convinced some of his colleagues to opt in to his experiment. Alice wants to maintain control of all traffic that is not part of Bob's experiment. She wants to passively monitor all network performance, to keep an eye on Bob and the production network.

Here is a set of rules Alice could install in the FlowVisor:

Bob's Experimental Network includes all HTTP traffic to/from users who opted into his experiment. Thus, his network is described by one rule per user:

Allow: tcp_port:80 and ip=user_ip.
OpenFlow messages from the switch matching any of these rules are forwarded to Bob's controller. Any flow entries that Bob tries to insert are modified to meet these rules.

Alice's Production Network is the complement of Bob's network. For each user in Bob's experiment, the production network has a negative rule of the form:
Deny: tcp_port:80 and ip=user_ip.
The production network would have a final rule that matches all flows: Allow: all.
Thus, only OpenFlow messages that do not go to Bob's network are sent to the production network controller. The production controller is allowed to insert forwarding entries so long as they do not match Bob's traffic.

Alice's Monitoring Network is allowed to see all traffic in all slices. It has one rule: Read-only: all.

This rule-based policy, though simple, suffices for the experiments and deployment described in this paper. We expect that future FlowVisor deployments will have more specialized policy needs, and that researchers will create new resource allocation policies.
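The overlapping rule set above can be sketched in a few lines of illustrative Python (FlowVisor stores these as text rules; the opted-in user IPs and the dictionary encoding are assumptions made for the example):

```python
# Illustrative sketch of Alice's rule set: first matching rule per slice decides the action.
OPTED_IN_USERS = ["10.0.0.42", "10.0.0.43"]   # hypothetical users in Bob's experiment

slice_rules = {
    "bob_experiment": [
        *[{"action": "allow", "match": {"tp_dst": 80, "ip_src": ip}} for ip in OPTED_IN_USERS],
    ],
    "alice_production": [
        *[{"action": "deny", "match": {"tp_dst": 80, "ip_src": ip}} for ip in OPTED_IN_USERS],
        {"action": "allow", "match": {}},      # Allow: all (everything not in Bob's slice)
    ],
    "alice_monitoring": [
        {"action": "read-only", "match": {}},  # Read-only: all (see everything, write nothing)
    ],
}

def action_for(slice_name: str, header: dict) -> str:
    for rule in slice_rules[slice_name]:       # ordered; first match wins
        if all(header.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "deny"

http_from_opted_in = {"tp_dst": 80, "ip_src": "10.0.0.42"}
print(action_for("bob_experiment", http_from_opted_in))    # allow
print(action_for("alice_production", http_from_opted_in))  # deny
print(action_for("alice_monitoring", http_from_opted_in))  # read-only
```

The ordering matters: Alice's production slice denies Bob's opted-in HTTP traffic before its catch-all allow rule, while the monitoring slice sees everything but can only read.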

4 FlowVisor Implementation

We implemented FlowVisor in approximately 8000 lines of C and the code is publicly available for download from www.openflow.org. The notable parts of the implementation are the transparency and isolation mechanisms. Critical to its design, FlowVisor acts as a transparent slicing layer and enforces isolation between slices. In this section, we describe how FlowVisor rewrites control messages—both down to the forwarding plane and up to the control plane—to ensure both transparency and strong isolation. Because isolation mechanisms vary by resource, we describe each resource in turn: bandwidth, switch CPU, and forwarding table entries. In our deployment, we found that the switch CPU was the most constrained resource, so we devote particular care to describing its slicing mechanisms.

4.1 Messages to Control Plane

FlowVisor carefully rewrites messages from the OpenFlow switch to the slice controller to ensure transparency. First, FlowVisor only sends control plane messages to a slice controller if the source switch is actually in the slice's topology. Second, FlowVisor rewrites OpenFlow feature negotiation messages so that the slice controller only sees the physical switch ports that appear in the slice. Third, OpenFlow port up/port down messages are similarly pruned and only forwarded to the affected slices. Using these message rewriting techniques,


Page 16

§ Performance Overhead: New Flow Latency

16

Evaluation

[Figure 7: CDF of slicing overhead for OpenFlow new flow messages and port status requests, with and without FlowVisor. Average overhead: 16.16 ms for new flow latency and 0.483 ms for port status latency.]

message per slice to remove statistics for ports that do not appear in a sliced topology.

We wrote a special-purpose controller that sent approximately 200 port status requests per second and measured the response times. The rate was chosen to approximate the maximum request rate supported by the hardware. The controller, switch, and FlowVisor were all on the same local area network, but the controller and FlowVisor were hosted on separate PCs. Obviously, the overhead can be increased by moving the FlowVisor arbitrarily far away from the controller, but we design this experiment to quantify the FlowVisor's processing overhead. Our results show that adding the FlowVisor causes an average overhead for port status responses of 0.48 milliseconds (Figure 7(b)). We believe that port status response time being faster than new flow processing time is not inherent, but simply a matter of better optimization for port status request handling.

5.3 Isolation

5.3.1 Bandwidth

To validate the FlowVisor's bandwidth isolation properties, we run an experiment where two slices compete for bandwidth on a shared link. We consider the worst case for bandwidth isolation: the first slice sends TCP-friendly traffic and the other slice sends TCP-unfriendly constant-bit-rate (CBR) traffic at full link speed (1 Gbps). We believe these traffic patterns are representative of a scenario where a production slice (TCP) shares a link with, for example, a slice running a DDoS experiment (CBR).

This experiment uses 3 machines—two sources and a common sink—all connected via the same HP ProCurve 5400 switch, i.e., the switch found in our wiring closet. The traffic is generated by iperf in TCP mode for the TCP traffic and UDP mode at 1 Gbps for the CBR traffic. We repeat the experiment twice: with and without the FlowVisor's bandwidth isolation features enabled (Figure 8(a)). With the bandwidth isolation disabled ("without Slicing"), the CBR traffic consumes nearly all the bandwidth and the TCP traffic averages 1.2% of the link bandwidth. With the traffic isolation features enabled ("with 30/70% reservation"), the FlowVisor maps the TCP slice to a QoS class that guarantees at least 70% of the link bandwidth and maps the CBR slice to a class that guarantees at least 30%. Note that these are minimum bandwidth guarantees, not maximums. With the bandwidth isolation features enabled, the TCP slice achieves an average of 64.2% of the total bandwidth and the CBR an average of 28.5%. Note the event at 20 seconds where the CBR with QoS jumps and the TCP with QoS experiences a corresponding dip. We believe this to be the result of a TCP congestion event that allowed the CBR traffic to temporarily take advantage of additional available bandwidth, exactly as the minimum bandwidth queue is designed.

5.3.2 Switch CPU

To quantify our ability to isolate the switch CPU resource, we show two experiments that monitor the CPU usage over time of a switch with and without isolation enabled. In the first experiment (Figure 8(b)), the OpenFlow controller maliciously sends port stats request messages (as above) at increasing speeds (2, 4, 8, ..., 1024 requests per second). In our second experiment (Figure 8(c)), the switch generates new flow messages faster than its CPU can handle and a faulty controller does not add a new rule to match them. In both experiments, we show the switch's CPU utilization averaged over one second, and the FlowVisor's isolation features reduce the switch utilization from 100% to a configurable amount. In the first experiment, we note that the switch could handle less than 256 port status requests without appreciable CPU load, but immediately goes to 100% load when the request rate hits 256 requests per second. In the second experiment, the bursts of CPU activity in Figure 8(c) are a direct result of using null forwarding rules (§4.4) to rate limit incoming new flow messages. We expect that future versions of OpenFlow will better expose the hardware CPU limiting features already in switches today.


Page 17

§ Performance Overhead: Port Status Latency

17

Evaluation

Page 18

§ Isolation: Malicious Slice

18

Evaluation

Page 19

§ SDN vs. Virtual Networks
§ SDN separates the data plane and the control plane
§ SDN does not inherently abstract the details of the physical network
§ Virtual networks separate logical network topologies
§ SDN can be a useful tool to implement virtual networks

19

Network Virtualization

Page 20

§ Network slicing can help perform more realistic evaluations
§ FlowVisor is an OpenFlow-based virtualization tool
§ It allows experiments to run concurrently but safely on the production network
§ OpenFlow for northbound and southbound interfaces
§ Decouples virtualization from control and lets them evolve independently

20

Conclusion

Page 21

§ Weak Aspects and Discussion Points
§ Performance overhead
§ Isolation techniques
§ Rewrite rules
§ FlowVisor scalability

21

SDN Seminar Reviews

Page 22

22

Resources

[1] Rob Sherwood, Glen Gibb, Kok-Kiong Yap, Guido Appenzeller, Martin Casado, Nick McKeown, and Guru Parulkar. FlowVisor: A Network Virtualization Layer. Technical Report, 2009.

[2] Rob Sherwood, Glen Gibb, Kok-Kiong Yap, Guido Appenzeller, Martin Casado, Nick McKeown, and Guru Parulkar. Can the Production Network Be the Testbed? OSDI, 2010.