
Slide 1

Tesseract*: A 4D Network Control Plane

Hong Yan, David A. Maltz, T. S. Eugene Ng, Hemant Gogineni, Hui Zhang, Zheng Cai

*Tesseract is a 4-dimensional cube

Slide 2

Ideally…
- Manage the network in a simple way
- Directly and explicitly apply policies to the network

[Figure: an operator with an accurate network view applies policies ("Split load between S5 and S6", "Shut down S6 for maintenance on May 1") and pushes forwarding state directly to switches S1-S6, which connect to the Internet]

Slide 3

Indirect Control - Fact #1: Infer network view by reverse engineering
- Probe routers to fetch configuration
- Monitor control traffic (e.g., LSAs, BGP updates)

[Figure: the operator probes switches S1-S6 and guesses the network view; question marks mark the resulting uncertainty]

Slide 4

Indirect Control - Fact #2: Policies buried in box-centric configuration
- Change OSPF link weights on S2, S3, S4…
- Modify routing policies on S2, S3, S4…
- Many knobs to tune; trial and error

[Figure: the operator issues configuration commands to individual switches while still probing routers and guessing the network view]

Slide 5

Complex configuration is error-prone and causes network outages

interface Ethernet0
 ip address 6.2.5.14 255.255.255.128
interface Serial1/0.5 point-to-point
 ip address 6.2.2.85 255.255.255.252
 ip access-group 143 in
 frame-relay interface-dlci 28

router ospf 64
 redistribute connected subnets
 redistribute bgp 64780 metric 1 subnets
 network 66.251.75.128 0.0.0.127 area 0
router bgp 64780
 redistribute ospf 64 match route-map 8aTzlvBrbaW
 neighbor 66.253.160.68 remote-as 12762
 neighbor 66.253.160.68 distribute-list 4 in

access-list 143 deny 1.1.0.0/16
access-list 143 permit any
route-map 8aTzlvBrbaW deny 10
 match ip address 4
route-map 8aTzlvBrbaW permit 20
 match ip address 7
ip route 10.2.2.1/16 10.2.1.7

Slide 6

Indirect Control - Fact #3: Indirect control creates subtle dependencies

Example:
- Policy #1: use C as the egress point for traffic from AS X
- Policy #2: enable ECMP for the A-C flow

[Figure: routers A, B, C, D with link costs, attached to AS X and AS Y; enabling ECMP (Policy #2) unexpectedly changes the egress point selected under Policy #1 (desired vs. unexpected outcome shown)]

Slide 7

Direct Control: A New World

- Express goals explicitly
  - Security policies, QoS, egress point selection
  - Do not bury goals in box-specific configuration
  - Make policy dependencies explicit
- Design the network to provide a timely and accurate view
  - Topology, traffic, resource limitations
  - Give the decision maker the inputs it needs
- Decision maker computes and pushes desired network state
  - FIB entries, packet filters, queuing parameters
  - Simplify router functionality
  - Add new functions without modifying/creating protocols or upgrading routers

Slide 8

How can we get there?

The 4D decomposition:
- Decision: a Decision Computation Service generates table entries
- Dissemination: a Dissemination Service installs the table entries in the routers
- Discovery: supplies the network view
- Data plane: modeled as a set of tables (Routing Table, Access Control Table, NAT Table, Tunnel Table)

[Figure: decision elements (D) sit on top of the Dissemination Service, which connects to routers whose data planes are sets of tables]
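For concreteness, here is a minimal C++ sketch of the "set of tables" data-plane model; every type and field name below (FibEntry, DataPlaneState, and so on) is an illustrative assumption, not a Tesseract type.

#include <string>
#include <vector>

// One switch's data plane as a set of tables written by the decision plane.
struct FibEntry   { std::string prefix;  std::string nextHop;  };
struct FilterRule { std::string match;   bool permit;          };
struct NatRule    { std::string inside;  std::string outside;  };
struct TunnelEnd  { std::string localIf; std::string remoteIp; };

struct DataPlaneState {
    std::vector<FibEntry>   routingTable;       // Routing Table
    std::vector<FilterRule> accessControlTable; // Access Control Table
    std::vector<NatRule>    natTable;           // NAT Table
    std::vector<TunnelEnd>  tunnelTable;        // Tunnel Table
};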

Slide 9

Tesseract: A 4D System

[Figure: a Decision Element connected through the dissemination plane to routers R1 and R2, which discover each other by exchanging "Hello from R1" / "Hello from R2" messages]

Slide 10

Bootstrapping Dissemination

[Figure: DE1's beacon floods across routers R1-R5, accumulating the path it traverses: "Beac1: DE1" becomes "Beac1: DE1 R3", then "Beac1: DE1 R3 R2", "Beac1: DE1 R3 R2 R4", and "Beac1: DE1 R3 R2 R4 R5"; a second decision element, DE2, is also attached]

Slide 11

Bootstrapping Dissemination
- DE beacons establish the control topology
- LSAs flow back from routers over the control topology
- After a link/switch crash, the next beacon heals the topology

[Figure: DE1, DE2 and routers R1-R5 as on the previous slide]
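A minimal sketch of this path-accumulating beacon flood, assuming a simple sequence-number duplicate filter; the message layout and function names are assumptions, not Tesseract's wire protocol:

#include <map>
#include <string>
#include <vector>

// A DE beacon carrying the path it has traversed so far.
struct Beacon {
    int seq;                        // sequence number set by the DE
    std::vector<std::string> path;  // accumulated path, e.g. {"DE1", "R3"}
};

std::map<std::string, std::vector<std::string>> routeToDE; // per-router route
std::map<std::string, int> lastSeq;                        // duplicate filter

// Called when 'router' hears a beacon; returns true if the router should
// re-flood the extended beacon to its other neighbors.
bool onBeacon(const std::string& router, Beacon& b) {
    auto it = lastSeq.find(router);
    if (it != lastSeq.end() && it->second >= b.seq)
        return false;              // stale or duplicate beacon: drop it
    lastSeq[router] = b.seq;
    b.path.push_back(router);      // append self before re-flooding
    routeToDE[router] = b.path;    // reversing this path reaches the DE
    return true;
}

Reversing the accumulated path gives each router a route back to the DE, which is how LSAs and state updates flow over the control topology.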

Slide 12

Making Decisions

- The DE's input includes TE goals and a reachability matrix
- The DE creates tables for each router (FIB, filters)
- Tables are source-routed to their destinations via the dissemination plane

R2's routing table:
  10.0.1/24: R3
  10.0.2/24: R5
  10.0.3/24: eth0
  0/0: R5
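One way to picture the source-routed table push, as a hedged C++ sketch; InstallMsg and its fields are assumed names, and the example reuses R2's table from above with a hypothetical DE-to-R2 control path:

#include <string>
#include <vector>

struct RoutingEntry { std::string prefix; std::string nextHop; };

// A computed table bundled with the dissemination-plane source route
// that leads to its target router.
struct InstallMsg {
    std::vector<std::string> sourceRoute;  // hops from the DE to the target
    std::string targetRouter;
    std::vector<RoutingEntry> fib;
};

// Example: push R2's routing table along an assumed DE -> R3 -> R2 path.
InstallMsg msg{{"R3", "R2"}, "R2",
               {{"10.0.1/24", "R3"}, {"10.0.2/24", "R5"},
                {"10.0.3/24", "eth0"}, {"0/0", "R5"}}};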

Slide 13

Decision/Dissemination Interface

Dissemination plane calls:
- Flood(pkt)
- Send(pkt, dst)
- RegisterUpCall(*fun)

Further interface calls:
- LinkFailure(link)
- PreferredRoute(dst, route)

[Figure: DE1 and router R1 exchanging LSAs across the decision/dissemination boundary]
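The same interface rendered as C++ virtuals; the exact signatures, and which side implements which call, are assumptions based only on the names listed on the slide:

#include <functional>
#include <string>
#include <vector>

using Packet = std::vector<unsigned char>;

// Dissemination-plane service as seen by the decision plane (sketch).
class DisseminationService {
public:
    virtual ~DisseminationService() = default;
    virtual void Flood(const Packet& pkt) = 0;                  // flood to all
    virtual void Send(const Packet& pkt, const std::string& dst) = 0;
    virtual void RegisterUpCall(std::function<void(const Packet&)> fun) = 0;
    virtual void LinkFailure(const std::string& link) = 0;      // failure report
    virtual void PreferredRoute(const std::string& dst,
                                const std::vector<std::string>& route) = 0;
};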

Slide 14

Reusable Decision Algorithms

Slide 15

Code Snippet: Floyd-Warshall

// All-pairs shortest paths over the discovered topology; -1 means "no path".
for (unsigned k = 0; k < num; k++)
  for (unsigned i = 0; i < num; i++)
    for (unsigned j = 0; j < num; j++) {
      if (CostMatrix[i][k] != -1 && CostMatrix[k][j] != -1)
        if (CostMatrix[i][j] == -1 ||
            CostMatrix[i][j] > CostMatrix[i][k] + CostMatrix[k][j]) {
          // Relax: routing i -> j via k is cheaper; update cost and hops.
          CostMatrix[i][j] = CostMatrix[i][k] + CostMatrix[k][j];
          FirstHopMatrix[i][j] = FirstHopMatrix[i][k];
          LastHopMatrix[i][j] = LastHopMatrix[k][j];
        }
    }
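A usage sketch, not from the talk: assuming FirstHopMatrix[i][j] holds the first hop from i toward j (which the relaxation above maintains), the full hop sequence can be recovered like this:

#include <vector>  // CostMatrix/FirstHopMatrix are the globals used above

std::vector<unsigned> extractPath(unsigned src, unsigned dst) {
    std::vector<unsigned> path;
    if (src != dst && CostMatrix[src][dst] == -1)
        return path;                         // unreachable: empty path
    for (unsigned cur = src; cur != dst; cur = FirstHopMatrix[cur][dst])
        path.push_back(cur);                 // follow first hops toward dst
    path.push_back(dst);
    return path;
}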

Slide 16

DE Robustness

- All DEs send beacons; routers send state updates to all DEs on the network
- DEs can see each other's beacons
- The DE with the lowest ID is the only one that writes configs to routers
- If the active DE crashes, its beacons stop, and the next highest-ranking DE takes over

[Figure: DE1 and DE2 attached to router R1; DE1 beacons "DE1 is alive, DE1 is boss"; when DE2 has not heard DE1 for too long, it decides "I am becoming boss"]
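A minimal sketch of this lowest-ID election; the timeout value and all names here are assumptions:

#include <chrono>
#include <map>

using Clock = std::chrono::steady_clock;

std::map<int, Clock::time_point> lastBeacon;        // DE id -> last beacon heard
const auto kDeadInterval = std::chrono::seconds(3); // assumed liveness timeout

void onDeBeacon(int deId) { lastBeacon[deId] = Clock::now(); }

// A DE writes configs only while no lower-ID DE is still beaconing.
bool iAmBoss(int myId) {
    const auto now = Clock::now();
    for (const auto& [id, seen] : lastBeacon)
        if (id < myId && now - seen < kDeadInterval)
            return false;    // a lower-ID (higher-ranking) DE is alive
    return true;
}

Because every DE hears every other DE's beacons, all standbys converge on the same boss without extra coordination.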

Slide 17

Evaluation

Emulab topologies:
- Rocketfuel backbone network (114 nodes, 190 links) with a maximum round-trip delay of 250 ms
- Production enterprise network (40 nodes, 60 links)

Slide 18

Routing Convergence Experiments

- On both backbone and enterprise topologies
- Failure scenarios:
  - Single link failures
  - Single node failures
  - Regional failures for the backbone (failing all nodes in one city)
  - Link flapping
- Tesseract versus aggressively tuned OSPF (Fast OSPF)

Slide 19

Enterprise Network, Switch Failures

[Figure: convergence comparison of Tesseract and Fast OSPF]

Slide 20

Backbone Network, Switch Failures

[Figure: convergence comparison of Tesseract and Fast OSPF]

Slide 21

Backbone Network, Regional Failures

[Figure: convergence comparison of Tesseract and Fast OSPF]

Slide 22

Microbenchmark Experiments

- A subset of Rocketfuel topologies with varying sizes
- Independently fail each link
- Measure: DE computation time and control traffic volume

Slide 23

DE Computation Time

Slide 24

Control Traffic Volume

Slide 25

Tesseract Applications

- Joint control of packet routing and filtering
  - Problem: dynamic routing but static packet-filter placement
  - Solution: in addition to computing routes, the DE computes filter placement based on a reachability matrix (see the sketch after this list)
- Link-cost-driven Ethernet switching
  - Problem: spanning-tree switching makes inefficient use of available links
  - Solution: the DE computes both a spanning tree and shortest paths
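As a hedged illustration of the first application (one plausible placement strategy, not necessarily the paper's algorithm): block each forbidden source/destination pair at the ingress switch of its currently computed route, reusing the hypothetical extractPath helper from the Floyd-Warshall slide.

#include <vector>

// A filter blocking traffic from 'src' to 'dst', installed at switch 'atNode'.
struct Filter { unsigned atNode; unsigned src; unsigned dst; };

// Recompute filter placement whenever routes change, so filters follow the
// paths instead of being statically placed.
std::vector<Filter> placeFilters(const std::vector<std::vector<bool>>& allowed,
                                 unsigned num) {
    std::vector<Filter> filters;
    for (unsigned s = 0; s < num; s++)
        for (unsigned d = 0; d < num; d++)
            if (s != d && !allowed[s][d]) {
                std::vector<unsigned> path = extractPath(s, d);
                if (!path.empty())
                    filters.push_back({path.front(), s, d}); // filter at ingress
            }
    return filters;
}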

Slide 26

Link Cost Driven Ethernet Switching: Multi-Tree

Slide 27

Revisiting: Randomized Equal-Cost Shortest Path Selection

// Same Floyd-Warshall loop with a randomized tie-break: when an equally
// cheap path via k is found, a coin flip decides whether to adopt it,
// spreading traffic across equal-cost shortest paths.
for (unsigned k = 0; k < num; k++)
  for (unsigned i = 0; i < num; i++)
    for (unsigned j = 0; j < num; j++) {
      if (CostMatrix[i][k] != -1 && CostMatrix[k][j] != -1)
        if (CostMatrix[i][j] == -1 ||
            CostMatrix[i][j] > CostMatrix[i][k] + CostMatrix[k][j] ||
            (CostMatrix[i][j] == CostMatrix[i][k] + CostMatrix[k][j]
             && rand() > RAND_MAX / 2)) {
          CostMatrix[i][j] = CostMatrix[i][k] + CostMatrix[k][j];
          FirstHopMatrix[i][j] = FirstHopMatrix[i][k];
          LastHopMatrix[i][j] = LastHopMatrix[k][j];
        }
    }

Slide 28

Link Cost Driven Ethernet Switching: Multi-Tree

Slide 29

Throughput Comparison

Slide 30

Related Work

- Separation of forwarding elements and control elements: IETF FORCES, GSMP, GMPLS; SoftRouter [Lakshman]
- Centralization of decision-making logic: RCP [Feamster], SANE [Casado]
- Alternative frameworks for network control: Tempest [Rooney], FIRE [Partridge]

Slide 31

Summary

- Direct control is desirable
  - Makes sophisticated control policies easier to understand and deploy
  - Simplifies router software
  - Enables easy innovation
- Direct control is implementable
  - Tesseract as proof of concept
  - Sufficiently scalable
  - Fast convergence

Slide 32

Future Work

- Formulate models that establish the bounds of Tesseract
  - Scale, latency, stability, failure models, objectives
- Structuring decision logic
  - Arbitrate among multiple, potentially competing objectives
  - Unify control when some logic takes longer than others
- Protocol improvements
  - Better dissemination planes
  - Tesseract router
- Deployment in today's networks
  - Data center, enterprise, campus, backbone

Slide 33

Reality: Indirect control with a primitive configuration interface

[Figure: the operator reverse-engineers the routing logic from the TE/security policy, converts it to control-plane configuration, and issues config commands; on every router, a Configuration File drives EIGRP/OSPF/BGP plus Access Control, NAT, and Tunnel tables, which together produce the Forwarding Table]

Slide 34

Link Cost Driven Ethernet Switching: Mesh

Slide 35

Effects of Switch Failure on Aggregated Throughputs