Hash, Don’t Cache: Fast Packet Forwarding for Enterprise Edge Routers
Minlan Yu, Princeton University
minlanyu@cs.princeton.edu
Joint work with Jennifer Rexford
SIGCOMM WREN’09
Enterprise Edge Router
• Enterprise edge routers
– Connect upstream providers and internal routers
• A few outgoing links
– A small data structure for each next hop
[Figure: an enterprise network connected to upstream Provider 1 and Provider 2]
Challenges of Packet Forwarding
• Full-route forwarding table (FIB)
– For load balancing, fault tolerance, etc.
– More than 250K entries, and growing
• Increasing link speed
– Over 10 Gbps
• Requires large, expensive memory
– Expensive, complicated high-end routers
• A more cost-efficient, less power-hungry solution?
– Perform fast packet forwarding in a small SRAM
Using a Small SRAM
• Route caching is not a viable solution
– Store only the most frequently used entries in the cache
– Bad performance during cache misses
• Low throughput and high packet loss
– Bad performance under worst-case workloads
• Malicious traffic with a wide range of destinations
• Route changes, link failures
• Our solution should be workload-independent
– Fit the entire FIB in the small SRAM
Bloom Filter
• Bloom filters in fast memory (SRAM)
– A compact data structure for a set of elements
– Calculate s hash functions to store element x (sketch below)
– Easy to check membership
– Reduce memory at the expense of false positives
[Figure: element x hashed by h1(x), h2(x), ..., hs(x) to set bits in the array V0 ... Vm-1]
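As a rough illustration, a minimal Bloom filter sketch in Python; the class name, the salted-SHA-256 hashing, and the parameter names are illustrative choices, not from the talk:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array and s hash functions."""

    def __init__(self, m, s):
        self.m = m            # number of bits in the array
        self.s = s            # number of hash functions
        self.bits = [0] * m

    def _hashes(self, x):
        # Derive s hash positions by salting one cryptographic hash.
        for i in range(self.s):
            digest = hashlib.sha256(f"{i}:{x}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, x):
        for pos in self._hashes(x):
            self.bits[pos] = 1

    def __contains__(self, x):
        # May answer True for an element never added (a false positive),
        # but never answers False for an element that was added.
        return all(self.bits[pos] for pos in self._hashes(x))
```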
Bloom Filter Forwarding
• One Bloom filter (BF) per next hop
– Store all addresses forwarded to that next hop (lookup sketch below)
• Consider flat addresses in this talk
– See the paper for extensions to longest-prefix match
[Figure: the packet destination is queried against T Bloom filters, one per next hop; a hit identifies the next hop]
T is small for enterprise edge routers
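A sketch of this lookup structure, reusing the hypothetical `BloomFilter` class above (the talk has all filters share the same hash functions; this sketch simply recomputes them per filter):

```python
def build_filters(fib, m, s):
    """fib maps destination address -> next-hop index in 0..T-1."""
    T = max(fib.values()) + 1
    filters = [BloomFilter(m, s) for _ in range(T)]
    for dest, nexthop in fib.items():
        filters[nexthop].add(dest)
    return filters

def lookup(filters, dest):
    """Return the indices of all next hops whose BF matches dest.
    Exactly one match is correct; any extras are false positives."""
    return [i for i, bf in enumerate(filters) if dest in bf]
```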
Contributions
• Make efficient use of limited fast memory
– Formulate and solve an optimization problem to minimize the false-positive rate
• Handle false positives
– Leverage properties of enterprise edge routers
• Adapt Bloom filters to routing changes
– Leverage a counting Bloom filter in slow memory
– Dynamically adjust Bloom filter sizes
Outline
• Optimize memory usage
• Handle false positives
• Handle routing dynamics
Memory Usage Optimization
• Consider a fixed forwarding table
• Goal: minimize the overall false-positive rate
– The probability that one or more BFs have a false positive
• Input:
– Fast memory size M
– Number of destinations per next hop
– The maximum number of hash functions
• Output: the size of each Bloom filter
– Larger BFs for next hops with more destinations
Constraints and Solution
• Constraints
– Memory constraint
• Sum of all BF sizes ≤ fast memory size M
– Bound on the number of hash functions
• To bound CPU calculation time
• Bloom filters share the same hash functions
• Proved to be a convex optimization problem (see the formulas below)
– An optimal solution exists
– Solved with IPOPT (Interior Point OPTimizer)
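For concreteness, the standard Bloom-filter false-positive approximation gives one plausible form of this optimization; the notation here (m_i, n_i, k, T) is mine and may differ from the paper's exact formulation:

```latex
% False-positive probability of BF i with m_i bits, n_i addresses,
% and k shared hash functions (standard approximation):
f_i \approx \left(1 - e^{-k n_i / m_i}\right)^{k}

% Overall false-positive rate across T filters; for small f_i it is
% roughly the sum of the individual rates:
F \;=\; 1 - \prod_{i=1}^{T}\bigl(1 - f_i\bigr) \;\approx\; \sum_{i=1}^{T} f_i

% Choose the BF sizes to minimize F subject to the memory budget:
\min_{m_1,\dots,m_T} \; \sum_{i=1}^{T} \left(1 - e^{-k n_i / m_i}\right)^{k}
\quad\text{s.t.}\quad \sum_{i=1}^{T} m_i \le M
```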
Evaluation of False Positives
• Setup
– A FIB with 200K entries and 10 next hops
– 8 hash functions
• Takes at most 50 msec to solve the optimization
Outline
• Optimize memory usage
• Handle false positives
• Handle routing dynamics
False Positive Detection
• Multiple matches in the Bloom filters
– One of the matches is correct
– The others are caused by false positives
[Figure: the packet destination is queried against the Bloom filters and gets multiple hits]
Handle False Positives on the Fast Path
• Leverage multi-homed enterprise edge routers
• Send to a random matching next hop (sketch below)
– Packets can still reach the destination, even if occasionally through a less-preferred outgoing link
– No extra traffic, but may cause packet loss
• Send duplicate packets
– Send a copy of the packet to all matching next hops
– Guarantees reachability, but introduces extra traffic
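A sketch of the random-selection strategy, reusing the hypothetical `lookup` above; duplicating instead would forward one copy per matching index:

```python
import random

def forward(filters, dest):
    """Choose a next hop for dest; on multiple BF hits (false
    positives), pick one matching next hop at random."""
    matches = lookup(filters, dest)
    if not matches:
        return None  # cannot happen if dest is in the FIB
    return random.choice(matches)
```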
Prevent Future False Positives
• For a packet that experiences a false positive
– Perform a conventional lookup in the background
– Cache the result (sketch below)
• Subsequent packets to that destination
– No longer experience false positives
• Compared to a conventional route cache
– Much smaller (holds only false-positive destinations)
– Not easily invalidated by an adversary
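A sketch of this small cache, assuming the full FIB sits in slow memory; the talk resolves the lookup in the background, while this sketch does it inline for brevity:

```python
fp_cache = {}  # destination -> verified next hop (false-positive fixes only)

def forward_cached(filters, fib, dest):
    if dest in fp_cache:
        return fp_cache[dest]
    matches = lookup(filters, dest)
    if len(matches) == 1:
        return matches[0]
    # Multiple hits: resolve via the full FIB in slow memory and
    # cache the answer so later packets skip the false positive.
    fp_cache[dest] = fib[dest]
    return fp_cache[dest]
```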
Outline
• Optimize memory usage
• Handle false positives
• Handle routing dynamics
Problem of Bloom Filters
• Routing changes
– Add/delete entries in the BFs
• Problem: Bloom filters (BF) do not allow deleting an element
• Counting Bloom filters (CBF), sketched below
– Use a counter instead of a bit in the array
– CBFs can handle adding and deleting elements
– But they require more memory than BFs
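A minimal counting-Bloom-filter sketch, extending the hypothetical `BloomFilter` above (counter width and overflow handling are glossed over):

```python
class CountingBloomFilter(BloomFilter):
    """Counters instead of bits, so elements can also be removed."""

    def add(self, x):
        for pos in self._hashes(x):
            self.bits[pos] += 1

    def remove(self, x):
        for pos in self._hashes(x):
            if self.bits[pos] > 0:
                self.bits[pos] -= 1
```

Membership testing is inherited unchanged: a nonzero counter counts as a set bit.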
Update on Routing Change
• Use a CBF in slow memory
– Assists the BF in handling forwarding-table updates
– Easy to add/delete a forwarding-table entry (sketch below)
[Figure: deleting a route decrements counters in the slow-memory CBF, which then refreshes the fast-memory BF]
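A sketch of the update path, assuming for simplicity that the CBF and BF have the same size and hash functions (the next slide relaxes this with a larger CBF), and that each BF bit is simply the nonzero flag of the corresponding counter:

```python
def apply_route_change(cbf, bf, dest, added):
    """Update the slow-memory CBF, then refresh the affected
    fast-memory BF bits from the counters."""
    if added:
        cbf.add(dest)
    else:
        cbf.remove(dest)
    for pos in cbf._hashes(dest):
        bf.bits[pos] = 1 if cbf.bits[pos] > 0 else 0
```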
Occasionally Resize BF
• Under significant routing changes
– The number of addresses in the BFs changes significantly
– Re-optimize the BF sizes
• Use the CBF to assist in resizing the BF
– Large CBF and small BF
– Easy to expand the BF size by contracting the CBF (sketch below)
[Figure: a BF is hard to expand to size 4, but the large CBF is easy to contract to size 4]
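A sketch of the contraction, assuming the target BF size divides the CBF size; with the salted hashing above, taking positions modulo the smaller size keeps lookups consistent, since h mod M' mod m equals h mod m whenever m divides M':

```python
def contract(cbf, m):
    """Fold a large CBF down to a BF of size m (m must divide the
    CBF size). Bit i of the BF is set iff some counter j with
    j % m == i is nonzero, so BF lookups stay consistent with
    the CBF's hash positions taken modulo m."""
    assert len(cbf.bits) % m == 0
    bf = BloomFilter(m, cbf.s)
    for j, count in enumerate(cbf.bits):
        if count > 0:
            bf.bits[j % m] = 1
    return bf
```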
BF-based Router Architecture
Prototype and Evaluation
• Prototype in kernel-level Click
• Experiment environment
– 3.0 GHz 64-bit Intel Xeon
– 2 MB L2 data cache, used as the fast memory of size M
• Forwarding table
– 10 next hops, 200K entries
• Peak forwarding rate
– 365 Kpps for 64-byte packets
– 10% faster than conventional lookup
Conclusion
• Improve packet forwarding for enterprise edge routers
– Use Bloom filters to represent the forwarding table
• Require only a small SRAM
– Optimize usage of a fixed, small memory
• Multiple ways to handle false positives
– Leverage properties of enterprise edge routers
• React quickly to FIB updates
– Leverage a counting Bloom filter in slow memory
Ongoing Work: BUFFALO
• Bloom filter forwarding in large enterprises
– Deploy BF-based switches throughout the network
– Forward all packets on the fast path
• Gracefully handle false positives
– Randomly select a matching next hop
– Techniques to avoid loops and bound path stretch
www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf
Thanks
• Questions?