Bringing Experimenters to GENI with the Transit Portal
Vytautas Valancius, Hyojoon Kim, Nick Feamster
Georgia Tech
(with Jennifer Rexford and Aki Nakao)
Talk Agenda
• Motivation: Custom routing for each experiment
• Demonstration
• How you can connect to Transit Portal
• Experiment Ideas
  – Anycast
  – Service Migration
  – Flexible Peering
• Using Transit Portal in Education
  – Example problem set
• Summary and Breakout Ideas
Networks Use BGP to Interconnect
[Diagram: autonomous systems exchange route advertisements and traffic over a BGP session]
Virtual Networks Need BGP Too
• Strawman
  – Default routes
  – Public IP address
• Problems
  – Experiments may need to see all upstream routes
  – Experiments may need more control over traffic
• Need “BGP”
  – Setting up individual sessions is cumbersome
  – …particularly for transient experiments
[Diagram: GENI experiments connect via BGP sessions to ISP 1 and ISP 2]
Route Control Without Transit Portal
• Obtain connectivity to upstream ISPs
  – Physical connectivity
  – Contracts and routing sessions
• Obtain Internet numbered resources from the authorities
• Expensive and time-consuming!
Route Control with Transit Portal
[Diagram: Experiment 1 and Experiment 2, each behind its own virtual router (A and B), connect through the Transit Portal to ISP1 and ISP2; routes and packets flow between the experiment facility and the Internet]
Full Internet route control to hosted cloud services!
Connecting to the Transit Portal
• Separate Internet router for each service
  – Virtual or physical routers
• Links between the service router and the TP
  – Each link emulates a connection to an upstream ISP
• Routing sessions to upstream ISPs
  – TP exposes a standard BGP route control interface
Basic Internet Routing with TP
• Experiment with two upstream ISPs
• Experiment can re-route traffic over one ISP or the other, independently of other experiments
[Diagram: an interactive cloud service behind a virtual BGP router, with BGP sessions and traffic flowing through the Transit Portal to ISP 1 and ISP 2]
Current TP Deployment
• Server with custom routing software
  – 4 GB RAM, 2x2.66 GHz Xeon cores
• Three active sites with upstream ISPs
  – Atlanta, Madison, and Princeton
• A number of active experiments
  – BGP poisoning (University of Washington)
  – IP Anycast (Princeton University)
  – Advanced Networking class (Georgia Tech)
Demonstration of Transit Portal
Demonstration Setup
[Diagram: a virtual router (private AS 65002) connects over a VPN tunnel to the Georgia Tech Transit Portal (AS 2637, public AS 47065); the client network is 168.62.21.0/24; BGP connectivity is checked with traceroute and the looking-glass server route-server.ip.att.net]
Setting Up Peering with TP
1. Pick a device to act as the virtual router (Linux)
2. Request the needed resources & provide information
   – For tunneling: CA certificate, client certificate & key
   – Get the prefixes that the client will announce
3. Establish the tunnel connection to the Transit Portal
4. Set up a BGP daemon on the virtual router (e.g., Quagga)
5. Make any necessary changes to the routing table
6. Check BGP announcements & connectivity (BGP table) … and you are good to go!
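The BGP daemon setup in step 4 might look like the following minimal Quagga bgpd.conf sketch. The AS numbers and prefix are the demonstration values (private AS 65002, TP public AS 47065, client network 168.62.21.0/24); the tunnel endpoint addresses 10.0.0.1 and 10.0.0.2 are hypothetical placeholders.

```
! bgpd.conf -- minimal sketch for the client's virtual router
! (demo AS numbers and prefix; tunnel addresses are hypothetical)
router bgp 65002
 bgp router-id 10.0.0.2
 network 168.62.21.0/24
 neighbor 10.0.0.1 remote-as 47065
 neighbor 10.0.0.1 description transit-portal-gt
```

Once the session is up, "show ip bgp" in the Quagga shell (vtysh) is one way to inspect the BGP table for step 6.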
Experiments Using Transit Portal
Experiment 1: IP Anycast
• Internet services require fast name resolution
• IP anycast for name resolution
  – DNS servers with the same IP address
  – IP address announced to ISPs in multiple locations
  – Internet routing converges to the closest server
• Available only to large organizations
IP Anycast
• Host the service at multiple locations (e.g., on ProtoGENI)
• Direct traffic to one instance of the service or another using anycast
[Diagram: name service instances in Asia and North America announce anycast routes through Transit Portals to ISP1-ISP4]
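The convergence behavior above can be sketched in a few lines. This is an illustrative model, not TP code: both sites announce the same prefix, and the simplified BGP decision here (shortest AS path) sends each client to the nearer instance. The AS numbers and paths are hypothetical.

```python
# Why IP anycast steers each client to the nearest service instance:
# the same prefix is announced at two sites, and BGP's shortest-AS-path
# tie-breaker picks the closer announcement.

def best_route(routes):
    """Pick the route with the shortest AS path (a simplified BGP decision)."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Hypothetical AS paths for the same anycast prefix, as seen by a client
# in Asia and a client in North America.
routes_seen_in_asia = [
    {"site": "Asia", "as_path": [100, 47065]},
    {"site": "NorthAmerica", "as_path": [100, 200, 300, 47065]},
]
routes_seen_in_na = [
    {"site": "Asia", "as_path": [400, 500, 600, 47065]},
    {"site": "NorthAmerica", "as_path": [400, 47065]},
]

print(best_route(routes_seen_in_asia)["site"])  # -> Asia
print(best_route(routes_seen_in_na)["site"])    # -> NorthAmerica
```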
Experiment 2: Service Migration
• Internet services in geographically diverse data centers
• Operators migrate Internet users’ connections
• Two conventional methods:
  – DNS name re-mapping
    • Slow
  – Virtual machine migration with local re-routing
    • Requires a globally routed network
Service Migration
[Diagram: an active game service migrates between Asia and North America; tunneled BGP sessions through Transit Portals to ISP1-ISP4 keep it reachable from the Internet]
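The migration idea can be sketched as a toy model (an assumption for illustration, not TP code): the "Internet" is a map from prefix to the set of sites announcing it, and a service moves by announcing its prefix at the new site and withdrawing it at the old one.

```python
# Toy model of BGP-based service migration: traffic follows the
# announcement, so moving the announcement moves the service.

internet_rib = {}  # prefix -> set of sites currently announcing it

def announce(prefix, site):
    internet_rib.setdefault(prefix, set()).add(site)

def withdraw(prefix, site):
    internet_rib.get(prefix, set()).discard(site)

PREFIX = "168.62.21.0/24"  # the demo client prefix

announce(PREFIX, "Asia")                 # service starts in Asia
assert internet_rib[PREFIX] == {"Asia"}

announce(PREFIX, "NorthAmerica")         # bring up the new site first...
withdraw(PREFIX, "Asia")                 # ...then withdraw the old one
assert internet_rib[PREFIX] == {"NorthAmerica"}  # traffic now goes to North America
```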
Experiment 3: Flexible Peering
A hosted service can quickly provision capacity in the cloud when demand fluctuates.
Using TP in Courses
Using TP in Your Courses
• Used in the “Next-Generation Internet” course at Georgia Tech in Spring 2010
• Students set up virtual networks and connect directly to TP via OpenVPN (similar to the demonstration)
  – Live feed of BGP routes
  – Routable IP addresses for in-class topology inference and performance measurements
Example Problem Set
• Set up a virtual network with
  – Intradomain routing
  – Hosted services
  – Rate limiting
• Connect to the Internet with Transit Portal
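For the rate-limiting part of the problem set, one option (an assumption on our part, not something the slides prescribe) is a token-bucket filter on the virtual router's egress interface:

```
# Shape egress on eth0 to 1 Mbit/s with the Linux tc token-bucket filter
# (interface name and rates are illustrative)
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
```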
Ongoing Developments
• More deployment sites
  – Your help is desperately needed
• Integrating TP with network research testbeds (e.g., GENI, CoreLab)
• Faster forwarding (NetFPGA, OpenFlow)
• Lightweight interface to route control
Conclusion
• Limited routing control for hosted services
• Transit Portal gives wide-area route control
  – Advanced applications with many TPs
• Open-source implementation
  – Scales to hundreds of client sessions
• The deployment is real
  – Can be used today for research and education
  – More information: http://valas.gtnoise.net/tp
Transit Portal in the News
Breakout Session Agenda
• Q & A
• Demonstration Redux
• Brainstorming Experiments
  – MeasuRouting: Routing-Assisted Traffic Monitoring
  – Pathlet Routing and Adaptive Multipath Algorithms
  – Aster*x: Load-Balancing Web Traffic over Wide-Area Networks
  – Migrating Enterprises to Cloud-based Architectures
Extra Slides
Scaling the Transit Portal
• Scale to dozens of sessions to ISPs and hundreds of sessions to hosted services
• At the same time:
  – Present each client with sessions that have the appearance of direct connectivity to an ISP
  – Prevent clients from abusing Internet routing protocols
Conventional BGP Routing
• A conventional BGP router:
  – Receives routing updates from peers
  – Propagates routing updates about only one path
  – Selects one path to forward packets
• Scalable, but not transparent or flexible
[Diagram: client BGP routers exchange updates and packets with a BGP router connected to ISP1 and ISP2]
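The conventional behavior can be sketched in a few lines (a simplified model, not a real BGP implementation): the router keeps several candidate paths but advertises only its single best one, which is why clients never see the alternatives.

```python
# Simplified model of a conventional BGP router: pick one best path per
# prefix and propagate only that path to clients.

def select_best(paths):
    """Simplified best-path choice: shortest AS path, then lowest peer name."""
    return min(paths, key=lambda p: (len(p["as_path"]), p["peer"]))

# Hypothetical candidate paths for one prefix, learned from two ISPs.
paths = [
    {"peer": "ISP1", "as_path": [10, 20, 30]},
    {"peer": "ISP2", "as_path": [40, 30]},
]

best = select_best(paths)
advertised_to_clients = [best]      # only the chosen path is propagated

print(best["peer"])                  # -> ISP2 (shorter AS path wins)
print(len(advertised_to_clients))    # -> 1
```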
Scaling TP Memory Use
• Store and propagate all BGP routes from ISPs
  – Separate routing tables
• Reduce memory consumption
  – Single routing process with shared data structures
  – Reduces memory use from 90 MB/ISP to 60 MB/ISP
[Diagram: a single routing process with per-ISP routing tables serves the virtual routers of the bulk-transfer and interactive services, connected to ISP1 and ISP2]
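The shared-data-structure idea above can be sketched as attribute interning (an illustrative model, not the TP implementation; the attribute strings are hypothetical): when several routing tables carry routes with identical path attributes, a single routing process can keep one shared copy instead of one copy per table.

```python
# Sketch of sharing route data across per-ISP tables: equal attribute
# tuples are interned into one canonical object instead of duplicated.

shared_attrs = {}  # canonical store of path-attribute tuples

def intern(attrs):
    """Return a single shared instance for equal attribute tuples."""
    return shared_attrs.setdefault(attrs, attrs)

# Two routing tables storing a route with the same AS path attribute.
attrs_table1 = intern(("AS_PATH 47065 2637",))
attrs_table2 = intern(("AS_PATH 47065 2637",))

assert attrs_table1 is attrs_table2  # one shared object, not two copies
assert len(shared_attrs) == 1
```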
Scaling TP CPU Use
• Hundreds of routing sessions to clients
  – High CPU load
• Schedule and send routing updates in bundles
  – Reduces CPU load from 18% to 6% for 500 client sessions
[Diagram: as above, the routing process with per-ISP routing tables serves the virtual routers of the bulk-transfer and interactive services, connected to ISP1 and ISP2]
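The bundling idea can be sketched as follows (an illustrative model with made-up counts, not the TP implementation): queue updates and flush them in batches, so the per-send overhead is paid once per bundle rather than once per update.

```python
# Sketch of update bundling: compare one send per update against one
# send per 50-update bundle over 500 updates.

class Bundler:
    def __init__(self, bundle_size):
        self.bundle_size = bundle_size
        self.queue = []
        self.sends = 0          # number of (expensive) send operations

    def update(self, u):
        self.queue.append(u)
        if len(self.queue) >= self.bundle_size:
            self.flush()

    def flush(self):
        if self.queue:
            self.sends += 1     # one send call carries the whole bundle
            self.queue.clear()

naive = Bundler(bundle_size=1)      # send every update immediately
bundled = Bundler(bundle_size=50)   # send updates 50 at a time

for i in range(500):
    naive.update(i)
    bundled.update(i)
naive.flush()
bundled.flush()

print(naive.sends, bundled.sends)   # -> 500 10
```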
Scaling Forwarding Memory
• Connecting clients
  – Tunneling and VLANs
• Curbing memory usage
  – Separate virtual routing tables with a default route to the upstream
  – 50 MB/ISP -> ~0.1 MB/ISP memory use in the forwarding table
[Diagram: virtual BGP routers for the bulk-transfer and interactive services each have their own forwarding table toward ISP1 and ISP2]