Green Networking
Jennifer Rexford
Computer Science Department
Princeton University
Router Energy Consumption
2
Internet Infrastructure
3
[Diagram: routers interconnected by links]
Router Energy Consumption
• Millions of routers in the U.S.
– Several terawatt-hours per year
– $2B/year electric bill
4
[Chart: U.S. router energy consumption in TWh/year: 1.1 (2000), 2.4 (2005, projected), 3.9 (2010, projected). A line card draws ~100 W; a full router 200-400 W.]
(Source: National Technical Information Service, Department of Commerce, 2000. Figures for 2005 & 2010 are projections.)
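The chart's figures can be sanity-checked with back-of-the-envelope arithmetic. The router count below is an assumption chosen for illustration, not a number from the cited source:

```python
# Back-of-the-envelope estimate of U.S. router energy consumption.
# Assumed inputs: ~1M routers, each drawing ~400 W around the clock.
routers = 1_000_000
watts_per_router = 400          # upper end of the 200-400 W range
hours_per_year = 24 * 365
twh_per_year = routers * watts_per_router * hours_per_year / 1e12
# ~3.5 TWh/year, the same order of magnitude as the chart's projections
```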
Opportunities to Save Energy
• Networks over-provisioned with extra capacity
• Diurnal shifts in traffic due to user behavior
5
Powering Down the Network
• Equipment is not energy proportional
– Energy is nearly independent of load
• Turning off parts of the network
– Entire router
– Individual interface card
• While avoiding transient disruptions
– Data traffic relies on the underlying network
– Failures lead to transient packet loss and delay
6
Shut down routers and interfaces without disruptions
Brief Background on Routers
7
Router Architecture
8
[Diagram: a switching fabric interconnects multiple line cards (the data plane); a route processor controls them (the control plane).]
Data, Control, and Management
9

Plane       Time-scale              Tasks                                         Location
Data        Packet (nsec)           Forwarding, buffering, filtering, scheduling  Line-card hardware
Control     Event (10 msec to sec)  Routing, signaling                            Router software
Management  Human (min to hours)    Analysis, configuration                       Humans or scripts
Data Plane: Router Line Cards
• Interfacing
– Physical link
– Switching fabric
• Packet handling
– Packet forwarding
– Decrement time-to-live
– Buffer management
– Link scheduling
– Packet filtering
– Rate limiting
10
[Diagram: receive and transmit paths between the link and the switch, with a lookup stage.]
Control Plane: Routing Protocols
• Routing protocol
– Routers talk amongst themselves
– To compute paths through the network
• Routing convergence
– After a topology change
– Transient period of disagreement
– Packets lost, delayed, or delivered out-of-order
– Major disruptions to application performance
11
The Rest of the Talk: Two Ideas
• Power down networking equipment
– To reduce energy consumption
– While minimizing disruption to applications
• Power down a router
– Virtual router migration
– Similar to virtual machine migration
• Power down an interface
– Shutting down cables in a bundled link
– Similar to dynamic voltage and frequency scaling
12
VROOM: Virtual ROuters On the Move
Joint work with Yi Wang, Eric Keller, Brian Biskeborn, and Kobus van der Merwe (AT&T)
http://www.cs.princeton.edu/~jrex/papers/vroom08.pdf
Virtual ROuters On the Move
• Key idea
– Routers should be free to roam around
• Useful for many different applications
– Reduce power consumption
– Simplify network maintenance
– Simplify service deployment and evolution
• Feasible in practice
– No performance impact on data traffic
– No visible impact on routing protocols
14
The Two Notions of “Router”
• IP-layer logical functionality, and physical equipment
15
Logical (IP layer)
Physical
Tight Coupling of Physical & Logical
• Root of many network-management challenges (and “point solutions”)
16
Logical (IP layer)
Physical
VROOM: Breaking the Coupling
• Re-mapping a logical node to another physical node
17
Logical (IP layer)
Physical
VROOM enables this re-mapping of logical to physical through virtual router migration.
Case 1: Power Savings
18
• Contract and expand the physical network according to the traffic volume
Case 2: Planned Maintenance
• NO reconfiguration of VRs, NO reconvergence
21
[Diagram: virtual router VR-1 migrates off the physical node under maintenance while its adjacencies to neighbors A and B stay up.]
Case 3: Service Deployment/Evolution
24
• Move (logical) router to more powerful hardware
• VROOM guarantees seamless service to existing customers during the migration
Virtual Router Migration: Challenges
26
1. Migrate an entire virtual router instance
• All control-plane & data-plane processes/states
2. Minimize disruption
• Data plane: millions of packets/sec on a 10 Gbps link
• Control plane: less strict (with routing-message retransmission)
3. Link migration
VROOM Architecture
30
[Diagram: a data-plane hypervisor with dynamic interface binding decouples virtual routers from physical interfaces.]
VROOM’s Migration Process
31
• Key idea: separate the migration of the control and data planes
1. Migrate the control plane
2. Clone the data plane
3. Migrate the links
Control-Plane Migration
32
• Leverage virtual server migration techniques
• Router image
– Binaries, configuration files, etc.
• Memory
– 1st stage: iterative pre-copy
– 2nd stage: stall-and-copy (when the control plane is “frozen”)
[Diagram: the control plane (CP) moves from physical router A to physical router B; the data plane (DP) stays on A for now.]
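The two-stage memory copy can be sketched as follows. This is a toy model (the Router class and page granularity are invented for illustration), not VROOM's actual code:

```python
# Toy model of control-plane memory migration: iterative pre-copy
# while the router keeps running, then a brief stall-and-copy.
class Router:
    def __init__(self, pages):
        self.pages = dict(pages)   # page id -> contents
        self.dirty = set(pages)    # pages not yet copied to the target
        self.frozen = False

    def write(self, pid, data):    # a running control plane dirties pages
        assert not self.frozen
        self.pages[pid] = data
        self.dirty.add(pid)

def precopy_migrate(src, dst, max_rounds=5, threshold=2):
    # Stage 1: iterative pre-copy; re-send pages dirtied in flight
    rounds = 0
    while len(src.dirty) > threshold and rounds < max_rounds:
        batch, src.dirty = src.dirty, set()
        for pid in batch:
            dst.pages[pid] = src.pages[pid]
        rounds += 1
    # Stage 2: stall-and-copy; "freeze" the control plane briefly
    src.frozen = True
    for pid in list(src.dirty):
        dst.pages[pid] = src.pages[pid]
    src.dirty.clear()

src = Router({0: "rib", 1: "bgp-config", 2: "timers"})
dst = Router({})
src.write(1, "bgp-config-v2")      # control plane still running
precopy_migrate(src, dst)
```

The pre-copy rounds bound the amount of state left to transfer during the stall, so the control-plane freeze stays short.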
Data-Plane Cloning
35
• Clone the data plane by repopulation
– Enable migration across different data planes
– Avoid copying duplicate information
[Diagram: the migrated control plane (CP) on physical router B builds a new data plane (DP-new), while DP-old on physical router A keeps forwarding.]
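Repopulation can be sketched as the control plane re-installing routes from its own RIB into the fresh data plane, rather than copying DP-old's state; the RIB layout below is a made-up simplification:

```python
# Sketch of data-plane repopulation: the migrated control plane
# re-installs its routes into the freshly created data plane (DP-new).
def repopulate(rib, dp_new):
    for prefix, next_hop in rib.items():
        dp_new[prefix] = next_hop   # install one forwarding entry
    return dp_new

rib = {"10.0.0.0/8": "if0", "192.168.0.0/16": "if1"}
dp_new = repopulate(rib, {})        # DP-new starts empty
```

Because entries are regenerated rather than copied, the old and new data planes may even be different implementations (e.g., software vs. hardware).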
Remote Control Plane
36
• Data-plane cloning takes time
– Installing 250k routes takes over 20 seconds
• Control & old data planes need to be kept “online”
• Solution: redirect routing messages through tunnels
[Diagram: routing messages are tunneled to the control plane (CP), now on physical router B, while DP-old on physical router A keeps forwarding and DP-new is populated.]
Double Data Planes
39
• At the end of data-plane cloning, both data planes are ready to forward traffic
[Diagram: the control plane (CP) drives DP-new while DP-old remains active.]
Asynchronous Link Migration
40
• With the double data planes, links can be migrated independently
[Diagram: the links to neighbors A and B are moved one at a time from DP-old to DP-new.]
Prototype Implementation
41
• Virtualized operating system
– OpenVZ, supports VM migration
• Routing protocols
– Quagga software suite
• Packet forwarding
– Linux kernel (software), NetFPGA (hardware)
• Router hypervisor
– Our extensions for repopulating the data plane, remote control plane, double data planes, …
Experimental Evaluation
42
• Experiments in Emulab
– On the realistic Abilene Internet2 topology

Experimental Results
43
• Data traffic
– Linux: modest packet delay due to CPU load
– NetFPGA: no packet loss or extra delay
• Routing-protocol messages
– Core router migration (OSPF only)
• Inject an unplanned link failure at another router
• At most one retransmission of an OSPF message
– Edge router migration (OSPF + BGP)
• Control-plane downtime: 3.56 seconds
• Within reasonable keep-alive timer intervals
– All routing-protocol adjacencies stay up
Where To Migrate
44
• Physical constraints
– Latency
• E.g., NYC to Washington D.C.: 2 msec
– Link capacity
• Enough remaining capacity for the extra traffic
– Platform compatibility
• Routers from different vendors
– Router capability
• E.g., number of access control lists (ACLs) supported
• Constraints simplify the placement problem
– By limiting the size of the search space
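The constraints above amount to a feasibility filter over candidate physical routers. A minimal sketch, with all field names and limits invented for illustration:

```python
# Hypothetical placement filter applying the constraints listed above.
def candidates(physical_routers, vr):
    ok = []
    for r in physical_routers:
        if r["latency_ms"] > vr["max_latency_ms"]:
            continue            # latency constraint (e.g., 2 ms NYC-DC)
        if r["free_gbps"] < vr["traffic_gbps"]:
            continue            # enough remaining link capacity
        if r["platform"] != vr["platform"]:
            continue            # platform/vendor compatibility
        if r["max_acls"] < vr["num_acls"]:
            continue            # router capability: ACLs supported
        ok.append(r["name"])
    return ok

vr = {"max_latency_ms": 2, "traffic_gbps": 5,
      "platform": "netfpga", "num_acls": 100}
routers = [
    {"name": "dc1",  "latency_ms": 2, "free_gbps": 10,
     "platform": "netfpga", "max_acls": 500},
    {"name": "chi1", "latency_ms": 9, "free_gbps": 40,
     "platform": "netfpga", "max_acls": 500},
    {"name": "nyc2", "latency_ms": 1, "free_gbps": 3,
     "platform": "netfpga", "max_acls": 500},
]
picks = candidates(routers, vr)
```

Each constraint prunes the candidate set, which is what keeps the placement search space small.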
Conclusions on VROOM
• VROOM: a useful network-management primitive
– Breaks the tight coupling between physical and logical
– Simplifies management, enables new applications
• Evaluation of prototype
– No disruption in packet forwarding
– No noticeable disruption in routing protocols
• Future work
– Migration scheduling as an optimization problem
– Extensions to the hypervisor for other applications
45
Greening Backbone Networks: Shutting Off Cables in Bundled Links
Joint work with Will Fisher and Martin Suchara
46
http://www.cs.princeton.edu/~msuchara/publications/GreenNetsBundles.pdf
Power Down Links and Routers?
• Larger round-trip time (RTT)
• Slow convergence process
47
Bundled Links in Backbone Networks
• Links come in bundles
– Incremental upgrades, equipment costs, …
– Around 2-20 cables per link
48

Powering All Cables is Wasteful
• Only power the cables that are needed
– Reduce energy consumption, without disruption
49
[Chart: links typically run at 30-40% utilization.]
Optimization Problem
• Management-plane optimization problem
– Input: network configuration and load
– Output: list of powered cables
• Integer linear program
• NP-hard → need heuristics
50

minimize  number of powered cables
s.t.      link loads ≤ capacities
          flow conservation carries all traffic demands
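On a toy instance the program can be brute-forced directly, which makes the objective and constraints concrete. The two-path topology, per-cable capacity, and demand below are invented; real instances need an ILP solver or heuristics:

```python
# Brute-force of the cable-powering problem on a toy two-path network:
# split an integer demand across two bundles to minimize powered cables.
from math import ceil

CABLE_CAP = 10                     # capacity per cable
BUNDLE = {"p1": 4, "p2": 4}        # cables available on each path
DEMAND = 25

best = None
for x1 in range(DEMAND + 1):       # traffic sent on path p1
    x2 = DEMAND - x1               # remainder on p2 (flow conservation)
    if x1 > BUNDLE["p1"] * CABLE_CAP or x2 > BUNDLE["p2"] * CABLE_CAP:
        continue                   # load must fit the powered capacity
    cables = ceil(x1 / CABLE_CAP) + ceil(x2 / CABLE_CAP)
    if best is None or cables < best[0]:
        best = (cables, x1, x2)
```

The enumeration is exponential in the network size in general, which is why the talk turns to heuristics.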
Related Tractable Problem
• What if energy were proportional to link load?
• Minimize the sum of link loads
– Rather than the number of powered cables
– Leads to a fractional linear program
• Benefits of this problem
– Computationally tractable
– Upper and lower bounds on power savings
– Starting point for heuristics
51
First Attempt: Naïve Solution
• Always “round up” the fractional solution
• Up to n times worse, where n = # of routers
52
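The blow-up is easy to see on a contrived instance: n lightly loaded links each round up to a whole cable (the value of n below is hypothetical):

```python
# Naive rounding: n links, each carrying 1% of one cable's capacity.
from math import ceil

n = 100
fractional_cables = n * 0.01                       # LP relaxation: ~1 cable
naive_cables = sum(ceil(0.01) for _ in range(n))   # round up: n cables
```

The fractional optimum needs about one cable's worth of capacity in total, yet naive rounding powers all n cables.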
Fast Greedy Heuristic
• Solve the fractional problem and “round up”
• Identify the link with the most “rounding up”
• Round down and remove an extra cable
• Repeat if a feasible solution exists
53

Other heuristics: explore combinations of links
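The greedy loop can be sketched as follows. The fractional LP solve and the feasibility re-check are abstracted behind inputs, since the real versions require a linear-programming solver; the toy capacities at the bottom are invented:

```python
# Sketch of the Fast Greedy Heuristic: round the fractional solution
# up, then greedily remove cables while the traffic still fits.
from math import ceil

def fast_greedy(fractional, feasible):
    """fractional: link -> fractional cable count from the relaxed LP.
    feasible(powered): True if the powered cables can still carry all
    demands (in the real heuristic this re-solves the routing)."""
    powered = {l: ceil(f) for l, f in fractional.items()}   # round up
    # Visit links in decreasing order of how much rounding up they got;
    # keep removing a cable while some feasible routing remains.
    for link in sorted(powered,
                       key=lambda l: powered[l] - fractional[l],
                       reverse=True):
        while powered[link] > 0:
            trial = dict(powered)
            trial[link] -= 1
            if not feasible(trial):
                break
            powered = trial
    return powered

# Toy check: two bundles, 10 units per cable, 13 units of total demand.
frac = {"e1": 1.2, "e2": 0.1}
result = fast_greedy(frac, lambda p: sum(p.values()) * 10 >= 13)
```

On this instance the heuristic powers two cables on e1 and shuts e2 off entirely, beating plain rounding up (three cables).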
Experimental Set-Up
• Measure
– Energy savings and computational time
• Solving the linear program
– AMPL/CPLEX
• Varying
– Offered load and number of cables
• Topologies
– Abilene with measured demands
– Waxman graph with synthetic demands
54
Energy Savings in Abilene
• Energy savings depend on the bundle size
55
[Chart: energy savings (%) vs. bundle size. Annotations: “Turn entire link on or off”; “Similar performance of heuristics”.]
Computation Time
• FGH is suited to real-time computation
– Reoptimize on/off cables during the day
– Other heuristics are expensive for only a small gain
56
Conclusion on Bundled Links
• Power down some cables in a bundle
– Minimize energy consumption
– Without disrupting data traffic
• Design and evaluation of heuristics
– Significant energy savings
– Low computational complexity
– Simple heuristics are quite effective
57
Conclusion of the Talk
• Network energy consumption
– Routers consume a lot of energy
– Routers are not energy proportional
– Selectively powering down is effective
• Two main ideas
– New mechanism: virtual router migration
– New optimization: identify cables to power down
• Future work
– Toward energy-proportional routers
– Network designs that minimize server energy
58