
Published by the IEEE Computer Society. 1089-7801/13/$31.00 © 2013 IEEE. IEEE Internet Computing.

Sustainable Internet

The information and communications technology (ICT) industry has a broad impact on our economy and society due to the widespread use of ICT services empowered by large-scale social networks. Eco-friendly ICT services are, however, underrepresented. Consequently, ICT is currently viewed as a polluter because of its ever-increasing greenhouse gas (GHG) emissions. The current approach to dealing with the green ICT problem focuses on reducing energy consumption at the micro level. Unfortunately, an unconstrained energy-efficiency approach at this level will most likely lead to an overall increase in energy consumption at the macro level, according to the Jevons paradox.1 Energy-aware technology is thus becoming one of the most pressing research areas for sustainable ICT.

Moreover, the renewable energy industry has recently matured to become a utility power generation technology. More than half the total energy generated in Northern Europe comes from renewable sources. However, selling electricity to the power grid can result in waste owing to generation, transmission, and distribution losses. In the US, this waste accounted for more than two thirds of the energy produced in 2008. //reference or URL?// Thus, maximizing onsite renewable power usage — for example, by building ICT facilities close to renewable sources — is a cost-effective solution.

Although recent research has considered powering ICT services via renewable energy,2 experimental models or realizations of energy awareness are rare for wide-area testbeds.

Today's information and communications technology (ICT) services emit an increasing amount of greenhouse gases. Carbon footprint models can enable research into ICT energy efficiency and carbon reduction. The GreenStar Network (GSN) testbed is a prototype wide-area network of data centers powered by renewable energy sources. Through their work developing the GSN, the authors have researched fundamental aspects of green ICT, such as virtual infrastructure; unified management of compute, network, power, and climate resources; smart power control; and a carbon assessment protocol.

Kim-Khoa Nguyen, Mohamed Cheriet, and Mathieu Lemay, École de Technologie Supérieure, University of Quebec

Michel Savoie and Bobby Ho, Communications Research Center Canada

Powering a Data Center Network via Renewable Energy: A Green Testbed


JANUARY/FEBRUARY 2013

Embedding power-management techniques into the ICT environment hasn't matured enough to be commercialized, so corporations aren't yet willing to risk investment in this area. Although recent high-speed research networks, such as the Global Environment for Network Innovations (GENI) in the US and the GÉANT network in Europe,3 are delivering high-performance computing services, they weren't built with smart power control or environmentally aware features. Consequently, researchers lack resources for holistically analyzing the ICT carbon footprint and green strategies.

In this context, the GreenStar Network (GSN) project (http://greenstarnetwork.com) is the first wide-area testbed to realize a green ICT initiative based on the "follow the wind, follow the sun" paradigm. Made with nodes powered almost entirely by renewable energy, the GSN combines local energy efficiency and global power optimization. It virtually migrates data centers between geographically distributed nodes according to the availability of renewable energy. Here, we address some fundamental issues we faced when developing this green testbed. We examine how the GSN produces and provisions renewable energy for data centers, and describe the infrastructure underlying a large-scale green testbed network and how service is provided through user slices. We also explain the need for a carbon quantification protocol for ICT with calculation methods. We implemented several techniques and algorithms to maximize green energy usage and network sustainability. Additionally, although the ISO 14064 standard is widely used to measure GHG emissions in traditionally high-polluting industries, adapting it to ICT will require synergistic solutions with regard to power and performance measurement, as well as network and system operation. In particular, the standard doesn't involve mobility techniques that can effectively reduce ICT's carbon footprint.

Testbed Infrastructure and Services

Canada's high-speed Canarie network (www.canarie.ca/en/network/overview) is the cornerstone of our GSN solution. Relocating an entire virtual data center to harvest renewable energy in different geographical locations requires seamlessly transmitting a large volume of data stored in virtual machine (VM) memory to end users. Additional requirements for the network include elasticity, programmability, and abstraction. Elasticity lets us reorganize the network flexibly when a node is relocated. Programmability refers to network operators' ability to implement new algorithms to adapt network behavior to intermittent renewable energy sources. Finally, abstraction enables unified resource management. We chose virtual network management to build the GSN, because this approach allows different network architectures to coexist and lets us create on-demand independent logical networks on top of different infrastructures.

Network virtualization divides traditional ISPs into two independent entities: the infrastructure provider (InP) manages the physical infrastructure, and the service provider (SP) creates virtual networks by aggregating resources from multiple InPs and offers end-to-end services. Each SP leases resources from one or more InPs to create virtual networks and deploys customized protocols and services, considering each infrastructure's performance, topology, and cost. Recent research testbeds such as GENI or the Federated E-infrastructure Dedicated to European Researchers Innovating in Computing Network Architectures (FEDERICA)3 also adopt the virtualization approach.

Figure 1a illustrates the GSN's physical connections. The underlying 100-Gbps network is all-optical and has recently achieved a world record for high-speed networks. It comprises two layers — a reconfigurable optical add-drop multiplexer (ROADM) and synchronous optical networking (Sonet) — and provides lightpaths with user-controlled capability. On top of the Canarie network (InP), the GSN (SP) deploys six lightpaths to link distributed data centers. Connection to the US and Europe occurs via the StarLight, GÉANT, and NetherLight networks. The Canadian section has the largest node deployment (six), linked at layer 2 by optical cross-connects. This section is connected to nodes in Ireland, Iceland, Spain, Belgium, the Netherlands, and the US. Two nodes in China and Egypt are connected on a sporadic basis.

Network virtualization tools — namely Argia, Ether, and Manticore — establish connections between Canadian nodes, as described elsewhere.4,5 We achieve federation with international nodes at layer 3 via virtual routers, each of which provides a lightpath tunnel.


Figure 1. Physical infrastructure of the GreenStar Network (GSN) and its user slices. We can see (a) the physical network connection of nodes on top of the Canarie network and (b) network slices controlled by OpenFlow in the GSN.



All servers in the GSN are virtualized using hypervisors, and users rent computing power through VMs. The GSN network slice service lets users actively create and manage their VM networks (Figure 1b). Companies such as Amazon (http://aws.amazon.com/vpc) have recently offered a similar concept, called the virtual private cloud (VPC), in the commercial market. However, links in a VPC are fixed when it's created, whereas a slice in the GSN is scalable and flexible thanks to a software-defined network (SDN) architecture that uses OpenFlow technology.6 In typical clouds, the hypervisor directly links VMs to a physical network interface card (NIC) of the server, which then connects to a physical data center switch. Each server in the GSN, on the other hand, is equipped with a built-in, software-based virtual smart switch (called a vSwitch). Thus, VMs are connected to their vSwitch before they're connected to the physical switch. This vSwitch lets the network isolate or group VMs running on a server according to user demand.

A software-based OpenFlow controller running on a dedicated VM handles the entire network's control plane. It controls the vSwitches' flow tables in such a way that all VMs belonging to a user slice are put in a virtual LAN (VLAN), which can span multiple vSwitches. Users configure their slices through a Web-based graphical interface (see http://greenstarnetwork.com) that translates and then relays user requests to the controller through the GSN cloud middleware. When a VM is moved among servers (the migration process is described elsewhere4), the controller dynamically reconfigures vSwitches so that the VM network slice remains unchanged (for example, in Figure 1b, slice 1 is added to the vSwitch in host 3 after a migration). A VM migration takes roughly two minutes on a 10-Gbps connection with no traffic loss. Virtual routers can configure dynamic tunnels when moving VMs between Canadian and international nodes. We also developed flow-classification algorithms for the controller to provide quality-of-service (QoS) levels to different user categories.
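The slice bookkeeping the controller performs can be sketched as a small model. Everything here — the `SliceController` class, the slice and VLAN identifiers, and the host names — is invented for illustration; the real GSN controller programs OpenFlow flow tables on the vSwitches rather than updating Python dictionaries.

```python
class SliceController:
    """Toy model of the GSN control plane: each user slice maps to a VLAN,
    and migrating a VM re-pins it to the destination host so the slice's
    VLAN now spans that host's vSwitch too (as in Figure 1b)."""

    def __init__(self):
        self.slice_vlan = {}   # slice name -> VLAN id
        self.vm_location = {}  # vm name -> (host, slice name)

    def create_slice(self, name, vlan_id):
        self.slice_vlan[name] = vlan_id

    def attach_vm(self, vm, host, slice_name):
        self.vm_location[vm] = (host, slice_name)

    def migrate_vm(self, vm, new_host):
        """Move a VM; returns the VLAN the destination vSwitch must carry.
        A real controller would push updated flow-table entries here."""
        _, slice_name = self.vm_location[vm]
        self.vm_location[vm] = (new_host, slice_name)
        return self.slice_vlan[slice_name]

    def vlans_on_host(self, host):
        """VLANs a physical vSwitch must trunk, given the VMs it hosts."""
        return sorted({self.slice_vlan[s]
                       for _, (h, s) in self.vm_location.items() if h == host})
```

Replaying the Figure 1b scenario: attaching VM 3 to slice 1 on host 1 and then migrating it to host 3 makes slice 1's VLAN appear on host 3's vSwitch without the slice itself changing.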

Power and Environmental Control

Physically, a GSN green node (see Figure 2a) consists of the following special-purpose equipment:

• a layer-2 switch,
• servers based on the Intel E5500 with virtualization capability,
• a power distribution unit (PDU) with associated temperature and humidity sensors, and
• a solar-powered system (SPS) or wind-powered system (WPS).

Figure 2. Physical architecture of a solar node. We can see (a) the equipment used for a solar node and (b) solar panels and the outdoor enclosure at the Ottawa node.

The SPS includes solar panels, a charge controller, an inverter, and battery banks. In the WPS, the solar panels are replaced with a wind turbine. Remote control and monitoring is required for both SPS and WPS. The Outback MATE communications device controls the solar battery and the inverter — for example, it switches between solar power and the power grid. The Perle IOLAN server connects serial ports to Ethernet ports, letting computerized applications control power devices. Both MATE and IOLAN are software-virtualized and exported as cloud resources in this project.

A node’s logical architecture comprises three planes: the power plane deals with power gener-ation and provisioning, the data plane provides data transfer and hosting services, and the con-trol plane includes monitoring, reporting, and signaling functions. We use cloud middleware to manage these planes in a unified fashion.5

Nodes are hosted on a rack-mount structure in an outdoor, climate-controlled enclosure equipped with an air conditioner for summer and a heater for winter (see Figure 2b). A key issue this enclosure faces is keeping humidity inside the container within acceptable levels. Solutions include an additional ventilation system or occasionally triggering the heater.

To collect power and environmental data, we virtualize each node's physical device using a software tool and then represent it as a cloud resource.5 These resource instances communicate with devices through the Telnet, Secure Shell (SSH), and Simple Network Management Protocol (SNMP) protocols, parse commands, and decide when to perform appropriate actions. The virtualization approach lets other resources or services use a given resource, enabling auto-management processes.

Although all GSN nodes have power generators, renewable energy wasn't intended to be their only power source. Indeed, the GSN is predicated on the assumption that renewable power availability will wax and wane in relation to the sun and wind. The GSN achieves continuous data center operation, despite fluctuating power availability, using controlled VM migration between nodes. For example, when solar power dwindles in a node, the network relocates services to other nodes; if, for any reason, this migration fails or occurs too late, the node will switch to the power grid so that these VMs will continue to run normally until they can be migrated.

We implemented automated algorithms that decide when to dynamically migrate and consolidate VMs among servers within data centers, meeting workload requirements while maximizing renewable energy usage. These algorithms address several key issues, including when to trigger VM migrations and how to select alternative hosts to achieve optimal VM placement and avoid service outages.
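One way to sketch this trigger-and-placement logic is a single best-fit-decreasing pass over the nodes. The function name, the VM/node data structures, and the 20 percent safety reserve are assumptions for illustration only — the GSN's production algorithms also weigh outage risk and migration cost, which this sketch omits.

```python
def plan_migrations(vms, nodes, reserve=0.2):
    """Best-fit-decreasing sketch: when a node's renewable supply drops
    below its load plus a safety reserve, move its largest VMs to the
    node whose remaining renewable headroom is smallest but sufficient.

    vms:   {vm_name: (current_node, power_draw_w)}
    nodes: {node_name: current_renewable_supply_w}
    Returns a list of (vm, source, target) moves; VMs that cannot be
    placed stay put, and that node falls back to the grid.
    """
    load = {n: 0.0 for n in nodes}
    for vm, (node, p) in vms.items():
        load[node] += p

    moves = []
    for node in nodes:
        cap = nodes[node] * (1 - reserve)
        # Largest-first: fewer migrations free the most renewable headroom.
        # (Sketch: resident set is snapshotted once per node.)
        resident = sorted((p, vm) for vm, (n, p) in vms.items() if n == node)
        while load[node] > cap and resident:
            p, vm = resident.pop()  # largest remaining VM
            fits = [(nodes[t] * (1 - reserve) - load[t], t)
                    for t in nodes if t != node
                    if load[t] + p <= nodes[t] * (1 - reserve)]
            if not fits:
                break  # no green headroom anywhere: keep VM, use grid backup
            _, target = min(fits)  # best fit = tightest sufficient headroom
            moves.append((vm, node, target))
            load[node] -= p
            load[target] += p
    return moves
```

For instance, with an Ottawa node whose solar supply has fallen below its load and a Montreal node with hydro headroom, the pass moves Ottawa's largest VM to Montreal and stops as soon as Ottawa fits under its reserve threshold.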

Carbon Assessment: Measurements and Reporting

A key challenge green testbeds face is how to accurately quantify and report GHG reductions. The GSN determines reductions resulting from both renewable energy usage and VM relocation by processing data at two levels. The first level, called local fast data acquisition and collection, involves physical, cloud-resource-controlled devices. The GSN collects power consumption data from the PDU using highly accurate power, temperature, and humidity sensors. The MATE device reports power generation from solar panels and wind turbines. Each cloud resource controls a physical device and measures a set of environmental metrics, as Table 1 shows.

Using cloud resources (such as compute resources for the server or network resources for a network element), the GSN can measure and control metrics such as CPU, memory, virtualization capacity, VM metrics, VPN, and bandwidth. Unlike existing middleware in the market — such as OpenNebula or OpenStack, which focus on the IP network — the GSN's network resources cover metrics at all three network layers.

The second level of data processing calculates power consumption at a finer granularity — namely, VMs and migration traffic. We can derive the power consumption of a VM running on a host from the host's capacity and power and the VM's resource requirements, as follows:

\[
P_V = \left( \alpha \times \frac{M_V}{M_H} + \beta \times \frac{U_V}{U_H} + \gamma \times \frac{O_V}{O_H} \right) \times P_H, \quad (1)
\]

where $P_H$ is the host's maximum power consumption (measured by the PDU resource using stress tools); $M_V$, $U_V$, and $O_V$ are the VM's memory, number of CPUs, and I/O capacity, respectively; $M_H$, $U_H$, and $O_H$ are the host's memory capacity, CPU, and I/O (given by the compute resource), respectively; and $\alpha$, $\beta$, and $\gamma$ are three constants for memory, CPU, and I/O power consumption.7
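Equation 1 translates directly into code. The function below mirrors the paper's symbols; the default values of the α, β, and γ weights are illustrative placeholders, not the calibrated constants from reference 7.

```python
def vm_power(p_host_max, mem_vm, mem_host, cpus_vm, cpus_host,
             io_vm, io_host, alpha=0.4, beta=0.5, gamma=0.1):
    """Estimate a VM's power draw per Equation 1: a weighted sum of the
    VM's share of host memory, CPU, and I/O, scaled by the host's maximum
    power. alpha/beta/gamma defaults are illustrative, not calibrated."""
    share = (alpha * mem_vm / mem_host
             + beta * cpus_vm / cpus_host
             + gamma * io_vm / io_host)
    return share * p_host_max

# A VM using half of every host resource draws half the host's peak power:
print(vm_power(400.0, 32, 64, 4, 8, 50, 100))  # 200.0
```

Because the weights sum to 1 here, a VM that fully occupies its host is charged exactly the host's maximum power, which keeps per-VM accounting consistent with the PDU measurement.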

In reality, a server is powered off (on standby) if it doesn't host any VMs. This standby state consumes a power $P_H^{Off}$. When the network moves a VM to a server that is off, a power $P_H^{On}$ is required to restart the server. Both $P_H^{Off}$ and $P_H^{On}$ are measured by the PDU resource and added to each server's total consumption according to the server's status. Similarly, when all of a data center's hosts are off, a power $P_D^{Off}$ is consumed to turn off the data center (including its networking gear and cooling). A power $P_D^{On}$ is required to restart the data center.

We estimate the power consumed by the underlying network for a VM migration between two nodes through bandwidth capacity, the power of core switches between the nodes, and the VM's size (in memory). Indeed, data centers in the GSN are connected directly to core switches without passing through an access network. Assuming that all switches are the same type (that is, the same as those in the Canarie network), we can derive a migration's power consumption as follows:8

\[
P_m = 2\mu \times \sum_{S=1}^{L} \frac{M_V}{B_S} \times P_S, \quad (2)
\]

where $L$ is the number of switches between the two nodes (determined by the network manager); $M_V$ is the VM's size (measured by the compute resource); $B_S$ and $P_S$ are a switch's bandwidth capacity and power (given by the network manager), respectively; and the factor $\mu$ accounts for the power requirements for cooling and redundancy, which are provided with switch configurations. Note that current switches aren't load-proportional, so their consumption is constant even if they run under their full switching capacity.

The carbon footprint of a data center, the network, or a user slice is the product of the power consumption and the energy source's emission factor (that is, tons of CO2 per kWh). The emission factor of sources, including the power grid in Canada and worldwide, is available elsewhere.9 Note that our research focuses only on the project's operational phase. Footprints also result from the project's manufacturing and retirement phases, which we'll consider in future work.
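Equation 2 and the footprint product can be sketched the same way. The helper names are ours, not part of the GSN middleware, and the value of μ (cooling and redundancy overhead), like every numeric input in the usage comment, is a placeholder rather than a Canarie figure.

```python
def migration_power(mem_vm_gbits, bw_gbps, p_switch_w, n_switches, mu=1.5):
    """Migration cost per Equation 2, for L identical core switches:
    P_m = 2*mu * sum_{S=1..L} (M_V / B_S) * P_S.
    mu (cooling/redundancy overhead) defaults to a placeholder value."""
    return 2 * mu * n_switches * (mem_vm_gbits / bw_gbps) * p_switch_w

def carbon_footprint_kg(energy_kwh, emission_factor_kg_per_kwh):
    """Footprint = energy consumed times the source's emission factor."""
    return energy_kwh * emission_factor_kg_per_kwh
```

For example, moving a 64-Gbit VM image over 10-Gbps links through four identical 3-kW switches gives `migration_power(64, 10, 3000, 4)`; multiplying the resulting energy by a grid-specific emission factor via `carbon_footprint_kg` yields the migration's share of the slice's footprint.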

Based on the aforementioned calculations, the Canadian Standards Association built and released the GSN Carbon Measurement Protocol10 to help in reporting and verifying emission reductions resulting from the delivery of ICT services with low GHG emissions. This protocol's scope involves quantifying emission reductions that ICT services achieve, either by moving to a lower-carbon environment or by improving workload efficiency. It defines two types of ICT project activities. The first type includes activities wherein an ICT facility reduces emissions by changing its environment, such as improving

Table 1. Environmental metrics (an incomplete set) measured by cloud resources.

Power distribution unit (PDU) resource:
• Outlet current: Electric current of a PDU outlet (consumed by electrical equipment)
• Outlet voltage: Voltage of a PDU outlet
• Outlet status: Status of a PDU outlet (on/off)

Power resource:
• AC mode: Whether the battery is selling or buying electricity to/from the grid
• Photovoltaic (PV)/wind current: Generated electric current of the solar PV or wind turbine
• PV/wind voltage: Generated voltage of the solar PV or wind turbine
• Battery voltage: Solar/wind battery voltage

Climate resource:
• Temperature: Temperature in the enclosure (Celsius)
• Humidity: Humidity in the enclosure (%)


efficiency or changing the energy source to renewable energy. The second type comprises activities that achieve reductions by improving ICT service delivery — for example, by migrating services from one environment to another.

To illustrate the reporting process, we investigate a use case in which an ICT project reduced emissions from a scientific application, called GeoChronos,4 from June to September 2011 through an increase in workload efficiency and migration to a green node (type 2). Initially, GeoChronos operated on eight blade servers with 480 Gbytes of RAM and consumed a total power of 500 W in the Calgary data center. The GSN achieved real emission reductions by moving the GeoChronos application from Calgary's coal-powered grid to a data center in Kelowna that's powered by hydroelectricity. According to the protocol, we can estimate emission reductions as the difference between the project and the baseline. The baseline is the hypothetical case in which we assume that the application is running at the initial location (that is, Calgary) during the project period. The protocol requires that we report the following measurements:

• the amount of energy used by equipment at Calgary (using Equation 1),

• the emission factor9 of electricity from the grid in Calgary,

• the data center’s power usage effectiveness (PUE),10

• the emission factor8 of the power source (hydroelectricity) in Kelowna, and

• the amount of power the underlying network uses between Calgary and Kelowna (using Equation 2).

We calculate the emission reductions to be 2,412.63 kg of CO2 over the 13-week project, which is lower than our initial estimation,4 probably because of the data center's high PUE (> 2).
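The baseline-versus-project comparison can be sketched numerically. The article reports only the 500-W load and the 13-week period; the PUE, the emission factors, and the migration-energy term below are illustrative guesses, so the result will not match the audited 2,412.63 kg figure.

```python
HOURS = 13 * 7 * 24  # the 13-week project period, in hours

def emissions_kg(power_w, pue, factor_kg_per_kwh, hours=HOURS):
    """Facility emissions: IT power scaled by PUE, times the grid factor."""
    return (power_w / 1000.0) * pue * hours * factor_kg_per_kwh

# Baseline: GeoChronos stays on Calgary's coal-heavy grid.
# PUE and emission factors are placeholders, not the audited values.
baseline = emissions_kg(500, pue=2.1, factor_kg_per_kwh=0.82)

# Project: the same load in Kelowna on hydroelectricity, plus an assumed
# 5 kg of CO2 for the one-time migration energy (Equation 2).
project = emissions_kg(500, pue=2.1, factor_kg_per_kwh=0.01) + 5.0

reduction = baseline - project  # roughly 1.85 tonnes under these assumptions
```

The structure, not the numbers, is the point: the reduction is dominated by the gap between the two grids' emission factors, while the network's migration energy is a small one-time correction.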

Simulations and Experiments

We conducted simulations and real experiments to demonstrate effective carbon reductions as regards the algorithms on the testbed, the feasibility of a renewable-energy-powered testbed, and the "follow the wind, follow the sun" paradigm.

Figure 3a illustrates a simulation scenario with 13 data centers located in seven cities around the world, hosting VMs under sun movement and under the random movement of several wind streams (see http://greenstarnetwork.com).11 The simulation imported renewable power-generation patterns from data collected in GSN nodes. The GSN team formulated the carbon-reduction problem for local data center consolidation and global network optimization. We propose genetic and best-fit-decreasing algorithms to solve the problem, which is proven NP-hard with multiple metrics. Simulations point out that migration-based WAN optimization gives a 59 percent greater carbon reduction compared to local data center consolidation.11 Together with the use case presented in the previous section, our simulations and calculations show significant emission reductions thanks to the testbed.

We also performed real-time experiments to demonstrate that data center service can be entirely powered by renewable energy for up to three months. Figure 3b shows the electricity generated by a solar photovoltaic (PV) system in Ottawa during the experimental period. Figure 3c shows service states if no relocation occurs. The "ON" state indicates that service is powered by solar energy, while the "OFF" state means that it takes electricity from the grid. As the figure illustrates, the service must frequently switch from the solar PV to the grid during winter. Figure 3d shows service states when VMs are relocated (using a best-fit-decreasing algorithm). Because the virtual data center has migrated to alternate nodes when solar energy dwindles in Ottawa, the period during which service is powered by renewable energy increases significantly compared to Figure 3c. However, the power grid is still needed during the second half of January. This suggests that using only solar PV isn't viable during the winter. A better solution would be to use mixed solar-wind generators. Such an experiment shows the potential for powering data center service permanently via renewable energy.

We also conducted an intensive experiment in a one-month period (March 2012) with three nodes in the Canadian section of the GSN to validate the "follow the sun" paradigm. Results show that daily migrations usually trigger at about 10 p.m. to move VMs from Ottawa to Calgary (both powered by solar energy), and then at midnight to Montreal (powered by hydroelectricity). VMs move back from Montreal to


Figure 3. Power and data center state from October 2010 to January 2011. We can see (a) our live simulation; (b) the electrical current generated by the solar photovoltaics (PV) at the Ottawa node; (c) the data center service without relocation; and (d) the data center service with relocation (ON: solar energy used; OFF: power grid used).



Ottawa at around 9 a.m. This almost coincides with the difference between the three time zones, if we consider the battery's charging and discharging time. According to Environment Canada, in March, sunset is at roughly 7 p.m., but electricity in the battery still keeps a data center functional for three hours (until 10 p.m.); sunrise is at about 7 a.m., but we need up to two hours (until 9 a.m.) to get sufficient sunlight power and fully recharge the battery. This experiment thus demonstrates that the "follow the sun" paradigm worked as expected.

Note that in this project, we ignore the cost of the WAN that interconnects the data centers. However, as next-generation networks such as Internet2 or GÉANT grow, the portion of networking in the overall cost model will likely shrink, making the green data center network realistic. VM migrations would also be costly, especially for high-latency networks. So, our algorithm attempts to minimize migrations in the network by keeping VMs running on data centers as long as possible.

Network traffic would also increase as VMs are relocated, resulting in an increase in power consumption. However, because optical networks consume much less power than data centers,5 overall emissions are effectively reduced when VMs are relocated to greener data centers. Keeping data centers off when renewable sources (wind or sun) aren't available could be economically inefficient. Nevertheless, recent research determined that a majority of servers in current data centers operate at only 4 percent average utilization.12 Thus, if all renewable-energy-powered nodes run at their full capacity, our testbed would be acceptably efficient. In addition, if users are concerned about QoS when VMs are relocated, GSN cloud middleware lets them mark a "fix" label on their VMs. Such VMs aren't considered for relocation, and are therefore powered by the grid when a renewable energy source isn't available. In reality, our experiments with a high-quality video-streaming application show negligible QoS degradation when moving VMs from Europe to Canada over the GSN.

Teamed with worldwide research, the GSN provides a scalable environment for measuring ICT services' GHG emissions, as well as experimenting with and simulating carbon-reduction algorithms. We recently witnessed the successful launch of several international collaborations with the GSN at their core, including the European Seventh Framework Programme (FP7) Mantychore project and the US GreenLight and the New York State Energy Research and Development Authority (NYSERDA) projects (see the "Research into Green ICT" sidebar for other examples). In the next phase of the GSN project,

Research into Green ICT

Since 2011, large-scale experiments and simulations have been carried out in the GreenStar Network (GSN)

related to networking, measurements, and carbon optimiza-tion. Researchers have reported various results, such as vir-tualization techniques for the mobility paradigm,1 theoretical simulations for power consumption,2 and intercontinental con-nections and cloud middleware.3

Note that, along with greening information and communi-cations technology (ICT) services through data centers, recent research is also interested in greening networks, with respect to energy efficiency in traffic processing. For example, various projects undertaken at Berkeley Labs4 deal with greening Eth-ernet by adapting link rates to effective traffic levels because higher data rates require dramatically more power. Europe’s ECONET (low Energy Consumption Networks) project aims to aggressively modulate power consumption in network devices according to actual workloads and service require-ments. However, the portion of power that optical core networks consume is smaller than data center consumption

(servers, in particular) for the same volume of data processed.5 At the latest ITU Symposium on ICT, the Environment, and Climate Change, we saw many efforts toward greening data centers; among them, the GSN is the only initiative that addresses a large-scale data center network.

References1. M. Lemay et al., “Convergence of Cloud Computing and Network Virtu-

alization: Towards a Neutral Carbon Network,” IEEE Internet Computing,

vol. 16, no. 6, 2012.

2. F.F. Moghaddam et al., “Low Carbon Virtual Private Clouds,” Proc. IEEE

Cloud Computing Conf., IEEE, 2011, pp. 259–266.

3. K.K. Nguyen et al., “Environmental-Aware Virtual Data Center Network,”

J. Computer Networks, vol. 56, no. 10, 2012, pp. 2538–2550.

4. C. Gunaratne et al., “Reducing the Energy Consumption of Ethernet with

Adaptive Link Rate (ALR),” IEEE Trans. Computers, vol. 57, no. 4, 2008,

pp. 448–461.

5. J. Baliga et al., “Green Cloud Computing: Balancing Energy in Processing,

Storage, and Transport,” Proc. IEEE, vol. 99, no. 1, 2011, pp. 149–167.

IC-17-01-Nguy.indd 10 11/20/12 5:49 PM

Page 10: Powering a Data Center Network via Renewable Energy: A Green Testbed

Powering a Data Center Network via Renewable Energy: A Green Testbed

JANUARY/FEBRUARY 2013 11

we’ll consider the carbon footprint and QoS/quality-of-experience (QoE) models of large-scale applications, such as telecommunications- grade services.

Acknowledgments
This research is funded by Canarie. We thank all members and partners for their contributions.

References
1. J. Polimeni et al., "The Jevons Paradox and the Myth of Resource Efficiency Improvements," Earthscan, 2008.
2. W. Van Heddeghem et al., "Distributed Computing for Carbon Footprint Reduction by Exploiting Low-Footprint Energy Availability," J. Future Generation Computer Systems, vol. 28, no. 2, 2012, pp. 405–414.
3. P. Szegedi et al., "With Evolution for Revolution: Managing FEDERICA for Future Internet Research," IEEE Comm., vol. 47, no. 7, 2009, pp. 34–39.
4. M. Lemay et al., "Toward a Zero-Carbon Network: Converging Cloud Computing and Network Virtualization," IEEE Internet Computing, vol. 16, no. 6, 2012, pp. 51–59.
5. K.K. Nguyen et al., "Environmental-Aware Virtual Data Center Network," J. Computer Networks, vol. 56, no. 10, 2012, pp. 2538–2550.
6. N. McKeown et al., "OpenFlow: Enabling Innovation in Campus Networks," ACM SIGCOMM Computer Comm. Rev., vol. 38, no. 2, 2008, pp. 69–74.
7. A. Kansal et al., "Virtual Machine Power Metering and Provisioning," Proc. ACM Symp. Cloud Computing, ACM, 2010, pp. 39–50.
8. J. Baliga et al., "Energy Consumption in Optical IP Networks," J. Lightwave Technology, vol. 27, no. 13, 2009, pp. 2391–2403.
9. "Emission Factors from Canada's Greenhouse Gas Inventory," Environment Canada, 2010; http://ec.gc.ca.
10. "ICT Protocol — Version 1: ICT Greenhouse Gas Reduction Project Protocol: Quantification and Reporting," CSA Group, 2012; http://csa.ca.
11. F.F. Moghaddam et al., "Low Carbon Virtual Private Clouds," Proc. IEEE Cloud Computing Conf., IEEE, 2011, pp. 259–266.
12. "Green Procurement Initiative: Nine Lessons in Greening IT," Greening Greater Toronto Green Exchange, 2011; www.greeninggreatertoronto.ca.

Kim-Khoa Nguyen is a research fellow at École de Technologie Supérieure, University of Quebec. His research includes green information and communications technology (ICT), cloud computing, the smart grid, router architecture, and wireless networking. Nguyen has a PhD in electrical and computer engineering from Concordia University. He's the key architect of the GreenStar Network project and a member of the Synchromedia Consortium. Contact him at [email protected].

Mohamed Cheriet is a full professor in the Automation Engineering Department at École de Technologie Supérieure, University of Quebec. His expertise includes document image analysis, OCR, mathematical models for image processing, pattern classification models, and learning algorithms, as well as perception in computer vision. Cheriet has a PhD in computer science from the University of Pierre et Marie Curie. He cofounded the Laboratory for Imagery, Vision, and Artificial Intelligence (LIVIA) and founded and directs the Synchromedia Consortium (Multimedia Communication in Telepresence) at the University of Quebec. Contact him at [email protected].

Mathieu Lemay is a PhD candidate in the Synchromedia Consortium at École de Technologie Supérieure (ETS), University of Quebec, and is the founder, president, and CEO of Inocybe Technologies. His main research interests are in virtualization, network segmentation, service-oriented architectures, and distributed systems. Lemay has an MS in optical networks from ETS. He's currently involved in green IT and leads the infrastructure as a service (IaaS) Framework open source initiative. Contact him at [email protected].

Michel Savoie is the research program manager for the Broadband Applications and Optical Networks group of the Broadband Network Technologies Research Branch at the Communications Research Center Canada (CRC). His expertise is in broadband systems and related technologies such as IaaS, green ICT, application-oriented networking, advanced IP, and WDM-based optical networks. Savoie has an MSc in electrical engineering from the University of New Brunswick. He's involved with the High-Performance Digital Media Network (HPDMnet) research initiative and the GreenStar Network project. Contact him at [email protected].

Bobby Ho is a research engineer with the Broadband Applications and Optical Networks group at the Communications Research Center Canada, where he's responsible for operating and maintaining the Broadband Applications and Demonstration Laboratory (BADLAB), and for setting up and conducting R&D experiments for demonstrating and evaluating emerging telecommunications and networking protocols and technologies. Ho has a //highest degree?// in //subject?// from //institution?//. He's a technical team member of the GreenStar Network project. Contact him at [email protected].
