
Datacenters in the Desert


IEEE Technology and Society Magazine | Summer 2011 | 1932-4529/11/$26.00 © 2011 IEEE

Date of publication: 8 June 2011

Digital Object Identifier 10.1109/MTS.2011.941329

N. VIJAYKUMAR


Sweltering heat, with temperatures reaching 45°C (113°F) and above, and humidity of around 60-70%, is typical of the environmental conditions in desert areas of the Middle East, vast stretches of Arizona in the U.S., and belts of the African continent. In such climates it becomes especially difficult to operate datacenters: a great deal of power is required to regulate temperature and humidity to run an information technology (IT) infrastructure in the desert.

Does this mean such areas are not conducive to building datacenters? On the contrary, there has been an increasing trend to build datacenters in desert areas. Doing so comes with its own set of design challenges and complexities.


For example, what is the best way to maintain the temperature within the datacenter at around 20°C (68°F), which may be much lower than the ambient temperature outside? What structural modifications need to be built in? What techniques should be used to regulate the humidity?

On the plus side, the availability of real estate, an abundance of nonconventional energy sources (mainly solar and wind), and the geotectonic stability of the regions are a few of the advantages of such locations. But it is a Herculean task to build and maintain datacenters in such locations, and to make them environmentally friendly and green.

Forbes magazine reports that many large corporations and banks have built datacenters in the hot areas of Arizona in the U.S. Companies building U.S. desert datacenters include JP Morgan Chase, United Airlines, Bank of America, State Farm Insurance, and Toyota [1].

Green Datacenters

A datacenter is the nerve center of any IT enterprise. Datacenters have become such an integral part of a business that any risks associated with them are perceived as business risks. Gone are the days when a datacenter was considered a silo and a cost center. Today datacenters are being treated as business units and profit centers. With more and more strategic importance being given by business to sustainability and green initiatives, a datacenter, which is one of the biggest consumers of power, becomes a prime target for "greenification." Rapid growth of the IT infrastructure has led to substantial increases in power consumption by datacenters. This in turn triggers additional cooling and power requirements to keep the infrastructure up and running.

The supply side challenges are equally alarming. Diminishing sources of reliable energy, reduced availability and increased costs of power, and ever-increasing real estate prices are forcing organizations to look at alternate methods to improve datacenter efficiencies. With the increased importance of Green IT and carbon emissions, organizations are beginning to re-evaluate strategies for environmental sustainability. Thus was born the Green Datacenter, which seeks to reduce the carbon footprint of an IT powerhouse.

According to a Gartner report, "If 'greening' the data center is the goal, power efficiency is the starting point but not sufficient on its own" [2]. It is not just reducing power and cooling that makes a datacenter green. The project has to be viewed holistically, including the IT infrastructure, management aspects, etc.

Let us first analyze what happens within a datacenter with respect to power consumption. A datacenter runs IT equipment (servers, storage), power equipment (uninterruptible power supplies (UPS), generators), heating, ventilation, and air conditioning (HVAC), and auxiliary equipment such as lighting and access controls. All of these consume power and emit heat. Moreover, there are transmission and utilization losses. The success of greening the datacenter lies in making equipment run at an optimum, consuming less power, emitting less heat, and cutting down losses. This may be achieved by fine-tuning existing designs, by redefining some processes, and by replacing some legacy equipment with updated technology that is green by design.

According to the Green Grid, a nonprofit organization seeking to improve energy efficiency in datacenters, around 40% of the total energy consumed in a datacenter goes towards cooling and another 24% towards power equipment [3]. Fig. 1 shows the breakdown.

Hence it is imperative that, in order to achieve a green datacenter, HVAC and power equipment be made to run at optimal levels and heat loss be minimized. This directly impacts the Power Usage Effectiveness (PUE) factor, an international benchmark of how green a datacenter is.
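PUE is the ratio of total facility power to the power delivered to IT equipment, so a value close to 1.0 means almost no overhead. As a rough illustration (not from the article itself), the Python sketch below derives the PUE implied by the Fig. 1 breakdown; the percentages come from the figure, and everything else is assumed for the example.

# Rough sketch: PUE implied by the Fig. 1 breakdown (Green Grid [3]).
# PUE = total facility energy / IT equipment energy.
breakdown = {                  # share of total facility energy, from Fig. 1
    "it_equipment": 0.30,
    "hvac": 0.41,
    "power_equipment": 0.24,
    "lighting": 0.05,
}
total_share = sum(breakdown.values())          # 1.0 by construction
pue = total_share / breakdown["it_equipment"]  # ~3.3 for this breakdown
print(f"Implied PUE: {pue:.2f}")

A facility with this profile spends more than two watts of overhead for every watt of useful IT load, which is precisely what the design measures discussed below aim to reduce.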

Challenges of Building Datacenters in Deserts

Building datacenters in hot and humid areas has major challenges. Three of the most important are temperature, humidity, and dust.

Extreme Ambient Temperatures

In the desert summer, average daytime temperatures range between 40°C and 45°C (104°F-113°F), and peak temperatures are considerably higher. Significant cooling power is required to regulate interior temperatures. In the cooler months, nighttime temperatures might fall below 20°C (68°F). HVAC systems are used to regulate datacenter temperatures. More cooling or heating means more power consumed. In either case, the HVAC system has to be precise to maintain the appropriate temperature within a datacenter.

Higher Humidity

Relative humidity (RH) is one of the key environmental factors to be considered in datacenter design. RH measures the amount of moisture in the air at a given temperature and indicates the dryness of the environment. This is a critical factor.

Fig. 1. Power consumption pattern in a datacenter: IT equipment 30%, HVAC 41%, power 24%, lighting 5%. Source: Green Grid.


A higher level of RH (moist air) could lead to condensation inside and outside the infrastructure components, leading to corrosion of metallic surfaces and, more importantly, resulting in short circuits. A lower RH (dry air) causes electrostatic discharges (ESD), which can cause shocks and also bring down the equipment.

The optimal and acceptable levels of temperature and RH within a datacenter are as shown in Table I.

It is always recommended that temperatures within the datacenter be kept near 70°F, as maintaining a safe RH is practical at this temperature. According to the Uptime Institute, improved reliability and longer electronic component life are achieved by operating the equipment between 68°F (20°C) and 77°F (25°C) and between 40% and 50% RH [4].

Dust

Another environmental factor of concern in a desert area is dust. Frequent dust storms in desert environments necessitate that an airtight environment be maintained within the datacenter floor. This requires sealed walls and efficient ventilation systems that include double-layered, micro-dust filters to prevent granular and minute dust particles from entering the datacenter floor. Dust particles not only affect the working of the IT systems but also impact the RH factor. In the face of dust storms, datacenters shut off their (external) ventilation systems to prevent dust particles from entering the datacenter, and also manually adjust the fire alarm systems to prevent triggering a (false) alarm. Hence dustproofing the datacenter is one of the key challenges for a datacenter in a desert locale.

Dust storms are prevalent in deserts, but they also occur in other locations. In 2009, a wave of red dust forced datacenters in Sydney, Australia, to shut down their ventilation systems [5].

Design Considerations in Creating Green Datacenters

In order to tackle extreme conditions in the desert and still keep the datacenter up and running, a few critical design practices need to be followed. These practices are applicable for a normal datacenter, but need to be applied more strictly in the desert. By implementing such measures, energy consumption is optimized, leading to reduced carbon emissions and hence making the datacenter green.

Temperature Control (Air Conditioning)

The complex interconnection of components such as chillers, air handlers, and compressors strives to create an optimal environment to support datacenter requirements. As much as 40% of a datacenter's energy bill is from cooling equipment, and this figure is likely only to increase in the hotter climes of the desert. Hence any optimization in temperature control can result in big savings.

How to Reduce Cooling Requirements

In-rack cooling: Traditionally, computer room air conditioners (CRACs) dotted the perimeter of the datacenter floor. Because cool and hot air must travel between the CRAC units and the server racks, heat is lost in transit. To avoid this loss, equipment manufacturers have developed in-rack cooling, which moves the cooling process closer to where heat is generated, on the racks themselves. Having a miniature cooling module attached to the server rack reduces transit losses. Moreover, in-rack cooling solutions are highly scalable.

A disadvantage of in-rack cooling is that the size of the rack increases, which may take more floor space. But the tradeoff is against energy loss, which is much more costly. Many enterprise datacenter rack manufacturers have adopted the concept of in-rack cooling to provide customized solutions.

In-row cooling: Another method of efficient cooling adopted in enterprise datacenters is in-row cooling, where the functionality of the CRAC is localized to a row of racks, in units with dimensions similar to the actual racks. Again the key is isolating equipment from the hot air and not forcing the cold air to travel too far back to the CRAC unit. An in-row cooling solution can be used as a supplement to the main CRAC, possibly reducing the number of CRAC and air handling units (AHUs).

Both solutions can serve as excellent heat containment and localization strategies within any datacenter. Any reduction in cooling requirements results in savings in energy consumption and cost, but these efficiencies are especially important when a datacenter is in a desert.

Table I. Recommended Temperature and Humidity Ranges

Parameter                 Optimal                        Acceptable
Temperature               18°C to 27°C (64°F to 80°F)    13°C to 32°C (55°F to 89°F)
Relative humidity (RH)    25–60%                         24–75%
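The Table I ranges lend themselves to a simple monitoring rule. The Python sketch below is only an illustration: the thresholds are the Table I values, while the function name and sample readings are hypothetical.

# Classify a sensor reading against the Table I ranges (illustrative).
OPTIMAL    = {"temp_c": (18, 27), "rh_pct": (25, 60)}
ACCEPTABLE = {"temp_c": (13, 32), "rh_pct": (24, 75)}

def classify(temp_c, rh_pct):
    """Return 'optimal', 'acceptable', or 'out of range' for one reading."""
    def within(value, bounds):
        low, high = bounds
        return low <= value <= high
    if within(temp_c, OPTIMAL["temp_c"]) and within(rh_pct, OPTIMAL["rh_pct"]):
        return "optimal"
    if within(temp_c, ACCEPTABLE["temp_c"]) and within(rh_pct, ACCEPTABLE["rh_pct"]):
        return "acceptable"
    return "out of range"

print(classify(22, 45))   # optimal
print(classify(30, 70))   # acceptable
print(classify(35, 80))   # out of range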



Rack Placement and Air Flow

The placement of racks and enclosures plays a crucial role in the way heat emissions are controlled on a datacenter floor. In enterprise datacenters, racks and enclosures are usually aligned in straight rows. Energy efficiency is improved using the hot aisle and cold aisle concept. Devices emit heat, and invariably pockets of hot air will be generated within the datacenter. How effectively these hot air pockets can be contained, and how efficiently air circulation is handled, drives cooling effectiveness. In both cases, containment is the key: temperature ranges are contained within small pockets or pods, so that the load on the air conditioning system is minimized.

Cold air enters through the front of the racks, traverses through them past the devices, and exits at the rear. The front and rear sides of the racks are compartmentalized to convert two rows of racks into a hot/cold aisle. Fig. 2 depicts hot aisle containment.

The hot air blown through the rear is captured within the hot air aisle and sent back to the AHU for cooling. A recent trend in the rack industry is "hot-air-in-row" cooling, wherein the in-row cooling units are installed at the rear and are compartmentalized along with the hot air aisle. Hence equipment-generated hot air is cooled within the aisle and then sent to the AHUs, thereby reducing the power needed for running the AHUs. The same principle is also applied to cold air containment aisles.

Apart from air containment, there are various other rack placement methods that improve air circulation. Especially in desert areas, the height of the plenum under the false ceiling is kept at least ½ to 1 ft more than regular plenum heights (of around 1.5–2 ft). This allows more space to pump in high-density cold air to cool the datacenter floor.

Humidity Control

Combined with temperature control, RH control is a key factor in reducing the overall power consumption of a datacenter. Humidity sensors have to be placed in each rack row to monitor moisture levels. These sensors are part of the Building Management System (BMS), which integrates with the HVAC system to humidify or dehumidify the air at suitable points within the floor. Another key factor in achieving an optimal RH is the ability to detect and relieve hotspots, so that the probability of the floor becoming heated up and too dry is minimized.

Dustproofing

Insulating the building is very important for any datacenter, and critical for facilities located in desert-like conditions where dust storms are frequent. Minute suspended particles that settle in equipment reduce equipment life. Dust and particle sensors should be installed at suitable points within the datacenter floor. In addition, double-layered filters in the AHU and CRAC will block dust and suspended particles. It is recommended that ASHRAE class II, 85% filters be used in such locations to prevent dust contamination.

Another prevalent concept is the use of a layered building for a datacenter located in desert climes. An outer ring of the building (which can house offices, monitoring facilities, power equipment, storage, etc.), with an inner circle for the datacenter floor, provides the necessary resistance against dust and suspended particles. Airtight sealing and double-glazed doors and windows add to the protection.

Nonconventional Energy Sources

With the abundance of alternative sources of energy available in desert areas, datacenters located in the desert can tap such resources to reduce their dependency on traditional sources of power and to reduce the datacenter's carbon footprint. Though such initiatives are very much in their initial stages, many organizations have taken steps in this direction.

Solar Powered Energy

Facilities in the desert are surrounded by abundant solar energy most days of the year. Installing solar panels on datacenter premises taps this potential. Many datacenters are installing solar cells (mostly on rooftops) to power part of their energy requirements. For example, i/o datacenters, an infrastructure provider in the U.S., is installing massive solar panel arrays at its Phoenix facility. The company expects to generate around 4.5 MW of solar power at its 11-acre complex [6]. But the space requirements for hosting these solar arrays are huge: photovoltaic panels installed on an area of roughly 100 000 ft² generate around 1 MW of electricity. Fortunately, in desert areas, space often is not a major constraint. Although solar energy is not yet prevalent, its usage in desert datacenters is expected to increase soon.
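As a rough cross-check of these figures (an illustration, not part of the article; the 100 000 ft² per MW rule of thumb is the one quoted above, and the rest are assumptions), the following Python sketch estimates capacity from available area.

# Estimate PV capacity from available area, using the ~100,000 ft^2 per MW
# rule of thumb quoted in the text. Illustrative only.
FT2_PER_MW = 100_000
FT2_PER_ACRE = 43_560                                     # standard conversion

def estimated_capacity_mw(area_ft2):
    return area_ft2 / FT2_PER_MW

site_area_ft2 = 11 * FT2_PER_ACRE                         # the cited 11-acre complex
print(f"{estimated_capacity_mw(site_area_ft2):.1f} MW")   # ~4.8 MW, close to the cited 4.5 MW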

Solar power means clean energy and very low carbon emissions. But solar power by itself is not likely to be sufficient for the IT infrastructure of most datacenters in the near future. However, solar can immediately be considered for needs such as lighting and access controls.

Fig. 2. Hot air aisle: cold air enters at the front of the racks, and the hot exhaust is captured in the hot air aisle at the rear.


Lighting consumes around 4–5% of the overall power requirements in a datacenter, and this can be met (at least during the day) by solar power. Setting up large solar panels requires a substantial initial investment, but the advantages of solar energy over a long period amortize the initial cost; the payback period is typically 6–7 years. Moreover, with many federal governments subsidizing solar panels, it is only natural that solar energy will be used at an increasing rate. Especially in desert areas, where sunshine is abundant even during the colder months, solar could be used to power less-critical equipment.
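A simple payback calculation makes the 6–7 year figure concrete. In the sketch below, the net cost, tariff, and capacity factor are placeholder assumptions (subsidies and local tariffs dominate the result), not numbers from the article.

# Illustrative simple-payback estimate for a PV installation.
capacity_kw = 1_000            # assumed 1 MW array
capacity_factor = 0.20         # assumed for a sunny desert site
tariff_per_kwh = 0.15          # assumed electricity price, $/kWh
net_capex = 1_800_000          # assumed installed cost after subsidies, $

annual_kwh = capacity_kw * capacity_factor * 8_760        # hours in a year
annual_savings = annual_kwh * tariff_per_kwh
print(f"Simple payback: {net_capex / annual_savings:.1f} years")   # ~6.8 years with these assumptions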

There are many emerging success stories about solar-powered datacenters. For example:

■ Aiso.net, a U.S. hosting and infrastructure provider based in California, claims that its datacenter runs entirely on solar energy from on-site solar panels. These panels are owned and operated by Aiso and are completely off the grid [7]. Aiso claims to save around 34 488 pounds of CO2 per year because of solar power.

■ Intel has installed solar panels at a facility in Rio Rancho, New Mexico, to test the potential for using photovoltaic (PV) solar energy to provide power for datacenters. The array of 64 Sharp solar panels will generate 10 kW of electricity [8].

Lighting

Another revolutionary technology that can reduce power requirements for lighting datacenters and offices is solar tubes (or skylights). Solar tubes tap natural sunlight to illuminate interiors through a mix of reflection and refraction, using a combination of mirrors, lenses, and photo sensors. Collection devices mounted on rooftops and pointed towards the sun transfer sunlight through mirrors and lenses into a specially crafted duct that illuminates the work space. There are also rotary collectors that track the movement of the sun and position themselves to face it directly.

Such solar lights can eliminate the need for electric bulbs in offices during the day, except on cloudy or rainy days. But in deserts, both clouds and rain are conspicuous by their absence, and hence solar lights could be very effective. Moreover, solar lights collect only the sunlight and block out heat [9]. AISO.net says that by installing solar lights in its datacenters, it is able to replace 300 watts worth of electric lamps for 8 hr/day, 5 days/week [7].
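Those quoted figures translate into a modest but measurable saving; the quick arithmetic below (illustrative, using only the numbers quoted above plus an assumed 52-week year) estimates the annual lighting energy displaced.

# Annual lighting energy displaced by the solar tubes, per the quoted figures.
watts_replaced = 300
hours_per_day = 8
days_per_week = 5
weeks_per_year = 52            # assumed

kwh_per_year = watts_replaced * hours_per_day * days_per_week * weeks_per_year / 1_000
print(f"{kwh_per_year:.0f} kWh per year")   # ~624 kWh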

Using LED lights instead of incandescent or CFL lamps saves still more, since LED lights consume less power and emit less heat.

Green Walls and Roofs

Another technique that can be employed to make desert datacenters greener is to have greener exteriors. Adopting a green approach for the exterior of the datacenter building reduces heat penetration into the datacenter floor, thereby reducing cooling requirements. Some of the ways that exteriors can be made green are:

■ Paint the exteriors white. This reflects sunlight and reduces the heat absorbed by the building.

■ A "roof garden" will drastically reduce heat penetration into a building. Having drought-resistant plants on the roof/terrace is expected to reduce cooling requirements by around 10-13% in hot and humid climates. Green roofs reduce and minimize thermal gain into the datacenter.

■ A green wall around the datacenter, again with heat-resistant plants and trees, provides a thermal cushion to the datacenter. This greatly reduces heat radiation into the floor, thereby reducing cooling requirements. But proper insulation has to be implemented so that moisture from the vegetation does not seep into the datacenter floor.

Regeneration of Heat

Datacenters emit a considerable amount of heat. This heat could be transformed and reused for a variety of energy-serving purposes.

■ Properly converted, the heat emitted from the datacenter could be used to heat office space during colder months or at night.

■ The otherwise wasted heat could be used for cooling purposes. The hot air emitted from the devices, captured in the hot aisle and transmitted through heat ducts, could be used with silica gel water adsorption chillers to chill the water used in the AHUs.

■ The heat emitted from the devices can also be turned into power, using turbines, and fed back into the datacenter, thereby reducing the power drawn from the grid.

There is an increasing trend to reuse the heat generated in a datacenter for other purposes. Telecity, a datacenter service provider based in Europe, has opened a new facility in Paris that envisages using waste heat from the facility to warm a "Climate Change Arboretum" built on site [10]. Similarly, the University of Notre Dame has installed a container housing 100 computer servers next to the South Bend Botanical Gardens and Greenhouse (BGG).



Hot air from the server farm in the container is ducted into the BGG to warm the desert plants [11].

Wind Energy

Another alternate source of energy is wind power, which is abundant in open desert areas. Typically an average wind speed of ~10 mph is sufficient for powering wind turbines, and such speeds are quite achievable in desert areas. Wind power is considered one of the cleanest sources of power, but its disadvantages are the initial cost and the space needed to set up a windmill farm. Space is not usually a constraint in desert-like areas, but the initial cost is a serious problem. Moreover, no single datacenter is likely to build its own dedicated windmill farm. A more plausible scenario is a windmill farm built by a datacenter but financed in part by supplying some of the resulting power to the mainstream grid.
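The sensitivity of wind power to wind speed follows from the standard relation P = ½ρAv³Cp. The Python sketch below is illustrative only: the rotor size, air density, and power coefficient are assumptions rather than values from the article, and the result shows why ~10 mph is roughly the lower bound of usefulness.

import math

# Standard wind power relation: P = 0.5 * rho * A * v^3 * Cp.
rho = 1.225                      # air density, kg/m^3 (hot desert air is slightly lower)
rotor_diameter_m = 80            # assumed utility-scale turbine
cp = 0.40                        # assumed power coefficient (Betz limit is ~0.59)
v_ms = 10 * 0.44704              # 10 mph expressed in m/s (~4.5 m/s)

swept_area_m2 = math.pi * (rotor_diameter_m / 2) ** 2
power_w = 0.5 * rho * swept_area_m2 * v_ms ** 3 * cp
print(f"{power_w / 1e3:.0f} kW")   # ~110 kW, a small fraction of a multi-MW nameplate rating

Because output scales with the cube of wind speed, the average wind speed at a site matters far more than nameplate capacity when judging whether wind can usefully contribute to a datacenter's load.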

Nomadic Datacenters

Traditionally, enterprise datacenters have been located in properly designed and constructed physical structures. More than anything else, the location of these structures was driven by two main constraints: the need for a very stable and precise range of input power for IT equipment, and a cool operating environment. This has necessitated precision uninterruptible power supplies for powering enterprise-class servers and storage equipment. Until recently, there were stringent electrical and temperature norms for IT equipment. However, with technological advancements in product design, specifications have become broader. Today servers are capable of running at almost 99% uptime in a voltage range of 180–260 V and an operating temperature of 35°F–90°F (2°C–32°C). The consequences are significant:

■ IT equipment can withstand increased power fluctuation levels, and this reduces the load on a UPS. The more precise the UPS output is required to be, the more input power the UPS draws. The development of equipment able to operate across a wider power range and withstand minor fluctuations has led to a new generation of UPS devices that consume less power.

■ Servers and storage devices can operate in "hotter" environments. The latest servers are built to withstand a wider range of ambient temperatures. This means that within a datacenter floor, temperatures can be relaxed and variations are less of a concern. This reduces the requirements for precision cooling, and a wider operating temperature means HVAC units consume less power.

Due to these technological improvements, datacenters of the future may be able to run either out of a huge warehouse or in a structure as small as a tent. Small datacenters would not require elaborate cooling, and could use natural resources for cooling and temperature control. At least in theory, a small datacenter could achieve a PUE of close to 1.0.

Microsoft has tested operating a datacenter within a tent.

Fig. 3. Datacenter of the future. Source: IEEE Spectrum [13]. Callouts: a truck carrying a container; Cooling: high-efficiency water-based cooling systems, less energy-intensive than traditional chillers, circulate cold water through the containers to remove heat, eliminating the need for air-conditioned rooms.


Christian Belady and Sean James conducted a proof of concept in which they operated a rack of servers in a tent, within a fuel yard. "Inside the tent, we had five HP DL585s running Sandra from November 2007 to June 2008 and we had ZERO failures or 100% uptime" [12]. The basic idea was to prove that a rack of server equipment can be run without any air conditioning, using only outside air for cooling, and to show how resilient these servers are.

As small datacenters become increasingly practical, a future trend will be a move from a single concrete building to a collection of container-based units. A datacenter in a container can be easily transported; it is assembled as modular units within a huge shipping container and is "built at site": with just power and cooling connections provided, it is ready to be operational within hours. Microsoft's Generation 4 Modular datacenter implements this concept, moving towards container-based datacenters. According to Mike Manos, General Manager, Microsoft GFS, "In our Chicago facility, a truck drives up and drops off a 40-foot shipping container, preloaded with 2000 servers. The container is plugged in to Ethernet, power, and cooling, and is ready to power on and go" [14]. A view of a futuristic datacenter in a container is shown in Fig. 3.

In another revolutionary move, Google is planning to build datacenters on barges in the ocean. Google has filed a patent for floating datacenters: "water-based data centers" that use the ocean to provide power and cooling, located 3 to 7 miles from the shore [15]. In fact, Google was the first organization, in 2003, to deploy the "datacenter in a container" concept, and it was also awarded a U.S. patent for a portable data center in a shipping container.

Best Practices to Implement Energy Efficient Datacenters in the Desert

Fig. 3 callouts (continued). Structure: a 24,000-square-meter facility houses 400 containers; delivered by trucks, the containers attach to a spine infrastructure that feeds network connectivity, power, and water; the datacenter has no conventional raised floors. Power: two power substations feed a total of 300 MW to the datacenter, with 200 MW used for computing equipment and 100 MW for cooling and electrical losses; batteries and generators provide backup power. Container: each 67.5-cubic-meter container houses 2,500 servers, about 10 times as many as conventional datacenters pack in the same space; each container integrates computing, networking, power, and cooling systems.
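The callout figures above imply a facility-level PUE and a per-server power budget; the short sketch below (purely illustrative arithmetic on those callout numbers) works them out.

# Metrics implied by the Fig. 3 callouts: 300 MW total, 200 MW IT load,
# 400 containers x 2,500 servers. Illustrative arithmetic only.
total_mw, it_mw = 300, 200
containers, servers_per_container = 400, 2_500

pue = total_mw / it_mw                               # 1.5
total_servers = containers * servers_per_container   # 1,000,000
watts_per_server = it_mw * 1e6 / total_servers       # ~200 W per server
print(f"PUE {pue:.2f}, {total_servers:,} servers, {watts_per_server:.0f} W each")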


While there is no difference in the basic design principles to be followed for datacenters in a desert area and in a more conventional area, there are subtle variations that need to be enforced to achieve energy efficiency, given the extremes in environment. Some of the key best practices are described below:

■ Recycle energy as much as possible within the datacenter; deploy an energy feedback loop. This provides an alternate energy source at a much lower cost.

■ Increase the green cover at the datacenter. Use green walls and rooftop vegetation to provide natural cooling and reduce power consumption by cooling equipment.

■ Replace mechanical parts with solid-state devices. For instance, UPS equipment has many mechanized parts; using the latest technology, i.e., solid-state UPSs and inverters, eliminates mechanical parts and reduces heat and transmission losses.

■ Improve airflow management within the datacenter to reduce the load on HVAC equipment. To reduce power consumption and increase efficiency, pay attention to false plenums, avoid cable chaos, and remove the other typical impediments to air flow.

■ Place racks and enclosures to reduce carbon emissions. The way racks are arranged and layered matters the most. Make use of hot/cold aisles and containment techniques to reduce hotspots and recycle used air.

■ Evaluate existing IT equipment. Replace inefficient equipment with newer, lower-power devices. Consolidate and virtualize servers, reducing rack space, power, and cooling requirements, and also reducing management overhead.

■ Rely on Mother Nature. Switch to nonconventional energy sources where possible, such as solar power and wind power. Such alternative forms of energy are cleaner and provide an immediate opportunity to reduce carbon emissions.

Green Datacenters

Achieving greener datacenters will lower carbon emissions and be more environmentally sustainable. However, this goal is easier said than done. For organizations making focused and sustained efforts to achieve greener datacenters, the challenge lies not in building from the ground up, but in making existing datacenters greener. That is a daunting task, and it is especially challenging for facilities running in extreme climatic conditions. However, with maturing technologies and the availability of greener products, organizations have better options, even in extreme climates.

Green IT and green datacenters will not happen overnight. It is a journey, and it is a race against time; the earlier we act, the better.

Author Information

N. Vijaykumar is a Principal Technology Architect with the System Integration Unit at Infosys Technologies, Bangalore, India; [email protected].

References

[1] E. Sperling, "Data centers in the desert," Forbes.com, June 6, 2009; http://www.forbes.com/2009/06/12/data-centers-desert-technology-cio-network-data-centers.html.

[2] Press release, "Gartner says a green data centre means more than energy efficiency," Gartner Newsroom, Oct. 20, 2008; http://www.gartner.com/it/page.jsp?id=781012.

[3] The Green Grid, "Guidelines for energy efficient datacenters," Feb. 2007; http://www.thegreengrid.org/~/media/WhitePapers/Green_Grid_Guidelines_WP.ashx?lang=en.

[4] Uptime Institute, "Cooling compatibility specification," draft, vers. 0.3, Apr. 5, 2002; http://uptimeinstitute.org/images/stories/Mission_Critical_Product_Certification/Certified_Products/cooling_compatibility_specification.pdf.

[5] R. Miller, "Sydney data centers weather dust storm," Data Center Knowledge, Sept. 23, 2009; http://www.datacenterknowledge.com/archives/2009/09/23/sydney-data-centers-weather-dust-storm/.

[6] "Solar power at data center scale," Data Center Knowledge, June 16, 2009; http://www.datacenterknowledge.com/archives/2009/06/16/solar-power-at-data-center-scale/.

[7] "Sun energized," AISO.net; http://www.aiso.net/technology-network-sun.html, accessed Apr. 27, 2011.

[8] R. Miller, "Intel testing solar power for data centers," Data Center Knowledge, Jan. 19, 2009; http://www.datacenterknowledge.com/archives/2009/01/19/intel-testing-solar-power-for-data-centers/.

[9] E. Maitim, "Solar tube lighting - How does it work?," ezinearticles.com, June 23, 2009; http://ezinearticles.com/?Solar-Tube-Lighting---How-Does-it-Work?&id=2518192.

[10] "TelecityGroup opens new state-of-the-art data centre in Paris," TelecityGroup, Jan. 22, 2010; http://www.telecitygroup.com/telecitygroup-opens-new-state-of-the-art-data-centre-in-paris.htm.

[11] E.M. Ward Jr., R. Jansen, M. Witkowski, P. Brenner, and D.B. Go, "Waste heat recovery using environmentally opportunistic computing at the University of Notre Dame," Notre Dame Energy Center, 2009-2010 Annual Rep., 2010; http://energy.nd.edu/assets/32624/2009_2010_ndec_annual_report.pdf.

[12] "Intense computing or in tents computing?," MSDN Blogs, Sept. 19, 2008; http://blogs.msdn.com/b/the_power_of_software/archive/2008/09/19/intense-computing-or-in-tents-computing.aspx.

[13] E. Guizzo, "What will the data center of the future look like?," IEEE Spectrum, June 15, 2009; http://spectrum.ieee.org/tech-talk/semiconductors/devices/what-will-the-data-center-of-the-future-look-like.

[14] R. Miller, "Microsoft: PUE of 1.22 for data center containers," Data Center Knowledge, Oct. 20, 2008; http://www.datacenterknowledge.com/archives/2008/10/20/microsoft-pue-of-122-for-data-center-containers/.

[15] R. Miller, "Google planning offshore data barges," Data Center Knowledge, Sept. 6, 2008; http://www.datacenterknowledge.com/archives/2008/09/06/google-planning-offshore-data-barges/.