Special Report: Data Center Energy Efficiency Guide
Sponsored by: Schneider Electric



Introduction

Data center managers have battled a growing power bill for the past several years, but changing economic, social and regulatory factors are making data center energy efficiency even more important.

In recent years, companies fed the energy consumption beast by building new data centers to provide additional raw power for an increasing number of servers. But in this down economy and tight capital market, IT executives are forced to make do with the resources they have -- and that means squeezing more efficiency out of data centers.

Also, the public sector and governmental agencies are becoming increasingly aware that data centers are energy hogs. High-profile data centers from public-facing Internet companies have gotten the most attention. Facebook's recent data center build-out caught the attention of the national media (and criticism from environmental organizations) for choosing a utility with primarily coal-based power generation. While many IT shops operate low-profile data centers safe from public scrutiny, none will be able to avoid coming federal regulation.

For many industries, data centers are one of the largest sources of greenhouse gas emissions. Governmental agencies, including the Department of Energy and the Securities and Exchange Commission, are becoming more involved in tracking data center energy use. Companies may be required to report data center carbon emissions in the near future.

The following content was created by SearchDataCenter.com's experts and editorial team to help you make the case for data center efficiency to your executives and to offer a roadmap for improving data center efficiency.


Optimize your cash flow and efficiency with our scalable, flexible, and adaptable InfraStruxure

Because right-sized data center infrastructure is good business strategy


Cooling. Rack-, row-, and room-based cooling options, including new overhead InRow™ cooling units, for greater efficiency

Management. End-to-end monitoring and management software for greater efficiency and availability

Physical Security. A single-seat view for monitoring and surveillance across the facility.

Power. Modular power distribution and UPS paralleling capabilities for loads from 10 kW to 2 MW

Rack Systems. Any-IT vendor-compatible rack enclosures and accessories for high densities


The only right-sized data center architecture engineered as a system

Introducing Next Generation InfraStruxure

Whether you just acquired a new company or must increase its ever-expanding customer or inventory database capacity, you're most likely facing pressing demands on your company's IT infrastructure. Your existing data center infrastructure may not be able to handle these up-to-the-minute changes. That's where APC by Schneider Electric™ steps in with its proven high-performance, scalable data center infrastructure. As the industry's one-of-a-kind, truly modular, adaptable, and "on-demand" data center system, only InfraStruxure™ ensures that your data center can adapt effectively, efficiently, and, perhaps most important, quickly, to business changes.

InfraStruxure data centers mean business

We say that InfraStruxure data centers mean business. But what does that mean to you? The answer is simple. A data center means business when it is always available, 24/7/365, and performs at the highest level at all times; is able to grow at the breakneck speed of business; lets you add capacity without waiting on logistical delays (e.g., work orders); enables IT and facilities to keep pace with the business in a synchronized way; continues to achieve greater and greater energy efficiency -- from planning through operations; is able to grow with the business itself; and supports -- instead of hinders -- business.

The triple promise of InfraStruxure deployment

InfraStruxure fulfills our triple promise of superior quality, which ensures highest availability; speed, which ensures easy and quick alignment of IT to business needs; and cost savings based on energy efficiency. What better way to "mean business" than to enable quality, speed, and cost savings -- simultaneously?

Discover which physical infrastructure management tools you need to operate your data center: download White Paper #104, "Classification of Data Center Operations Technology (OT) Management Tools," today! Visit www.apc.com/promo Key Code b542v • Call 888-289-APCC x9809 • Fax 401-788-2797

©2011 Schneider Electric. All Rights Reserved. Schneider Electric, APC, InRow, and InfraStruxure are trademarks owned by Schneider Electric Industries SAS or its affiliated companies. E-mail: [email protected] • 132 Fairgrounds Road, West Kingston, RI 02892 USA • 998-5038_US



Table of Contents

Introduction

The Case for Data Center Energy Efficiency Now
i. SEC Ruling Could Spur Data Center Change
ii. New Federal Data Center Energy Efficiency Guidelines on Tap
iii. Data Center Efficiency: Which Tactics are Worth the Cost?

Server Efficiency
i. Will Energy Star Servers Give Your Business a Positive ROI?
ii. Optimizing Server Energy Efficiency
iii. How Server Consolidation Benefits Your Data Center
iv. Measuring Server Energy Efficiency
v. EPA Releases Energy Star Server Specification
vi. Data Center Managers Indifferent to Energy Star for Servers

Energy Efficient Data Center Cooling
i. Air Flow Management Strategies for Efficient Data Center Cooling
ii. Lowering Data Center Cooling Costs with Airflow Modeling and Perforated Raised-Floor Tiles
iii. United Parcel Service's Tier 4 Data Center Goes Green
iv. Green UPS Tier IV Data Center Water-Side Economizers
v. Data Center Hot-Aisle/Cold-Aisle Containment How-Tos
vi. Cleaning Under the Raised-Floor Plenum: Data Center Maintenance Basics
vii. Block Those Holes!
viii. Sizing Computer Room Air Conditioners for Data Center Energy Efficiency
ix. Can CFD Modeling Save Your Data Center?
x. When Best Practices Aren't: CFD Analysis Forces Data Center Cooling Redesign

Energy Efficient Backup Power and Power Distribution
i. Which Data Center Power Distribution Voltage Should You Use?
ii. DC Power in the Data Center: A Viable Option?
iii. The Value of DC Power in Data Centers Still in Question
iv. Does Data Center Uptime Affect Energy Efficiency?
v. Will a Transformerless UPS Work for Your Data Center?
vi. How to Choose the Right Uninterruptible Power Supply for Your Data Center
vii. Using Flywheel Power for Data Center Uninterruptible Power Supply Backup

Data Center Efficiency Metrics and Measurements
i. Green Grid Hones PUE Data Center Efficiency Metric
ii. Measuring Data Center Energy Consumption in Watts per Logical Image
iii. In Measuring Data Center Power Use, More (info) is More
iv. Measuring Energy Leakage: Catching Up with the Colos
v. Using Chargeback to Reduce Data Center Power Consumption: Five Steps
vi. The TPC Energy Specification: Energy Consumption vs. Performance and Costs

Resources from Schneider Electric


The Case for Data Center Energy Efficiency Now

SEC Ruling Could Spur Data Center Change

By: Mark Fontecchio

As if data center pros didn't have enough to worry about, a recent Securities and Exchange Commission ruling could affect how they do their jobs.

In February the Securities and Exchange Commission published a ruling on climate change that clarified the reporting it expects public companies to provide regarding climate change legislation. The SEC says that there has already been significant federal and state regulation around climate change. One example is the Environmental Protection Agency's requirement, starting this year, that large producers of greenhouse gases collect and report data on their greenhouse gas emissions.

The ruling also indicates there will likely be more climate change legislation, such as a carbon cap-and-trade or tax program. Those changes could affect a company's financial performance, and the SEC wants to ensure companies are aware of the reporting requirements.

Though it's still early, data centers have begun to prep for climate change legislation.

"[We just want] to make sure that we're not caught off-guard as an industry," said Chris Crosby, senior vice president at giant data center real estate company Digital Realty Trust. "If the broad brush on carbon emissions comes into play, it's going to affect everyone."

Making data center energy efficiency matter

The data center industry must stay in front of the issue and work to differentiate itself from other industries, Crosby said. Along that line, data center leaders have discussed forming a data center lobbying organization to make their case.

"I think in all these cases, data centers get into this broad swath and grouped with smelting plants," he said. "We need to promote the positive impact that IT has from a holistic perspective. If I don't have to drive to the bank because I have online banking, what is the benefit there?"

John Stanley, a research analyst at the 451 Group, recently wrote about the possible effect of the SEC ruling on data centers.


"I think the biggest impact is going to be that it will make upper management pay attention to climate change-related risks in a way that maybe they didn't before," Stanley said. In the past, it has been facilities managers who were concerned about electricity prices.

But climate change legislation might create higher energy costs. Along with the recent SEC ruling, the trend could make corporate executives watch companywide energy use more closely.

The Uptime Institute recently found that data center energy use accounts for a large chunk of a company's total energy consumption. At one financial company, data centers consumed one-third of the company's total energy.

With that in mind, Stanley suggested data center managers (1) become more energy efficient; and (2) be prepared to justify the business case for certain inefficiencies.

The latter task could be tough. A corporate executive might look at a data center from a nontechnical point of view, see a bunch of redundant equipment, and deem that it needs to be shut off. But that equipment might be important to data center uptime and meeting business service-level agreements.

"Data center managers needs to say, here are places where we do use energy, and here is why it's important to spend energy here even though it looks like waste," Stanley said.

Some companies already do that, partly to save money on energy and partly in anticipation of regulations. Ron Pepin, the VP and general manager of data center operations at Pittsburgh-based PNC Financial Services, uses tools such as HP OpenView and Nlyte Software to decommission unused or underutilized servers and to automatically calculate power usage effectiveness (PUE) in order to be more energy efficient. In his view, the government will probably require businesses to show that their data center operations have become more energy efficient. "The EPA's report to Congress was just the first step. So eventually, yes, there will be some kind of regulation coming out. That's why the first step for me is to measure what I'm doing now," Pepin said.


New Federal Data Center Energy Efficiency Guidelines on Tap

By: Mark Fontecchio

The federal government and major industry groups are on the cusp of developing widely accepted standards for measuring a data center's energy efficiency.

Along with the Environmental Protection Agency (EPA), the U.S. Department of Energy (DOE) is working with six data center industry groups: 7x24 Exchange, the American Society of Heating, Refrigerating and Air-conditioning Engineers (ASHRAE), the Green Grid, the Silicon Valley Leadership Group, the U.S. Green Building Council, and the Uptime Institute. The goals? To standardize data center efficiency metrics, which could help prevent "greenwashing," and to give data center pros tools to reduce energy consumption in their facilities.

Earlier this month, the coalition agreed to guiding principles regarding power usage effectiveness (PUE), which compares total data center power with IT power used. And by June, the EPA's Energy Star program will launch a benchmarking program that will enable companies to rate their own data centers on a scale of 1 to 100. If they're green enough, data centers can even earn Energy Star status.

Data center energy efficiency still poses confusion

Does all this matter to end users? In a 2009 SearchDataCenter.com survey, almost 90% of respondents said that reducing power consumption was very important or somewhat important. These standards can help data centers become more energy efficient. Still, while energy efficiency may be on data center pros' priority lists, it's not necessarily at the top. "Our new data center will have significantly less power," said Rick Donohue, the IT director at Americas' SAP Users Group (ASUG). "If I had to justify green, I could do it. But my bigger priority was getting into a class-A data center."

ASUG is moving from an older data center into a newer one, and just the move into a newer building alone can increase a facility's energy efficiency. The same goes for traditional server refreshes: installing new, less power-hungry servers is where Energy Star-qualified servers can help. Paul Scheihing, the technology manager for DOE's Industrial Technologies Program, said there has been confusion and inconsistency around how PUE should be measured and reported. As a result, there is some lack of confidence in the PUE figures reported in the media and elsewhere, and therefore uncertainty about how accurate the metrics are, he said.

"If there's confusion, then it's a barrier in terms of people measuring data centers in a comprehensive way," he said.


New principles for energy efficiency

For the DOE and EPA, new standards for data center energy-efficiency methods are important, because these organizations are on a mission to reduce overall data center energy consumption. But Scheihing said that they are also important for data center managers, because energy use affects the bottom line. Savings on regular energy costs aside, more efficient data centers spend less on capital equipment such as air conditioners.

The coalition devised three main guiding principles:

PUE is the preferred energy efficiency metric for data centers

To calculate PUE, IT energy consumption should be measured at least at the output of the uninterruptible power supply (UPS). (The industry should work toward measuring the IT load directly at the IT equipment.)

Total energy measurement should be at the point of utility handoff to the data center owner. For a data center in a mixed-use building, total energy should be all the energy required to operate the data center, including IT energy, cooling, lighting and support infrastructure.

On the EPA side, the Energy Star program is updating its Portfolio Manager software to include data center facilities. That should be ready June 7, according to Alexandra Sullivan, an engineer in the EPA's Energy Star program for commercial buildings. The software will allow companies to rate their data center's energy efficiency from 1 to 100, a scale similar to that for other commercial buildings. On that scale, 50 is considered average and any building scoring 75 or higher will receive the Energy Star label. But considering data centers' intense energy footprint, the model used to rate data centers is different.

Between March 2008 and June 2009, Energy Star collected energy information from more than 100 data centers. It then developed a regression model to determine the average data center's energy efficiency. PUE was used as the main efficiency metric, and in most cases the IT load was measured at the UPS. The group's PUE ranged from 1.25 to 3.75, and averaged 1.91.
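To make the metric concrete, here is a minimal sketch of the PUE arithmetic defined in the guiding principles above: total facility energy at the utility handoff divided by IT energy measured at the UPS output. The meter readings in the example are hypothetical.

def pue(total_facility_kwh: float, it_kwh_at_ups: float) -> float:
    """Power usage effectiveness for a given measurement period."""
    if it_kwh_at_ups <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_kwh_at_ups

total_kwh = 360_000   # assumed utility-handoff meter reading for one month
it_kwh = 200_000      # assumed IT energy measured at the UPS output
print(f"PUE = {pue(total_kwh, it_kwh):.2f}")   # 1.80, within the 1.25-3.75 range cited above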

"I think the benefit for [data center pros] is that they're saving money," Sullivan said about the Energy Star program for data centers. "And they could get an Energy Star plaque to put on their building so that their customers would know. I think there is also increased prevalence of Energy Star in the commercial marketplace."


Data Center Efficiency: Which Tactics are Worth the Cost?

By: Richard Jones

Data center managers are not going "green" out of benevolence to the environment and society. It's all about keeping costs low. In this tip, I will explore various data center technologies that claim to improve efficiency and indicate which are cost-effective, saving money straight from the operations bottom line.

Improving data center power use: Low-hanging fruit

There are a number of simple and relatively inexpensive changes that organizations can make to reduce energy and operating costs.

Raise data center temperature. Early in 2009, the American Society of Heating, Refrigerating and Air-conditioning Engineers (ASHRAE) broadened the recommended data center temperature and humidity ranges, which spawned a discussion about reliability at higher temperatures. Hardware vendors publish operating temperature and humidity ranges for their equipment, and most calculate the equipment mean time between failure (MTBF) at the extremes of these ranges rather than at the most favorable conditions. For a couple of common pieces of equipment, the published ranges are 50-95 degrees Fahrenheit for a Dell PowerEdge R805 and 32-104 F for a Cisco Nexus 5000, so raising the data center temperature to 80 F wouldn't be a problem for this modern hardware. Raising the temperature puts less strain on the air conditioning equipment -- traditional refrigeration requires nearly the same amount of power to operate as the equipment it is cooling, so reducing this consumption can result in lower electrical bills over the course of a year. A dramatic change in energy consumption of up to 30% can be achieved if the temperature of the computer room air conditioning (CRAC) unit can be raised above the dew point: CRAC units running so cool that they condense water from the air (and require humidifiers to add moisture back in) need up to 30% more cooling capacity and corresponding energy.

Hot-aisle/cold-aisle containment. Many customers have improved data center cooling effectiveness by implementing simple plastic curtains to contain hot air and prevent it from mixing with cold air. Efficiency gains depend on how well your data center airflow patterns already prevent hot and cold air from mixing on the data center floor, but containment can improve air-conditioning efficiency by as much as 15%. This can translate into electric bill savings of nearly $12,000 per year (assuming 500 servers and an electrical rate of 8 cents per kilowatt-hour).
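The dollar figure above can be roughly reconstructed with back-of-the-envelope arithmetic. The per-server draw is not stated in the article, so the 230 W figure below is an assumption chosen for illustration; cooling power is taken as roughly equal to IT power, per the CRAC discussion above.

servers = 500
watts_per_server = 230            # assumed average IT draw per server (not stated in the article)
it_load_kw = servers * watts_per_server / 1000
cooling_kw = it_load_kw           # refrigeration draws roughly as much power as the load it cools
savings_kw = cooling_kw * 0.15    # ~15% better air-conditioning efficiency from containment
annual_savings = savings_kw * 8760 * 0.08    # hours per year x $/kWh
print(f"~${annual_savings:,.0f} per year")   # roughly $12,000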

Air-side economizers. Data centers located in cooler climates can use the outside environment to cool their servers, reducing the need to operate electricity-hogging refrigeration equipment. Estimates are that in cooler climates, air-side economizers can reduce electrical bills by as much as 33%. In addition, ASHRAE standard 90 requires air-side economizer implementation in certain parts of the U.S., specifically in the drier, cooler western regions and some cooler northeastern regions.


Consolidation with virtualization. Server virtualization has proven again and again to reduce data center capital and operational expenses, but this assumes a consolidation ratio high enough to offset the cost of the virtualization software licensing. A simple calculation that assumes a server cost of $8,500 and a virtualization software license cost of $3,000, both amortized over three years, plus power, maintenance and administrative costs, yields a nearly threefold reduction in combined capital and operating costs at a 10:1 server workload consolidation ratio. This is a total cost reduction; the energy reduction from server consolidation alone nears tenfold.
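A minimal sketch of that calculation is below. The $8,500 server and $3,000 license figures come from the article; the power draws, electricity rate and per-workload administration costs are placeholder assumptions, so treat the output as illustrating the structure of the comparison rather than a definitive TCO model.

HOURS_3YR = 8760 * 3
PUE, KWH_RATE = 2.0, 0.10

def three_year_cost(hosts, capex_per_host, license_per_host,
                    watts_per_host, workloads, admin_per_workload_yr):
    capex = hosts * (capex_per_host + license_per_host)
    energy = hosts * watts_per_host / 1000 * HOURS_3YR * PUE * KWH_RATE
    admin = workloads * admin_per_workload_yr * 3       # admin scales with OS images, not boxes
    return capex + energy + admin

before = three_year_cost(10, 8500, 0, 350, 10, 1500)    # ten lightly loaded standalone servers
after = three_year_cost(1, 8500, 3000, 400, 10, 1200)   # one busier virtualization host
energy_ratio = (10 * 350) / (1 * 400)
print(f"3-year cost before: ${before:,.0f}, after: ${after:,.0f}")
print(f"Total cost reduction: {before / after:.1f}x; energy reduction: {energy_ratio:.1f}x")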

Questionable energy-efficiency options

Various technologies are touted as energy savers that reduce data center operating costs, but many only achieve savings after a very long payback period.

High-efficiency server power supplies. Fortunately, new servers all come with digitally controlled power supplies that deliver greater than 90% efficiency throughout the power supply's load range. But replacing an older server before its end of life just to gain power savings is a bad move; the payback will never materialize. It is best to let older equipment run its lifecycle course and replace it as it drops out of warranty and serviceability.

UPS and power distribution upgrades. While improvements in energy efficiency can be had by upgrading uninterruptible power supplies (UPSes) and power distribution units (PDUs), the payback is measured in years. As with higher-efficiency server power supplies, replacing an 80%-efficient UPS with a 97%-efficient unit before its scheduled end of life will never yield a payback. Again, it is best to upgrade only when the older unit is no longer serviceable by its manufacturer.

Direct current power. Direct current (DC) power distribution within data centers has recently been pushed by vendors. Be wary of efficiency improvement claims, as most compare modern DC power distribution to alternating current (AC) power distribution dating back 20 years or more. Compared with modern AC distribution systems, DC achieves only a percentage point or two of improvement. The added cost, plus the difficulty of finding electricians skilled in DC power distribution, will never achieve payback for this minuscule efficiency improvement.

Air-side economizers. Yes, this one appears on the questionable list as well. Hot and humid regions of the world cannot benefit from air-side economizers, and in those regions they will end up as an added expense that never achieves any payback. ASHRAE standard 90 illustrates this point and does not recommend air-side economizers for hot and humid locations.

Effects of energy-efficiency regulations

Energy-efficient solutions that result in operational cost savings are easy for data center managers to justify. The Dec. 2009 climate change summit in Copenhagen, Denmark, is an indication that world governments are paying attention to energy consumption.

Regulations designed to curb energy usage growth will no doubt be forthcoming around the world. The carbon tax debate will most likely get additional attention in the U.S. Congress in 2010. Energy-related regulations will all be designed to encourage data center energy reduction. Data center managers should have contingency plans in place to improve efficiency to meet regulatory demands, should they be enacted.

Bottom line

Parsing the list of low-hanging fruit, it is not hard to see that server virtualization for consolidation has the greatest potential to reduce energy consumption as well as costs within the data center. Coupled with improvements in x86-based servers in 2009, such as hardware-assisted memory virtualization, many applications that organizations had deemed unfit for virtualizing can now be virtualized. Data centers should assess the suitability of nonvirtualized servers for virtualization every six months, as servers released since early 2009 are all designed specifically for virtualization.


Server Efficiency

Will Energy Star Servers Give Your Business a Positive ROI?

By: Gary Olsen

IT has experienced a real rush to "go green," and the reasoning is easy to understand -- by lowering the energy use of computing equipment, a company can reduce its impact on the environment and save money on power costs in the process. One of the easiest and most straightforward approaches to power conservation is to select and deploy Energy Star-rated systems, a program that has expanded to embrace enterprise-class servers. While there's little question that lowering power demands is a positive thing, the actual ROI and net impact on your bottom line are anything but certain.

A look at Energy Star

Energy Star is a voluntary labeling program developed in 1992 by the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) to "protect the environment through energy efficient products and practices." Energy Star's goal is to "reduce greenhouse gas emissions and other pollutants caused by the inefficient use of energy," and the EPA uses a branding program that helps consumers identify compliant products. You can learn a lot more about the specifics of Energy Star through its website.

Given the astonishing growth in data center power demands, the EPA has developed and released an initial version of the Energy Star Computer Server Specification. As you might expect, a new initiative such as this takes time to implement, and there are few enterprise-class servers with Energy Star ratings today. Notable exceptions include HP's ProLiant series, Dell PowerEdge and the IBM Power 750 Express. The standard that defines Energy Star-rated servers is currently version 1.0 (sometimes referred to as Tier 1). The Tier 1 specification took effect May 15, 2009, and Tier 2 is expected to take effect October 15, 2010, so manufacturers are able to design and market Energy Star-compliant servers today.

Here are the basics of the Energy Star server requirements:

The spec includes a matrix for power supply-efficiency requirements. If the server has a multi-output power supply, for example, the supply should be at 82% efficiency when the server is at full load.

The spec also sets power consumption limits for when the server is idle. For a single-socket server, the limit is 65 watts (W); for four-socket servers, the limit is 300 W. Allowances are made for additional installed components (such as 20 W for another power supply).


Manufacturers must provide a "power and performance data sheet" with each server or each server class detailing power consumption at various load configurations.

You might suspect that the reduced power could adversely affect performance, but this is not the case. The specification makes additional power allowances for configurations with additional hard drives, RAM, power supplies and Ethernet ports. Thus you can have a powerful server and still be Energy Star compliant. Note, however, that servers with more than four processor sockets are excluded from the Tier 1 specification, as well as blade servers, blade chassis, blade storage, network equipment, fully fault-tolerant servers and server appliances. These will likely be included in the future Tier 2 specification.
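For a concrete sense of how the idle limits work, here is a simplified sketch that checks a hypothetical configuration against the Tier 1 idle allowances quoted above. It uses only the numbers given in this article (65 W for single-socket, 300 W for four-socket, 20 W per additional power supply); the actual specification defines more server categories and many more component allowances.

BASE_IDLE_LIMITS_W = {1: 65, 4: 300}   # sockets -> base idle limit quoted in this article
EXTRA_PSU_ALLOWANCE_W = 20             # allowance per additional power supply

def idle_limit_w(sockets, extra_power_supplies=0):
    if sockets not in BASE_IDLE_LIMITS_W:
        raise ValueError("only the 1- and 4-socket limits are quoted in this guide")
    return BASE_IDLE_LIMITS_W[sockets] + extra_power_supplies * EXTRA_PSU_ALLOWANCE_W

measured_idle_w = 290                  # hypothetical measured idle draw
limit = idle_limit_w(sockets=4, extra_power_supplies=1)
verdict = "within" if measured_idle_w <= limit else "over"
print(f"Idle limit {limit} W, measured {measured_idle_w} W: {verdict} the limit")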

Cost and savings considerations

The challenge with Energy Star is that it's difficult to measure any actual savings. The Energy Star website makes a number of cost-saving and environmental-benefit claims. Specifically, it claims that if all computers sold in the U.S. met the Energy Star standard, energy savings would be over $2 billion per year and the greenhouse gas emissions reduced would be equivalent to that of 3 million vehicles. That sounds great, but you should always be suspicious when claims of cost savings are made, and this is no different.

Considering the vast number of variables involved, savings would be difficult to prove. But there are opportunities to save on energy costs and potential savings from utility and tax incentives.

Many utilities offer incentives for customers to reduce power usage. But it may not be that simple, and you'll need to do a little homework. Some industrial users pay a flat rate set at peak usage to ensure they will get the power when needed. This means that reducing power consumption will have no effect on the power bill and you may actually be penalized for using less power.

A quick visit to your power company's website will provide information about tax credits and rebates available for energy-conservation initiatives, which may include Energy Star products. These products and rate savings vary greatly from one provider to another, so check the local power company's policies. For example, my local power company provides a program that offers commercial users a rebate of up to 50% of cost (up to $100) on Energy Star office equipment as well as rebates on other technologies. Companies upgrading their servers may see a small benefit from these types of rebates. See our list of data center utility rebates by state.

Figuring ROI for Energy Star

Now it's time to determine the ROI for Energy Star-rated computers and peripherals. Calculating ROI, in my opinion, demands proof of what I call "hard cost savings." If I am paying $200 per month on my home power bill and I convert to a number of Energy Star-compliant home products and the bill goes down to $150 per month, that is a hard cost savings. Unfortunately, many supposed cost savings are "soft" -- savings you can't actually prove on the balance sheet.


Figuring the savings is simple in principle -- just multiply the power savings by the number of devices. So if a new Energy Star server saves 10 kWh per month over the old server and you replace 10 of those servers, you save 100 kWh per month. Then it should be a matter of multiplying that savings by the cost of power. You can also toss in the savings of any local power company and/or government credit in purchasing the new equipment.
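As a quick illustration of that arithmetic, the sketch below multiplies the per-server saving and server count from the example above by an electricity rate and a rebate; the rate and the rebate are assumed values, not figures from the article.

servers_replaced = 10                  # from the example above
kwh_saved_per_server_month = 10        # from the example above
kwh_rate = 0.115                       # $/kWh, assumed
rebate_per_server = 50                 # assumed utility incentive per server

annual_energy_savings = servers_replaced * kwh_saved_per_server_month * 12 * kwh_rate
first_year_total = annual_energy_savings + servers_replaced * rebate_per_server
print(f"Energy savings: ${annual_energy_savings:.0f}/year; first year with rebates: ${first_year_total:.0f}")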

But there are other issues to consider that are harder to calculate. For example, less power means less heat, and this should (ideally) lower air-conditioning power costs. Energy Star servers typically won't cost more than non-Energy Star servers, so the amount of "investment" in Energy Star functionality is hard to measure in the first place. The real question in ROI is whether the move to Energy Star will have a meaningful effect on the payback for servers that you'd need to buy anyway. There are also costs involved in setting up and configuring the new equipment to enable its power-saving features, disposing of (or recycling) the old equipment, and so on. And these added costs may detract from the ultimate savings. Once you have a savings figure in hand, you can determine whether the move to Energy Star makes sense for your next technology refresh cycle.

Determining power savings to calculate ROI can be tricky when considering servers alone, simply because it's difficult to measure and record power data for an individual server. However, when you look at servers as a component of the data center, Energy Star provides tools to obtain quantifiable data. Review the Enterprise Server and Data Center Energy Efficiency Initiatives information for more details, and note the data collection form under the Energy Star Rating Development Process. This is a spreadsheet that program participants use to record power savings for submission to the Energy Star program. The value here is that it also details how to measure that data. By using this spreadsheet and the associated measurement techniques, power savings for the data center can be measured. Of course, this will include power savings from sources in the data center other than servers, but it will give measurable data for the ROI as well as move you toward an Energy Star-rated data center.

Making the case for Energy Star servers

A positive ROI on Energy Star servers will ultimately depend on the specifics of your own business situation. While you wouldn't run out and replace your current servers just for the sake of Energy Star functionality, I won't argue that there are savings and benefits to utilizing Energy Star-rated equipment and taking advantage of tax credits and power company and manufacturer incentives. I also won't question that these products actually do save power. But I will caution that they may not make a difference in your power bill or nullify the emissions of 10,000 cars. Power costs vary from one power provider to another and from one customer to another -- as I noted, you could see your power bill go up with less power usage. A few common-sense tactics can help you get the most from Energy Star adoption:


Contact your power company to determine if your contract takes advantage of power-reduction initiatives. See if there are options in another type of contract.

Don't assume you will save the world's environment by buying energy-efficient products. Be cautious of "green" advertising and do your homework on product claims.

When calculating ROI, be sure to use "hard" cost-saving estimates. Make sure you can get data to back them up two years from now when the CIO wants to know if you realized the savings.

Don't forget to add tax credits and incentives to the ROI calculation.

Consider using Energy Star's criteria for data center power efficiency as a method for collecting measurable data for calculating the ROI for power savings.


Optimizing Server Energy Efficiency

By: Julius Neudorfer

Data center energy efficiency is the hot topic of the day. IT operators are working to quantify and improve the efficiency of their data centers, and that means improving server energy efficiency as well.

Of course we all want the fastest, most powerful servers for our data center. Although energy efficiency (green!) is the buzzword, it seems that historically we think about energy usage only when our power or cooling systems are maxed out and need to be upgraded.

In the rush to optimize, virtualize and consolidate in the name of making computing-related operations more effective and efficient (and, of course, green), we've heard many server manufacturers profess that their products provide the most computing power for the least energy. Only recently have server manufacturers begun to discuss or disclose the efficiency of their servers. Currently there are no real standards for overall server energy efficiency.

There are several key components that impact the total energy consumed by a typical server.

Power supply

Fans

CPU

Memory

Hard drives

I/O cards and ports

Other motherboard components -- supporting chip sets

These components exist in both conventional servers and blade servers, but in the case of blade servers, some items -- such as power supplies, fans and I/O ports -- are shared on a common chassis, while the CPU and other related motherboard items are located on the individual blades. Depending on the design of the blade server, the hard drives can be located on either the chassis or the blades.

In addition to the components listed above, the OS and virtualization software impact the overall usable computing throughput of the hardware platform.

Don't judge a server by its nameplate

When we need to know how much power a server requires, we usually turn to the nameplate. However, the nameplate simply represents the maximum amount of power the unit could draw, not what it actually draws. Let's examine where power goes and what it really costs to operate a server. We don't always stop to think what it costs to operate a "small" server that typically consumes 500 W of power. That server also generates 500 W of heat load (approximately 1,700 BTU per hour) that must be cooled. The typical data center has a power usage effectiveness (PUE) of 2.0, which means that it uses 1 W of support power (power losses and cooling) for each watt of "plug power" delivered to the IT load itself. This means that it takes 1,000 W, or 1 kW, of power for the data center to run a small 500 W server. A single kilowatt does not sound like much in a data center until you factor in that it is consumed continuously (the proverbial 24/7/365), which adds up to 8,760 kWh per year!

At 11.5 cents per kWh, 1 kW costs $1,000 per year. (Of course, 11.5 cents is just an average, and in many areas the cost is much higher). Over a three-year period, that one "small" 500 W server can cost $3,000 or more just in energy consumption. In fact, since many of these small servers cost less than $3,000, you can see why some analysts have predicted that the power to operate a server will exceed the server's price, especially as the cost of energy rises.
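The arithmetic behind those figures is simple enough to lay out directly; the 11.5 cents/kWh rate is the average quoted above.

it_load_kw = 0.5                 # a "small" 500 W server
pue = 2.0                        # 1 W of support power per IT watt
rate = 0.115                     # $/kWh, the average quoted above

facility_kw = it_load_kw * pue               # 1.0 kW drawn continuously
annual_kwh = facility_kw * 8760              # 8,760 kWh per year
annual_cost = annual_kwh * rate
print(f"{annual_kwh:,.0f} kWh/year, about ${annual_cost:,.0f}/year, ${annual_cost * 3:,.0f} over three years")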

Let's examine where the power goes and what we can do to optimize it.

Power supplies

The power supply is, of course, where power enters the server and is converted from 120-240 V AC to 3.3, 5 and 12 V DC. Until recently, efficiency numbers were unpublished. In fact, the Environmental Protection Agency Energy Star Program, which mandated that all PCs have power supplies of at least 80% efficiency, specifically exempted servers! This is one area where a few extra dollars spent to purchase a server with an 80% or greater efficiency rating can pay back large returns in energy cost savings over the estimated operational three- to five-year life of the server.

The difference between a 70% and an 87% efficient power supply results in a 20% overall energy savings for server power usage (assuming that same internal server load), which also means a similar range of overall energy reduction for the data center.

Moreover, these efficiency ratings are usually provided only at the power supply's maximum rated load, which does not reflect the actual loads the server will see in production. Typically, a server draws only 30% to 50% of the maximum power supply rating (the number on the nameplate), which means that fixed losses in the power supply push real-world efficiency below the value rated at full load. And since we also want redundancy to improve uptime, we typically order servers with redundant power supplies. These redundant power supplies normally share the internal load, so each supplies only half of the actual load, which means that each power supply runs at only 13% to 25% of its rated load. At that point the fixed losses are an even greater percentage of the actual power drawn by the internal server components.
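To see why the efficiency spread matters, here is a quick sketch comparing the AC input required for the same internal load at 70% and 87% supply efficiency, and the load fraction seen by each half of a redundant pair. The 300 W internal load and 650 W supply rating are assumptions for illustration.

dc_load_w = 300                    # assumed internal (DC) server load

def ac_input(efficiency):
    return dc_load_w / efficiency

low_eff, high_eff = ac_input(0.70), ac_input(0.87)
saving = 1 - high_eff / low_eff
print(f"AC input: {low_eff:.0f} W vs. {high_eff:.0f} W -> {saving:.0%} less input power")

psu_rating_w = 650                 # assumed nameplate rating of each supply
share = dc_load_w / 2 / psu_rating_w
print(f"Each supply in a redundant pair runs at about {share:.0%} of its rating")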

When buying new servers, the least expensive unit may not be the best choice, even if the computing performance specifications are the same. When specifying a new server, this is one of the best places to start saving energy. If the server vendor doesn't publish or can't provide the power supply efficiency, think twice about whether the server is a good value.

In fact, if you're shopping for a large number of servers, it pays to invest in testing the total power drawn by the manufacturers and models you're considering, specifically when loaded with your OS and applications, both at idle and at full computing load. By spending an extra $50 to $100 on a more efficient server now, you may save several hundred dollars in total energy costs over the life of the server. Moreover, it may not be necessary to upgrade your power and cooling infrastructure.

Another method that can save about 2% to 3% in energy usage is to operate the servers at 208 or 240 V instead of 120 V, since power supplies (and the power distribution system) are more efficient at higher voltages.

Server fans

After the power supply, server fans are the heaviest consumers of power (other than the computing-related components themselves). As servers have become smaller, commonly packing several multicore CPUs into a 1U-high chassis, multiple small, high-velocity fans are needed to move a sufficient amount of air through the server. They need to push air through very small, restrictive airflow spaces within the server as well as through the small intake and exhaust areas at the front and rear of the chassis. These fans can consume 10% to 15% or more of the total power drawn by the server. The fans draw power from the power supply, thus increasing the input power to the server, again multiplied by the inefficiency of the power supply. In addition, most or all of the airflow in 1U servers is routed through the power supply fans, since there is virtually no free area on the rear panel to exhaust hot air.

To improve efficiency, many new servers have thermostatically controlled fans, which raise the fan speed as more airflow is needed to cool the server. This is an improvement over the old method of fixed-speed server fans that run at maximum speed all the time, but these variable-speed fans still require a lot of energy as internal heat loads and/or input air temperature rise.

For example, if the server internal CPUs and other computing-related components draw 250 to 350 W from the power supply, the fans may require 30 to 75 W to keep enough air moving through the server. This results in an overall increase in server power draw as heat density (and air temperature) rises in the data center. In fact, studies that measured and plotted fan energy use versus server power and inlet air temperatures show some very steep, fan-related power curves in temperature-controlled fans of small servers.

CPU efficiency

The CPU is the heart of every server and the largest computing-related power draw. While both Intel and AMD offer many families of CPUs, all aimed at providing more computing power per watt, the overall power requirement of servers has continued to rise along with the demand for computing power. For example, the power requirement for the Intel CPU varies from 40 to 80 W for a Dual-Core Intel Xeon processor to 50 to 120 W for a Quad-Core processor, depending on version and clock speed. As mentioned previously, many servers are configured with two, four or even eight dual- or quad-core CPUs. Naturally, we all want the fastest servers we can buy and hope they have a three-year usable life before the next wave of software or applications overwhelms them.

It has been well documented that the average CPU is idle over 90% of the time and only hits peak demand for very short periods, yet continuously draws a substantial portion of its maximum power requirement 24 hours a day. Moreover, even when servers are equipped with power-saving features in their hardware and software (as most are), these features are usually disabled by server administrators.

One of the primary goals of virtualization is to decrease the number of servers that are mostly running at idle and consolidate their functions and applications to fewer, more powerful servers that run a higher average utilization rate.

The number and types of CPUs you choose will ultimately depend on the performance requirements and computing loads your applications face. By trying to match the computing load with the number of CPUs and their performance capabilities, you will optimize the efficiency of each server.

Memory efficiency

When specifying the configuration of a server, memory is often overlooked as a factor that determines the overall actual power usage.

Memory chips vary widely from vendor to vendor, and their power consumption is usually not well documented. Generally speaking, the more memory there is per module, the lower the power per gigabyte of memory. Also, the faster the memory is, the more power it draws (this is tied to the speed of the server's memory bus and CPUs).

Example: A major manufacturer's server power estimator tool shows the following power directly attributable to memory for a 1U server equipped with a 5160 3.0 GHz CPU with 1.333 GHz FSB.

Total memory (GB)   Module size (GB)   Number of modules   Watts   Watts per GB
 8                   1                  8                   64      8.00
 8                   2                  4                   40      5.00
 8                   4                  2                   22      2.75
16                   2                  8                   81      5.06
16                   4                  4                   44      2.75
32                   4                  8                   89      2.78


Ideally, get as much memory as your application needs, but do not maximize server memory based on the belief that you can never have too much memory. Over-specified, unused memory increases initial costs and draws unnecessary power over the life of the server. Even though they sometimes cost more per gigabyte, larger, more energy-efficient memory modules can lower the amount of power consumed over the life of the server, and they leave more sockets open if you need to add memory in the future.

For example, simply using 4 GB modules rather than 1 GB modules consumes 42 W less per server. This saves $84 of energy costs per year, or $252 in total energy cost savings for the three-year typical life of the server (based on 42 W memory energy, plus 42 W of infrastructure support power, at a PUE of 2.0).
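The arithmetic, using the module wattages from the table and the 11.5 cents/kWh rate quoted earlier in this article (the article rounds the results to $84 and $252):

watts_saved_at_server = 64 - 22                      # eight 1 GB modules vs. two 4 GB modules
facility_watts_saved = watts_saved_at_server * 2.0   # PUE 2.0 doubles the saving at the meter
annual_cost = facility_watts_saved / 1000 * 8760 * 0.115
print(f"${annual_cost:.2f}/year, ${annual_cost * 3:.2f} over a three-year life")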

Hard drives

The capacity, physical density and energy efficiency of hard drives have outpaced the performance increases of many other computing components. We seem to have an insatiable appetite for data storage, which means that it is almost a zero-sum game. However, the power required by the newer, small-form-factor 2.5-inch drives is fairly low compared with the full-size 3.5-inch drives of a generation ago. (Remember when the typical server drive was 1.25-inch half-height and 5.25 inches wide? Seems like it was just last millennium!)

Also, since the magnetic density of the media continues to increase per platter, larger-capacity hard drives use the same energy as smaller-capacity drives (assuming the same drive type). For example, when in use, the enterprise 2.5-inch Seagate Savvio 15,000 RPM drive consumes approximately 10 W, and 6 W when idle; the 36 GB and 72 GB versions use the same power. Spindle speed has a direct effect on power draw: in the 10,000 RPM class, both the 146 GB and 300 GB drives consume 7 W when in use and 3.5 W when idle. Unless you have a specialized application that requires faster disk response, the 10,000 RPM drive offers far more storage per watt for general-purpose storage. Consider using the lower-power drives when possible -- the power savings add up.

Hard drive chart: Seagate Savvio enterprise 2.5-inch drives

RPM      Drive size (GB)   Idle watts   Active watts   GB per idle watt   GB per active watt
15,000    36               6            10              6                  3.6
15,000    76               6            10             12                  7.2
10,000   146               3.5           7             42                 21
10,000   300               3.5           7             85                 43

Recently, solid-state drives (SSDs) for notebooks have increased in capacity to as much as 512 GB and have started to come down in price. They'll soon make inroads into the server market, resulting in even more energy savings, especially when compared with 15,000 RPM drives.


Of course, check with your server vendor to see what your OEM drive options are.

I/O cards and ports

While most IT employees don't consider how much power is drawn by network interface cards (NICs) and I/O cards, these cards present an opportunity to save several watts per server. Some servers come with embedded cards, while others use add-on cards or a combination of both. The chart below shows the range of power used by the cards. Check the specs on your card of choice: the power is either listed in watts or given as current draw and voltage. To calculate the power in watts, multiply amps by volts (i.e., 1.2 A x 5 V = 6 W).

I/O card chart (watts per card)

Device                                   Low (W)   High (W)   Watts saved
Network card – Gigabit Ethernet (GbE)     3.3       22          7
Network card – 10 GbE                    10         25         12
RAID controller                          10         24         14
Fibre Channel                             5         20         10
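A tiny sketch of the amps-times-volts rule, and of what card choice adds up to across a fleet. The 100-server, two-NIC fleet is an assumed example; the per-card watt figures are taken from the GbE row of the chart above.

def watts(amps, volts):
    return amps * volts

print(watts(1.2, 5))                         # 6.0 W, the worked example above

low_power_nic_w, oem_nic_w = 3.3, 22         # GbE low/high figures from the chart
servers, nics_per_server = 100, 2            # assumed fleet for illustration
fleet_saving_w = (oem_nic_w - low_power_nic_w) * nics_per_server * servers
print(f"Choosing the lower-power NIC saves about {fleet_saving_w / 1000:.1f} kW across the fleet")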

When selecting an NIC, we tend to want the fastest throughput and often fail to consider power usage. For example, Intel makes several NICs ranging in power from the Intel PRO/1000 PT, which draws only 3.3 W, to a 10 Gb Dual Fiber XF card, which draws 14 W. In the case of OEM server NICs, a major manufacturer's power estimator tool indicates 22 W for its OEM PCI GbE card. Since many servers have embedded NICs, they may draw power even if they are disabled. If you intend to use multiple NICs for redundancy or throughput, a careful comparison of internal or OEM cards can save several watts per card.

Other motherboard components: Supporting chipsets

In order to form a complete system, each server requires its own supporting chipsets. It is beyond the scope of this article to compare the variety of systems on the market; this is where each vendor touts the claim that its server is the most energy-efficient system available. If the system motherboard is already equipped with enough onboard NICs, RAID controllers or other I/O devices to meet your requirements, you may not need to add extra cards.

Each major manufacturer seems to have a power estimating tool for its servers. It is not meant to be an absolute indicator of the actual power that the server will draw, but it will provide a good estimate and a way to compare different components and configurations.


Improving server efficiency: The bottom line

The chart below is a hypothetical comparison of two servers. As you can see, Server A uses lower-efficiency or older components, while Server B uses the latest, most efficient components.

Component          Watts used: Server A   Watts used: Server B   Watts saved   Percent saved
Fans                75                     50                     25            33%
CPU                100                     80                     20            20%
Memory (16 GB)      81                     44                     37            46%
Hard drives (6)     60                     40                     20            33%
I/O cards           30                     20                     10            33%
Motherboard         30                     20                     10            33%
Total DC power     376                    254                    122            32%
Power supply       125                     41                     84            67%
AC input power     501                    295                    206            41%

Power supply

Server   AC input (watts)   Efficiency   DC output (watts)   Losses (watts)
A        501                75%          376                 125
B        295                86%          254                  41

Carefully comparing and selecting more efficient components and configuration options can potentially result in a 41% power saving, or over 200 W. In a data center with a PUE of 2.0, each server can save up to 400 W.

All these factors help to determine how much power your data center consumes. Carefully specifying and configuring your servers to meet but not exceed your computing requirements can add up to a savings of $2 per year for each watt you conserve.

Put another way, each watt per server that is saved represents over 50 kWh over a three-year service life (8,760 hours x 3 years = 26,280 hours, which at a PUE of 2.0 works out to 52.56 kWh per watt). In the above example, if 200 W is saved per server, the result is an energy saving of more than 10 megawatt-hours per server over a three-year period.
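The arithmetic in compact form; the 11.5 cents/kWh rate is the average used earlier in this article.

hours_3yr = 8760 * 3                               # 26,280 hours of service life
kwh_per_watt_saved = 1 / 1000 * hours_3yr * 2.0    # ~52.6 kWh per watt saved, at PUE 2.0
per_server_mwh = 200 * kwh_per_watt_saved / 1000   # the 200 W example above
dollars_per_watt_year = 2.0 / 1000 * 8760 * 0.115  # ~$2 per watt per year at 11.5 cents/kWh
print(f"{kwh_per_watt_saved:.1f} kWh per watt saved; {per_server_mwh:.1f} MWh per server; "
      f"${dollars_per_watt_year:.2f} per watt per year")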


You can achieve a 10% to 20% difference in actual server power used in your production environment, which can save thousands of dollars per year in energy costs. It could also mean the difference between having to upgrade your data center or server room power and cooling and continuing to operate within the existing capacity of your infrastructure.

The last recommendation, and perhaps the simplest and most effective way to save energy, is to review the status and purpose of every IT device in your data center. Many studies have shown that a significant number of servers and other IT devices are no longer in production but are still powered up. No one seems to know which applications or functions they support, but no one wants the responsibility of switching them off. Take a total device inventory regularly -- you may find several servers, routers and switches that are unused yet still powered up. Once you find them, turn them off.


How Server Consolidation Benefits Your Data Center

By: Stephen J. Bigelow

Server consolidation increases the effective utilization of server hardware by allowing one physical server to host multiple virtual machine (VM) instances. Most traditional non-virtualized servers are utilized at only 5% to 10% of their total computing capacity. By adding a virtualization platform to the server (such as Citrix XenServer, VMware vSphere or Hyper-V in Windows Server 2008 R2), the server can operate its original workload as a virtual machine and host additional virtual workloads simultaneously -- often raising the total utilization of the physical server to 50% to 80% of its computing capacity.

But improved computing efficiency is only one benefit of server consolidation. With more workloads running on less hardware, power and cooling demands are also lowered. This translates into lower operating costs for the business and can also forestall capital-intensive facilities projects.

Server consolidation with virtualization also allows the flexibility to seamlessly migrate workloads between physical servers -- moving workloads at will or as needed. For example, a traditional server would have to be taken offline for maintenance or upgrades. With virtualization, all of the server's consolidated workloads can be migrated to a spare server or distributed among other servers, and then the original server can be shut down without any disruption to the workloads. Once the work is completed, the workloads can be migrated back to the original hardware. Workloads from a failing server can likewise be failed over or restarted on other servers, minimizing the effect of hardware problems.

Virtualization is also a boon to data protection, and workloads consolidated with virtualization can easily be copied with periodic point-in-time snapshots or replicated to off-site storage systems with little (if any) of the performance penalty experienced with traditional tape backup systems.

Even with a wealth of benefits, however, successful server consolidation requires a careful strategy. First, consolidation should be approached in phases. Start by virtualizing and consolidating non-critical or low-priority workloads so that administrators can gain valuable experience with server consolidation tools. With that experience, you can then systematically virtualize and consolidate more important workloads until you tackle the most mission-critical applications.

The distribution of those virtualized workloads can make a tremendous difference in the success of your consolidation project. Since each workload can demand different computing resources, it's important to measure the needs of each workload and allocate workloads so that the underlying host servers are not overloaded -- a process known as "workload balancing". For example, it's often better to distribute CPU-intensive workloads on different servers rather than putting them on the same server. This prevents resource shortages that can cause workload performance or stability problems.
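As a rough illustration of that balancing step, the Python sketch below greedily places each virtual machine on the host with the most remaining CPU headroom so no single server is overloaded. The host capacities, VM names and demand figures are hypothetical, and real consolidation tools weigh memory, storage and I/O as well.

```python
# Minimal sketch of "workload balancing": greedily place VMs on the host
# with the most remaining CPU headroom. All capacities and demands are
# hypothetical illustrations.

from typing import Dict, List, Tuple

def balance(vms: List[Tuple[str, float]],
            hosts: Dict[str, float]) -> Dict[str, List[str]]:
    """Assign each (vm_name, cpu_demand) to the host with the most free CPU."""
    free = dict(hosts)                       # remaining capacity per host
    placement = {h: [] for h in hosts}
    for name, demand in sorted(vms, key=lambda v: v[1], reverse=True):
        host = max(free, key=free.get)       # least-loaded host first
        if free[host] < demand:
            raise RuntimeError(f"No host has room for {name}")
        free[host] -= demand
        placement[host].append(name)
    return placement

vms = [("db", 8.0), ("web1", 2.0), ("web2", 2.0), ("batch", 6.0)]
hosts = {"esx-01": 16.0, "esx-02": 16.0}
print(balance(vms, hosts))   # CPU-heavy "db" and "batch" land on different hosts
```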


Measuring Server Energy Efficiency

By: Dan Diesso

Increasing power densities and soaring energy costs make measuring the energy efficiency of servers more important than ever. Obtaining these metrics requires the proper tools and methods to achieve accurate results.

Many people attempt to measure power using an amp meter. There are two problems with this approach.

First, power is measured in watts, not amps. Watts = amps x volts x power factor. Attempting to use an amp meter to measure power forces you to guess at the voltage and power factor of the circuit, creating the potential for considerable error.

The second problem is that both amps and watts are instantaneous measurements. Even an accurate power measurement only tells you about a server's "performance" at that particular moment.

To accurately measure a server's efficiency while running benchmark tests, you need to measure the cumulative power consumption over the entire test. Cumulative power consumption over time is an energy metric measured in watt-hours. For this you need an electric watt-hour meter. Watt-hour meters are designed to continuously monitor the amperage, voltage and power factor of a circuit to accurately determine the true energy usage. Once you have energy measurements for various servers running the same benchmark tests, you have a way to compare the workload achieved for the amount of energy consumed.
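A dedicated watt-hour meter does this continuously in hardware, but the idea can be illustrated with a short Python sketch that integrates evenly spaced power samples into watt-hours and compares two servers on work completed per watt-hour. The sample readings and benchmark figure are hypothetical.

```python
# Sketch: approximate cumulative energy (watt-hours) from a series of
# instantaneous power samples taken while a benchmark runs, then compare
# servers by work done per watt-hour. Sample data are hypothetical.

def watt_hours(samples_w, interval_s: float) -> float:
    """Integrate evenly spaced power readings (watts) into watt-hours."""
    return sum(samples_w) * interval_s / 3600.0

# Power logged every 10 seconds during the same benchmark on two servers:
server_a = [310, 325, 330, 328, 315, 305]
server_b = [240, 255, 260, 258, 250, 245]

wh_a = watt_hours(server_a, 10)
wh_b = watt_hours(server_b, 10)
benchmark_ops = 1_000_000   # identical workload completed by both servers

print(f"Server A: {wh_a:.2f} Wh, {benchmark_ops / wh_a:,.0f} ops/Wh")
print(f"Server B: {wh_b:.2f} Wh, {benchmark_ops / wh_b:,.0f} ops/Wh")
```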

There are a few important things to consider when choosing an appropriate watt-hour meter. One is the resolution of the meter. Many watt-hour meters are intended to monitor large loads over long periods of time and accumulate their readings in kWh (kilowatt-hours). This means the accumulator only increments each time there is 1,000 watt-hours of energy usage. Attempting to use such a low-resolution meter on smaller circuits over short periods is like using a calendar to measure a 100-yard dash. A high-resolution meter that measures in watt-hour resolution or better is needed for this type of application.

Watt-hour meters come in many form factors. Some meters can simply be plugged inline between the load and source using standard plugs and receptacles. Other meters require you to tap into the circuit to obtain the voltage measurements and clamp a current transformer around a conductor of the circuit to obtain the amperage measurements. There are also branch circuit watt-hour metering systems available that can measure the energy usage of each individual circuit within a power distribution unit.

A functional consideration is that some watt-hour meters provide network connectivity for automated reading, while others must be read visually from a digital display. A big advantage of meters with network connectivity is that you can easily log a series of readings for trending and analysis.

Power and energy measurements are important metrics for data center operators today. In addition to determining equipment efficiency, these metrics also indicate heat loads to assist with cooling management. Energy measurements can also be used to allocate energy costs to those responsible for the consumption, creating an ongoing incentive to deploy energy-efficient equipment.


EPA Releases Energy Star Server Specification

By: Mark Fontecchio

The federal Environmental Protection Agency this week released a final Energy Star computer server specification, which covers most machines with one to four sockets. It has been more than two years since the EPA began to consider an Energy Star label for servers. Prompted by the IT industry's interest in data center power consumption, the Energy Star spec went through multiple drafts, and is still far from complete. As of now, it covers standalone servers with one to four processor sockets. Expected in October 2010, a second tier to the specification will cover servers with more than four processor sockets, blade servers, and fault-tolerant machines, among other things. Energy Star is an EPA labeling program meant to help consumers pick out energy-efficient products. The program currently includes scores of items, including ceiling fans, dishwashers and desktop computers. If a manufacturer qualifies its product, it can slap an Energy Star label on it, and the product information can also be displayed on the manufacturer's and the Energy Star Web sites.

"I really think it's an important first step," said Andrew Fanara of the Energy Star's product development team. Fanara helped spearhead the process of getting a spec for servers. "I think you will start to see businesses and government agencies change their procurement policies to buy Energy Star unless you have a really good reason not to."

Here are the basics of the new benchmark:

The spec includes a matrix for power supply efficiency requirements. If the server has a multi-output power supply, for example, the supply should be at 82% efficiency when the server is at full load.

The spec also sets power consumption limits for when the server is idle. For a single-socket server, the limit is 65 watts; for four-socket servers, the limit is 300 W. Allowances are made for additional installed components (such as 20 W for another power supply).

Manufacturers must provide a "power and performance data sheet" with each server, or each server class, detailing power consumption at various load configurations.
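As a rough sketch of how the idle limits above might be applied, the Python below uses only the figures cited in this article -- 65 W for single-socket servers, 300 W for four-socket servers, and a 20 W allowance per additional power supply -- plus a hypothetical measured value. It is an illustration, not the Energy Star test procedure.

```python
# Rough sketch of the idle-power check described above, using the idle
# limits quoted in this article. Only the 1- and 4-socket limits are
# cited here; the measured value is hypothetical.

BASE_IDLE_LIMIT_W = {1: 65, 4: 300}

def idle_limit_w(sockets: int, extra_power_supplies: int = 0) -> float:
    """Idle limit plus the 20 W allowance per additional power supply."""
    if sockets not in BASE_IDLE_LIMIT_W:
        raise ValueError("Only the 1- and 4-socket limits are quoted here")
    return BASE_IDLE_LIMIT_W[sockets] + 20 * extra_power_supplies

measured_idle_w = 72
limit = idle_limit_w(sockets=1, extra_power_supplies=1)   # 65 + 20 = 85 W
print(f"Limit {limit} W, measured {measured_idle_w} W, "
      f"{'within' if measured_idle_w <= limit else 'over'} the limit")
```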

Different end users will use the Energy Star spec in different ways, reported Fanara. For many large organizations already doing their own rigorous testing, the Energy Star server rating could be "a little simplistic for them" but could still augment their understanding of "the energy profile of these products."

"This will probably get more use in smaller organizations, because they have less resources to go out and get all this information on their own," he said.

Major server manufacturers are already submitting their products for Energy Star approval. Hewlett-Packard Co. said this week that two of its servers -- the DL360 and DL380 G6 -- now meet Energy Star requirements, and it expects that seven more servers will be added to the list soon. IBM and Sun Microsystems have touted similar offerings. "This is a great first step, but it's not a complete spec," said Subodh Bapat, a distinguished engineer at Sun. "It's a good start toward finding out which servers are better than others on an energy basis."

What else is coming? The Tier 2 spec, in addition to covering more classes of server, will also look to define a metric that compares server performance with energy consumption. Finding that magic number -- or as Fanara speculates, numbers plural -- could take a while.

"I've had a number of emails regarding the spec saying that they want to be able to identify efficient hardware, but they also want to know how efficiently the server computes," he said.

The EPA is also working on an Energy Star spec for data center facilities and is collecting data from volunteer data centers now. Fanara said his group hopes to have a framework document for an Energy Star specification for data storage equipment out in the next two weeks.


Data Center Managers Indifferent to Energy Star for Servers

By: Mark Fontecchio

Many data center managers are indifferent to the EPA's new Energy Star program for servers, saying they're unsure of the impact -- if any -- the server specification will have on their purchasing decisions.

Pete Simpson, the data center operations director at Indianapolis-based insurance claims company Real Med Corp., said the Energy Star rating is "irrelevant" to him.

"The major manufacturers will all produce compliant equipment as they rev their products, so over time these will become standard features of all but the el-cheapo white-box server manufacturers," Simpson said. Besides, being "green" isn't a company priority right now; more important factors are reliability, processing power, and memory access times, among other things.

"We currently have plenty of power and cooling capacity, so I'm not up against a wall for more power efficiency and less heat load," he said.

Paying a premium for energy efficiency?

The Energy Star program covers dozens of household appliances, including washing machines, ceiling fans, and laptop computers. This month, the EPA rolled out the first version of its Energy Star spec for servers, capping more than two years of work developing a federal metric for server energy efficiency.

All the major server manufacturers plan to contribute to the program, although the Energy Star's current list includes only four Intel-based HP ProLiant servers. For now, vendors said they don't expect to charge a premium for Energy Star-compliant servers, but that could change. "The focus right now is on the energy-efficient design of our systems," said Elisabeth Stahl, an IBM chief technical strategist. "At this point in time, for Power Systems as an example, a premium is not anticipated." Stahl added that "if system components were changed to meet requirements, that could affect the total system configuration."

HP and Sun Microsystems echoed Stahl's sentiments. Subodh Bapat, a distinguished engineer at Sun, said the company's focus is now on the "evaluation process" but that it expected to be competitive on price. HP, meanwhile, said that it will not charge a premium for Energy Star servers.

Virtualization challenges Energy Star's value

The relevance of the Energy Star qualification is especially questionable in larger data centers doing server virtualization. At least for the first version of the Energy Star spec, the only servers that can qualify must have four or fewer processor sockets. The EPA is developing a second version of the spec, but it's not due out until next year.


Some data center managers will take a close look at Energy Star qualified servers, but only if they meet their needs. Timothy Happychuk, the IT director at the Canadian media company Quebecor, said that "smaller servers with [fewer] CPU cores and more aggressive ramp down technologies will naturally have an easier time gaining a coveted EPA sticker but would be a poor choice for high-density virtualization platforms as the technology currently stands."

Nor would buying Energy Star servers necessarily be better for the environment, Happychuk said. "If I buy 10 low-power, EPA-approved servers as opposed to one high-end platform for hosting high-density virtual systems that may just miss the criteria, does the combined material, production, processing and ongoing operational costs for the former solution actually outweigh the true environmental cost of the latter?" he added.

Lance Kekel, a data center manager at a jewelry company in the Midwest, added that many of his counterparts in IT might not even know about the Energy Star spec yet. And once they do, it could take a while before it has any effect on purchasing decisions.

"While I've mentioned this to my senior management team I do not know if there have been any discussions with those that purchase the servers," he said. "While I'm responsible for the physical room and the repair and maintenance of servers once on the floor, I don't play much of a role in the initial purchase phases."


Energy Efficient Data Center Cooling

Air Flow Management Strategies for Efficient Data Center Cooling

By: Vali Sorell

One of the most common complaints that design engineers hear from data center owners and operators is that they need additional cooling capacity because the existing system doesn't maintain an acceptable temperature at the data equipment inlets. But in most cases, the problem isn't one of insufficient capacity, but of poor air flow management. The good news is that adopting a strategy to improve data center air flow results in two positive changes. First, by reducing the amount of air that needs to be supplied, less energy is used for data center cooling. Second, temperature distribution across cabinets is improved. Improving air flow in a facility requires that all the air flow supplied to the data room produces effective cooling. Air flow waste should be minimized. To understand the implications of this goal, it is important to understand the basics of heat transfer.

Basic heat transfer calculation

The basic equation of heat transfer for air at sea level is Q = 1.085 x ∆T x CFM.

Q is the amount of heat transferred (in BTU per hour). 1.085 is a constant that incorporates the specific heat and density of air (at sea level and 1 atmosphere). ∆T is the rise in temperature of the air (in degrees Fahrenheit). CFM is the air flow (cubic feet per minute).

Computer equipment moves air through the use of internal fans to remove heat from the processors and internal circuitry. An air-handling unit (AHU) moves air with its own fan to remove the aggregate heat load generated by the computer equipment (the IT load). Unfortunately, these two air flows are rarely the same. However, the heat transferred from the IT equipment to the AHUs is the same, so the basic equation can be restated for each side.

Q_AHU = 1.085 x ∆T_AHU x CFM_AHU

Q_IT = 1.085 x ∆T_IT x CFM_IT

Q_AHU = Q_IT

Reconfiguring and simplifying, the equation can be expressed as: CFM_AHU = CFM_IT x (∆T_IT / ∆T_AHU) (Equation 1)
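A short Python sketch of Equation 1, using hypothetical airflow and temperature-rise numbers, shows how quickly the required AHU airflow grows when bypass drives down the AHU-side temperature rise.

```python
# Worked example of Equation 1 under the stated sea-level assumptions:
# the airflow the AHUs must supply grows as the AHU-side temperature
# rise shrinks relative to the IT equipment's temperature rise.

def cfm_ahu(cfm_it: float, delta_t_it: float, delta_t_ahu: float) -> float:
    """CFM_AHU = CFM_IT x (dT_IT / dT_AHU)."""
    return cfm_it * (delta_t_it / delta_t_ahu)

# Hypothetical numbers: servers move 10,000 CFM with a 20 F rise.
print(cfm_ahu(10_000, 20, 20))   # 10,000 CFM when the two rises match
print(cfm_ahu(10_000, 20, 10))   # 20,000 CFM when bypass halves the AHU rise
```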

The biggest culprit leading to poor air flow conditions in data centers is bypass air flow. Bypass air flow is cold supply air that does not lead to productive cooling at the IT load. In essence, it passes around the load and mixes with warm room air before returning to the AHU. For comfort cooling applications, this mixed condition is not only acceptable, it's considered good practice. However, that is not the case for a data center environment. The occupants of a data center are the cabinets, and the cabinets' comfort is gauged strictly by conformance with an industry-accepted thermal envelope (or range) that applies only at the inlets of the datacom equipment. (A temperature in excess of 100°F on the backside of the server, or anywhere else on the server other than at the air inlets, is irrelevant to the server's comfort.)

Recirculation air is bypass air's partner in crime. When an insufficient amount of supply air (CFM_AHU) is delivered to the equipment inside the cabinets (because the bypass component is large), the server fans pull air (CFM_IT) from the most immediate source -- the warm air circulating nearby. For a fixed source of CFM_AHU, the larger the proportion of the flow that goes to bypass, the larger the amount made up by recirculation air will be.

In order to guarantee that server inlet temperatures don't exceed the maximum recommended temperature, the most immediate solution may seem to be to use colder supply air. But since that doesn't change the proportion of air going to bypass, some servers will still be subjected to recirculation air, which could put the servers at risk. The users of the space will conclude that if there are hot spots, there is insufficient cooling available. One could lower the AHU supply air temperature even further until the hot-aisle temperatures fall under the maximum recommended temperature of the servers. With this approach, recirculation wouldn't appear to pose a significant problem since the recirculated air is still low enough for the servers. However, this approach is wasteful because it forces the supply air temperatures down to the mid 50s. (Isn't this the way data centers used to operate?) At these low supply air temperatures, the plant operates less efficiently, the AHU coil dehumidifies (forcing the system to add moisture back to the space to maintain a minimum space dew point), and the hours of outdoor air cooling are severely reduced.

The other solution would seem to be to add more air. That approach doesn't work either. Looking at Equation 1 above, one can see that increasing CFM_AHU must decrease ∆T_AHU. Keep in mind that Q_IT doesn't change regardless of what happens with the air flow. Q_AHU is the load cooled by the sum total of all AHUs, regardless of how many AHUs are available, and will always equal Q_IT. The only thing achieved by increasing CFM_AHU is that the cold supply air will eventually reach the tops of the cabinets, and presumably the warmest server inlets, by brute force. But at what cost? How much bypass air must result in order to address those hot spots? How can one tell that this is happening in a given data center? The answer is straightforward -- by looking at the ∆T_AHU at all the AHUs. If the average ∆T_AHU is half of the average ∆T_IT, then the AHUs are pushing twice as much air as the server fans need.
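The same relationship can be turned into a quick diagnostic sketch: feed in the averaged temperature rises and it reports the implied oversupply ratio and, in this simplified model, the share of supply air that is bypassing the IT load. The example numbers are hypothetical.

```python
# Sketch of the diagnostic above: compare the average temperature rise
# across the AHUs with the average rise across the IT equipment to see
# how much more air the AHUs are moving than the servers actually need.

def oversupply_ratio(avg_dt_it: float, avg_dt_ahu: float) -> float:
    """CFM_AHU / CFM_IT implied by the measured temperature rises."""
    return avg_dt_it / avg_dt_ahu

ratio = oversupply_ratio(avg_dt_it=20.0, avg_dt_ahu=10.0)
print(ratio)                      # 2.0 -> AHUs push twice the air the servers need
print(f"{(1 - 1 / ratio):.0%}")   # ~50% of the supply is bypassing the IT load
```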

Containment is one approach that completely eliminates bypass and recirculation air. By closing off the hot and cold aisles (or ducting the hot return air out of the cabinets), the air flow dynamics within the data center are forced such that CFM_AHU = CFM_IT. This in turn forces ∆T_AHU to equal ∆T_IT.


Why, then, don't all data centers use containment? Some users don't like that containment restricts access to the cabinets, cable trays or aisles. A less obvious problem is that containment requires a carefully planned control strategy to prevent excessive pressure differences between hot and cold aisles. If the pressurization control strategy is wrong, the server fans could starve for air, which could cause them to increase speed in order to maintain acceptable processor temperatures. The result is that the servers will increase CFM_IT to the maximum amount, and the servers' energy consumption will increase.

Anecdotally, it appears that few data centers operating at less than 200 watts per square foot use containment. The simple truth is that with good air flow management strategies, the effects of bypass and recirculation air flows can be mitigated. The remainder of this article addresses these strategies as they relate to non-contained spaces.

Data center air flow control best practices

Create hot and cold aisles. The most obvious air flow management strategy is to separate hot and cold air streams by arranging all the cabinets in parallel rows with the inlet sides of the servers facing each other across an aisle (this forms a cold aisle). This is the first step toward preventing a well-mixed thermal environment. Closing gaps between adjacent cabinets within each lineup also helps to reduce bypass and recirculation air flows.

Install blanking panels in all open slots within each cabinet. It's easy to forget that bypass and recirculation can occur inside cabinets. An air flow management system cannot effectively cool the equipment in a cabinet without eliminating internal paths of bypass and recirculation. Blanking panels reduce these air flows and are considered a must for proper air flow inside a cabinet. Recognizing that blanking panels are frequently removed and not replaced during installation or removal of hardware within a cabinet, it would make sense for the IT staff to populate equipment from the bottom of the cabinet up, making sure there are no gaps between servers. In this manner, internal recirculation can be minimized.

Place perforated tiles in cold aisles only. Placing perforated tiles, or perfs, in any location but cold aisles increases bypass. There is never a justification for placing perforated tiles in hot aisles unless it's a maintenance tile. A maintenance tile can be carried to where work is being done in a hot aisle. An IT employee can work in a hot aisle, standing on the tile in relative comfort, but the tile should not be left in the hot aisle permanently.

Use air restrictors to close unprotected openings at cable cutouts. A single unprotected opening of approximately 12" x 6" can bypass enough air to reduce the system cooling capacity by 1 kW of cabinet load. When each cabinet has a cable cutout, a large proportion of the cooling capacity is lost to bypass.

Seal gaps between raised floors and walls, columns and other structural members. Sealing the spaces between the raised floors and room walls is a no-brainer. Those gaps are easily identified by a simple visual inspection. A more subtle form of bypass can be found when column walls are not finished above the ceiling and below the floor. Often, the sheet rock used to enclose a column forms a chase for direct bypass of air into the return air stream. These chases must be sealed to reduce bypass air flow.

Select the appropriate tiles. Frequently, users address air shortage and hot spots by installing high-capacity grates in the floor near the hot spots. Grates typically pass three times more air than perfs at a given pressure difference. Although placing grates at the hot spots may seem to solve the problem, it actually makes things worse. When grates are installed in a raised-floor environment dominated by perfs, and that under-floor space is maintained at a fixed pressure, the output of the grate is such that the air blows off the top of the aisle with very little captured at the cabinets. A typical grate will pass 1,500 CFM at 0.03" (a typical under-floor pressure for perfs). Most of that air, with a capacity to cool up to 10 kW, will be bypassed, forcing the user of the space to run more AHU capacity and lowering the ∆T_AHU.

It's important to decide the width of each cold aisle early in the data room planning process since the aisle's width determines the amount of cooling that can be delivered to it. If perfs will be used, all the cold aisles that share the under-floor plenum should be supplied with perfs. If the space will be subjected to higher loads, grates should be used in all cold aisles that share the same under-floor plenum. In addition, that under-floor plenum pressure should be reduced to approximately half of what is typically used for perfs in order to avoid the bypass air associated with the air blowing off the top of the cold aisle.

Manage the placement of perforated tiles by cold aisle. Calculate the load by cold aisle and place an appropriate number of perfs or grates (but not perfs and grates) to cool the load in that aisle. Placing too few tiles in the cold aisle will cause recirculation. Placing too many will increase the amount of bypass. If one needs to choose between a little recirculation and a little bypass, the latter is always the better deal.

The user of the space must keep track of the load by cold aisle. When the cold-aisle loads change, the number of tiles must be adjusted accordingly.
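As a rough illustration of sizing tile counts by cold-aisle load, the sketch below applies the article's sea-level heat-transfer constant and assumes a 20-degree Fahrenheit server temperature rise and a placeholder 500 CFM per tile; substitute the rated flow of your own tiles at your actual under-floor pressure.

```python
# Rough sketch, assuming the sea-level constant from the heat transfer
# equation above (Q = 1.085 x dT x CFM) and a 20 F server temperature
# rise, of sizing the number of perforated tiles for one cold aisle.
# The 500 CFM-per-tile figure is a placeholder, not a rated value.

import math

BTU_PER_HR_PER_KW = 3_412

def aisle_cfm(load_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow needed to carry the aisle's IT load at the given rise."""
    return load_kw * BTU_PER_HR_PER_KW / (1.085 * delta_t_f)

def tiles_needed(load_kw: float, cfm_per_tile: float = 500.0) -> int:
    return math.ceil(aisle_cfm(load_kw) / cfm_per_tile)

print(aisle_cfm(10))      # ~1,572 CFM for a 10 kW cold aisle
print(tiles_needed(10))   # 4 tiles at 500 CFM each
```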

There are many factors involved in determining the optimum amount of bypass. Without these best practices to reduce bypass and recirculation air flows, the amount of bypass could be such that CFM_AHU is 50% to 100% larger than CFM_IT. With the best practices presented here, it may be possible to achieve a disparity of 25% or less.


Lowering Data Center Cooling Costs with Airflow Modeling and Perforated Raised-Floor Tiles

By: Lucian Lipinsky de Orlov

In the "good ol' days" of mainframes -- now called enterprise servers -- the objective was to cool the whole data center, keeping everything at one uniform temperature of around 55 degrees Fahrenheit (13 degrees Celsius). Now that we've disguised blast furnaces as server racks, the data center cooling model has changed. We now aim to cool individual pieces of equipment and are not as concerned with overall ambient temperatures.

We can also learn a great deal about data center cooling from the McDLT sandwich that McDonald's introduced in 1985. A physical barrier in the packaging kept the hot part of the burger (the patty) hot and the cold side of the meal (lettuce and tomato) cool. McDonald's patented packaging is analogous to today's data center cooling strategy -- don't mix hot and cold air. An Uptime Institute study of 19 computer rooms with a total of 204,000 square feet determined that most facilities are ill prepared to remove current heat loads, let alone those of blade servers. One data center in the study had 25% of its servers running hot, yet the room cooling capacity was 10 times what it needed to be. That equates to 10 times the cooling costs -- a significant and unnecessary expense that offered an opportunity to effortlessly reduce data center costs. Every computer room air conditioner (CRAC) taken out of service can save around $10,000 annually in maintenance and operational expenses.

Airflow modeling

The basic issue is that data center airflow is rarely planned based on data and facts. It's hard to manage what can't be seen. Every CIO or CTO should require that his data center manager model the facility's airflow and use the results to architect tile layout and equipment placement.

Two camps exist: those who believe that actual temperature and airflow should be measured with probes and meters, and those who believe that mathematical modeling is sufficient. The use of actual measurements may be more accurate (at a point in time), but it takes a great deal of time and, more importantly, doesn't enable the modeling of new equipment arrangements or the design of new data centers.

Mathematical airflow models have been validated by predicting expected results and testing them against actual measurements in real data centers. The differences between the two sets of measurements were statistically insignificant. Through the use of air flow modeling, a consistent set of data can be used to compare existing and future designs. Modeling also enables testing of component failure and the impact of a loss of an air conditioner to equipment operations.


Data center tile placement

Most people involved in data center operations are generally not aware of the science behind the placement of perforated raised-floor tiles and which percentage of perforation to use. When asked why a particular decision was made for a tile's placement, the answer is usually, "Because it felt warm here."

Modeling can also demonstrate the impact of unmanaged bypass airflow, or cold air that is not used to cool equipment before returning to the CRAC. This typically includes cable openings in the raised floor and other places where under-floor air leaks out. One study found that perforated tile airflow improved by 66% just by sealing cabling openings. This led to a 3 kW increase in available rack power and an elimination of hot spots. Fifty-three percent of cooled air can escape through such gaps without removing any equipment heat. In order to maintain a constant operating temperature, a server needs an amount of cool air equivalent to the power it consumes. Bypass air, blockages, recirculated hot air and airflow restrictions all affect this, but the front of a piece of equipment, where it draws in cool air, needs a very specific amount of air measured in cubic feet per minute (CFM). The typical plan is also designed so that air exiting the equipment rises 20 degrees Fahrenheit (11 degrees Celsius) above the air entering the equipment.

The two-foot-by-two-foot tile directly in front of a server rack should be the sole source of that rack's cooling needs. All the airflow coming out of the tile should be drawn into the rack and the installed equipment.

Perforated tiles and the magic number

The question is what size perforations the tile should have. An optimal selection needs to be made so that equipment temperature is maintained while wasted cooling capacity, flow and pressure are limited. The first step is to determine peak power consumption -- that's the number listed on the placard, typically on the back of the equipment. However, that number does not represent what the equipment actually draws. The real number is somewhere between 25% and 60% of what's on the placard, and it depends on whether the server or equipment is running constantly or only during certain periods. Using 45% to 50% of the rated power draw is a good target. Today, the leading practice is to install active energy monitoring and management software to identify and track component power consumption.

The heat management of servers and other data center equipment is designed to increase the incoming air for heat removal by 20 degrees Fahrenheit while maintaining a consistent internal equipment temperature. There is a relationship between heat load (the equipment power consumption) and airflow rate (the cold air needed to maintain the desired air temperature rise). If desired temperature rise is a constant (20 degrees Fahrenheit), the impact of air density and the specific heat of air can be reduced to a simple constant, or magic number, of 154. (The actual fluid dynamics details are beyond the scope of this article.)


Once the expected power draw of all the components in the rack is calculated, the rack's total power need is known. This needs to be converted into the cooling needs of the rack, and that's where the magic number of 154 comes into play (with some adjustment required for altitude). The total rack power consumption in kilowatts multiplied by 154 gives the total airflow in CFM required to maintain the appropriate temperature for that equipment.

For example, if the total power consumption of a rack is 2.5 kW, then 385 CFM (2.5 times 154) is needed to take 55-degree air and raise it 20 degrees to 75 degrees Fahrenheit (24 degrees Celsius), which is the typical CRAC return air temperature setting. So what tile perforation -- measured in perforation percentage -- is needed? It depends on many factors, and this is why airflow modeling is required.
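A minimal Python sketch of this sizing rule follows, assuming the 45% placard derating suggested above and the magic number of 154 for a 20-degree Fahrenheit rise at sea level; the 5.5 kW placard total is a made-up example.

```python
# Sketch of the "magic number" sizing rule above: rack airflow in CFM is
# roughly 154 x rack power in kW, assuming a 20 F rise at sea level.
# The placard derating follows the 25%-60% range cited above.

def expected_draw_kw(placard_kw: float, derate: float = 0.45) -> float:
    """Estimate real power draw as a fraction of the placard rating."""
    return placard_kw * derate

def rack_cfm(rack_kw: float, magic_number: float = 154.0) -> float:
    """Airflow needed from the tile in front of the rack."""
    return rack_kw * magic_number

kw = expected_draw_kw(5.5)        # 5.5 kW placard total at ~45% -> ~2.5 kW
print(round(kw, 2))               # 2.48
print(round(rack_cfm(2.5)))       # 385 CFM, matching the worked example above
```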

There's no easy way to determine tile airflow output. Either the airflow is measured with an air volume meter or the airflow is modeled. And knowing what the flow is at a specific location does not identify the flow even one tile over. Given the broad set of influencing factors, what is sufficient at one tile may be completely wrong on the adjacent tile.

Get the airflow low and wrong, and the servers will run hot, which could affect equipment reliability. Get the airflow high and wrong, and you're wasting money that could otherwise be used for value-adding activities. Wasted airflow in one spot means that equipment in another location may not receive the cooling it needs.

It is also possible that the results will show that not enough CFM can be delivered through a floor tile -- after all, a hurricane can't be pushed through a keyhole. In those cases, alternative cooling techniques are needed. These include the use of water, which is up to 4,000 times more efficient than air at removing heat; point cooling solutions, which cool from above or from the sides of racks; and rear-door and side-car heat exchangers. These solutions can remove as much as 70 kW of heat from a single cabinet and can help to dramatically reduce data center floor space requirements through significant improvements in equipment density.

Take steps toward efficient data center design

With this information, data center managers can walk through their facilities today and quickly identify 15% tactical power savings through bypass airflow mitigation. A more strategic set of activities is to model the data center's airflow and redesign the raised-floor layout by appropriately placing perforated tiles based on the numbers. For financially strapped IT organizations (which organization isn't in these economic times?), not optimizing rack cooling is tantamount to IT malpractice.


United Parcel Service's Tier 4 Data Center Goes Green

By: Matt Stansberry

Let's face it: A lot of green data center case studies are pretty worthless. Vendors and customers pat one another on the back for buying green products and offer vague promises to save energy in data centers over a period of time.

But the facilities department at United Parcel Service of America Inc.'s Alpharetta, Ga., site is about to save you a lot of money on your data center air-conditioning bill today. Joe Parrino, data center manager at UPS' Windward data center, also explains his organization's load-shedding process and proves that using outside air to cool a data center can work -- even in the hot temperatures of the southeastern U.S.

Brown goes green in the data center

UPS' Windward data center bucks the conventional wisdom. Old data center facilities are supposed to be inefficient, and outdated mechanical systems are primarily to blame. Even worse, considering the amount of redundancy designed into the facility to prevent downtime, an Uptime Institute Tier 4-rated data center would have to be a real energy hog.

But somehow the 13-year-old, Tier 4 facility in Alpharetta scores a power usage effectiveness (PUE) as low as 1.9 or, in the Uptime Institute's parlance, SI-EER. This ratio is the power going into the facility at the utility meter divided by the power going to the IT load, measured either at the power distribution unit or the uninterruptible power supply.

In the case of the Windward data center, PUE was measured at the output of the uninterruptible power supply; measuring the output of the PDU was too difficult. For a more detailed discussion of the differences in measuring at the power distribution unit versus at the uninterruptible power supply, listen to the podcast "Where to measure IT vs. infrastructure power use: PDU or UPS?" with Pitt Turner.

According to the Uptime Institute, the average ratio is 2.5. This means that for every 2.5 watts going "in" at the utility meter, only 1 watt is delivered out to the IT load. In this regard, United Parcel Service's Windward data center is way ahead of the curve. But how did the company do it?

Cutting out the air handling units

Forced-air cooling is one of the least efficient systems in data center infrastructure, and wasting cold air is the most common mistake in data center management. You can set up hot aisle/cold aisle, install blanking panels, and seal gaps in the floor, but you've probably still wasted cold air in a place you wouldn't expect: the perforated top of power distribution units.

Parrino's staff learned this by chance. The team noticed the perforated roof on a PDU as it sat in a hallway waiting for installation. They took airflow measurements on several installed units using a velometer and calculated the cubic-feet-per-minute (CFM) loss (i.e., the velocity of the air multiplied by the square footage of the opening). United Parcel Service determined the units lost 2,000 CFM per PDU.

"What heats up inside a PDU that would require 2,000 CFM of cooling?" Parrino wondered. The only component possibility was transformers, which have a high temperature tolerance. So Parrino conducted an experiment. He ran a PDU with a solid Lexan cover at full load (i.e., 180 kW) using a load bank for one hour in an outside location on an 85 degree Fahrenheit day. Measurements of the transformer temperatures were taken with an infrared camera. The transformer temperatures increased 20 degrees from the nominal 115 degrees on the conditioned raised-floor space to about 135 degrees in a non-air-conditioned location. This was well within the manufacturer's stated 300-plus degrees Fahrenheit operating range. "We didn't even come close to the shutdown temperature," Parrino said.

The next step was to seal the top of PDUs with Lexan covers. Parrino hired a contractor to install covers on all the units. The covers have a three-inch opening to ensure that the transformers get airflow but also block 90% of undesirable bypass airflow. Following the installation of the Lexan covers, average transformer temperature increase was around 1 degree to 2 degrees Fahrenheit.

"After we installed the covers, we looked at the under-floor static pressure and we were amazed at what we got back," Parrino said. The data center had 62 PDUs that were wasting 124,000 CFM of cold air. With the covers installed, Parrino estimated that he could shut off six computer room air handlers [CRAH] based on measured airflow of 19,000 CFM per CRAH unit. In reality, he shut off 10.

The cost of covering PDUs was about $6,000, and United Parcel Service estimated that payback would take about 4.3 months. Instead the project actually paid for itself in only a month and a half.

Parrino said he plans to implement variable frequency drives on some of Windward's CRAH units, and his team is experimenting with variable air volume floor grates controlled by intake temperatures of the racks. "This will slow the consumption of CRAH fan energy even further by delivering the CFM that's needed for each rack instead of delivering based on the worst-case IT load," Parrino said.


Green UPS Tier IV Data Center Water-Side Economizers

By: Matt Stansberry

Free data center cooling in Georgia

The Windward data center has two 1,000-ton centrifugal chillers and two 800-ton absorption chillers. The data center also has a 650,000-gallon thermal storage tank with redundant water sources (a well and city water for backup). The thermal storage tank was designed to provide about 20 hours of emergency cooling, but Parrino's team also uses it for energy cost management.

In 2000, United Parcel Service installed a plate-and-frame heat exchanger to take advantage of outside air temperatures to cool its chilled water. Also known as water-side economizing, the practice saves energy by allowing data centers to turn off chillers, and green data center experts have given it a lot of attention lately.

Unfortunately, most people don't take advantage of free cooling, either because they aren't in a region that stays cold long enough for the system to pay for itself or because they lack the automation to manage going on and off the plate-and-frame heat exchanger. But UPS has solved both of those problems.

For starters, Parrino's staff raised the temperature of the chilled-water loops from the designed temperature of 45 degrees Fahrenheit. It now modulates between 52 degrees and 58 degrees Fahrenheit. The lower temperatures are needed during high-humidity days (i.e., 100% humidity when it rains) to maintain the interior relative humidity between the nominal 40% and 55%. During the winter months, when the outside air is drier, Windward can use the higher temperatures.

Further, getting on and off the plate-and-frame heat exchanger is easier with a thermal storage tank. As the chiller shuts down and a condenser water loop is lowered, the thermal storage tank provides uninterruptible cooling to the data processing equipment.

Windward is somewhat of a mixed-use facility – about 125 people (support personnel) work at the data center. The increase in chilled water temperatures has not affected human comfort in any way.

Higher chilled-water temperatures enable United Parcel Service to extend its use of free cooling dramatically. In 2007 the data center used the plate-and-frame heat exchanger for the last time on May 18 and switched free cooling back on for the first time on Oct. 11. During this seasonal transition period, nighttime temperatures get low enough but the days are warm. To extend time on the plate-and-frame heat exchanger even further during the seasonal transition periods, the thermal storage tank provides ride-through during the warm afternoons and is then recharged during cool evenings. The entire process is automated; no human intervention is required.


As of late November 2007, Windward is on the plate-and-frame heat exchanger about 90% of the time and will remain that way through the better part of April. That's five months of free cooling -- in Atlanta, no less. Data centers at northern latitudes should enjoy even longer free-cooling periods.

Based on operating 73 days per year, the plate-and-frame heat exchanger project was projected to pay for itself in two and a half years, saving 4 cents per kilowatt-hour. It saved $88,000 annually. The winter of 2000 was especially cold, and in the first year of operation, the project paid for itself.

According to Parrino, switching on and off a plate-and-frame heat exchanger would be a messy job without thermal storage tank and solid automation software. He says the Windward building automation system from Kennesaw, Ga.-based Automated Logic Corp. is a story in itself.

In 1995 the system was installed with the building, and the plan was to bring all building systems under a single interface. "It was a pretty advanced system in 1995, even more so today," Parrino said. "Manufacturers want to give you a PC for your UPS system, one for the generator or a chiller. Our system interfaces with all of these third-party devices."

The system provides chiller, pumps, and cooling-tower rotations and manages the thermal storage tank. It also gives Windward visibility into outside air temperature and humidity conditions to determine when a data center can use outside cooling.

How peak-load shedding gets done

Automated Logic's system also helps United Parcel Service shed its power load during high-demand peaks in the summer. United Parcel Service is on a real-time pricing plan with its utility, Georgia Power. The price can range from 4 cents per kilowatt-hour in the morning to 8 cents in the afternoon under moderate summertime temperatures. Costs for afternoon peak-load times in the month of August exceeded 30 cents per kilowatt-hour on days when the outside temperature surpassed 100 degrees Fahrenheit.

In order to minimize costs, Windward switches to "plant economy" mode during the summer peak-load periods. Plant-economy mode effectively shuts down the 630 kW chiller plant (including chillers, cooling tower fans, primary pumps, tower pumps) and cools the data center using the stored 45 degrees Fahrenheit thermal energy in its 650,000-gallon thermal storage tank. The facility then runs chillers at night when the cost per kilowatt-hour is around 4 cents—recharging the tank during off-peak hours.

Running the chillers at night is an effective energy reduction strategy as well, since outside wet-bulb temperatures are typically lower than they are during daytime hours. A lower wet-bulb temperature allows more efficient removal of heat via cooling towers. This reduces the condenser water temperature, also reducing "lift" in the chiller and enabling it to run more efficiently.

The thermal storage tank used to provide only six to eight hours of cooling capacity on a 95-degree day, with 40% thermal capacity remaining. Raising the chilled-water set point has increased the capacity to approximately 16 hours of cooling with 50% thermal capacity remaining. The tank is never completely discharged, so it can continue to provide thermal backup in case of a chiller problem. Finally, the approach removes stress on Georgia Power's peak power capacity infrastructure.

Eco-awareness at United Parcel Service

It's not clear that companies are in a green mood. In "IT priorities in 2008: A truly new year," SearchDataCenter.com reported on a broad survey of TechTarget Inc. members. Results indicated that in 2008, green computing remains a minor initiative. For the moment, many companies have simply fed the energy-consumption beast by building new data centers to provide additional raw power for an increasing number of servers.

United Parcel Service is a notable exception. According to Parrino, the company's founder, James Casey, embedded the principle of always being "constructively dissatisfied" and constantly seeking opportunities to improve efficiency. United Parcel Service has actively gone green in its data center and views its efforts as having a broader impact on the environment.

"When you look at electrical costs of $100,000 a month in our $47 billion company, it's not a lot of money for one building," Parrino said. "But going beyond that as a good corporate citizen, UPS has learned to manage the consumption of energy in all aspects of their business. The data center is no exception. Energy sources in the Southeast are plentiful, but not necessarily renewable. As a renewable generating source in Georgia, wind is not good, solar is marginal, and geothermal and hydro simply aren't available. Typical electrical generating sources are 50% coal, and 50% natural gas."

United Parcel Service also equates energy efficiency with increasing the useful service life of the data center. "All these data centers around the country are running out of power and cooling and are having to expand," Parrino said. "Becoming energy efficient is a great payback when you don't have to expand into additional infrastructure."


Data Center Hot-Aisle/Cold-Aisle Containment How-Tos

By: Mark Fontecchio

Though data center hot-aisle/cold-aisle containment is not yet the status quo, it has quickly become a design option every facility should consider.

Server and chip vendors packing more compute power into smaller envelopes have caused sharp rises in data center energy densities. Ten years ago, most data centers ran 500 watts to 1 kilowatt (kW) per rack or cabinet. Today, densities can reach 20 kW per rack and beyond, and most expect that number to keep climbing.

Data center hot-aisle/cold-aisle containment can better control where hot and cold air goes so that a data center's cooling system runs more efficiently. And the method has gained traction. According to a SearchDataCenter.com survey of data center managers last year, almost half had already implemented the technology or planned to do so within the year. But there are several considerations and questions that data center managers should ask themselves:

Is containment right for you? Should you do hot-aisle containment or cold-aisle containment? Should you do it yourself or buy vendor products? What about fire code issues? How do you measure whether containment actually worked as hoped?

Do you need hot/cold aisle containment?

First, a data center manager needs to decide whether hot-aisle/cold-aisle containment is a good fit for his facility. Dean Nelson, the senior director of global data center strategy at eBay Inc., said it's not a question for his company, which already uses the method.

But as Bill Tschudi, an engineer at Lawrence Berkeley National Laboratory who has done research on the topic, said, it's all about taking the right steps to get there.

"You can do it progressively," he said. "Make sure you're in a good hot-aisle/cold-aisle arrangement and that openings are blocked off. You don't want openings in racks and through the floors."

These hot- and cold-aisle best design practices are key precursors to containment, because when they're done incorrectly, containment will likely fail to work as expected.

Containment might not be worth it in lower-density data centers because there is less chance for the hot and cold air to mix in a traditional hot-aisle/cold-aisle design.

"I think the ROI in low-density environments probably won't be there," Nelson said. "The cost of implementing curtains or whatever would exceed how much you would save."


But that threshold is low. Data centers with densities as low as 2 kW per rack should consider hot-aisle/cold-aisle containment, Nelson said. He suggests calling the utility company, or other data center companies, who will perform free data center assessments. In some cases, the utility will then offer a rebate if a data center decides to implement containment. Utilities have handed out millions of dollars to data centers for implementing energy efficient designs.

Hot aisle containment or cold aisle containment?

Next up for data center managers is deciding whether to contain the hot or the cold aisle. On this score, opinions vary. For example, American Power Conversion Corp. (APC) sells a pre-packaged hot-aisle containment product, while Liebert Corp. sells cold-aisle containment.

Containing the hot aisle means you can turn the rest of your data center into the cold aisle, as long as there is containment everywhere. That is how data center colocation company Advanced Data Centers built its Sacramento, Calif., facility, which the U.S. Green Building Council has pre-certified for Leadership in Energy and Environmental Design (or LEED) Platinum status in energy efficiency.

"We're just pressuring the entire space with cool air where the cabinets are located, said Bob Seese, the president of Advanced Data Centers. "The room is considered the cold aisle."

One concern with this approach is that the contained hot aisle might get too hot for the IT equipment and too uncomfortable for people to work in. Nelson, however, said that as long as there's good airflow and the air is being swiftly exhausted from the space, overheating shouldn't be a problem.

Containing the cold aisle means you may more easily use containment in certain sections of a data center rather than implementing containment everywhere. But it also requires finding a way to channel the hot air back to the computer room air conditioners (CRACs) or contending with a data center that is hotter than normal.

Cold-aisle containment proponents cite the flexibility of their approach. Cold-aisle containment can be used in both raised-floor and overhead cooling environments. Cold-aisle advocates also say that containing the cold aisle means you can better control the flow and volume of cool air entering the front of the servers.

Then, of course, data centers could contain both the hot and cold aisles.

Do-it-yourself methods vs. prepackaged vendor products

There are many ways to accomplish data center containment. If a company wants, it can hire APC, Liebert, Wright Line LLC or another vendor to install a prepackaged product. This may bring peace of mind to a data center manager who wants accountability should containment fail to work as advertised.


"They're good if you want someone to come in and do the work," Nelson said. "You can hire them."

But these offerings come at a price. Homegrown methods of containment are often cheaper and, if done correctly, are just as effective as vendor-provided approaches. Nelson and Tschudi said they prefer do-it-yourself methods because of the lower cost.

If a data center staff does undertake data center containment strategies themselves, there are various options. Some data centers have installed thick plastic curtains, which can hang from the ceiling to the top of the racks or on the end of a row of racks, or both. In addition, a data center can build something like a roof over the cold aisles or simply extend the heights of the racks by installing sheet metal or some other product on top of the cabinets. All these structures prevent hot and cold air from mixing, making the cooling system more efficient.

Fire code issues with hot/cold aisle containment

Almost every fire marshal is different, so getting a marshal involved early in the process is important. A data center manager must know what the local fire code requires and design accordingly, because hot-aisle/cold-aisle containment can raise fire-code issues.

"The earlier you get them involved, the better," Tschudi said.

A fire marshal will want to ensure that the data center has sprinkler coverage throughout. So if a data center has plastic curtains isolating the aisles, they may need fusible links that melt at high temperatures so the curtains fall to the floor and the sprinklers reach everywhere. In designs with roofs over the aisles, this may require a sprinkler head under the roof.

"We made sure we could adapt to whatever the fire marshal required," Seese said.

Measuring hot/cold containment efficacy

It's also crucial to determine whether containment has worked; otherwise, there's no justification for the project.

Containment benefits can reverberate throughout a data center. If hot and cold air cannot mix, the air conditioners don't have to work as hard to get cool air to the front of servers. That can mean the ability to raise the temperature in the room and ramp down air handlers with variable speed drive fans. That in turn could make it worthwhile to install an air-side or water-side economizer. Because the data center can run warmer, an economizer can be used to get free cooling for longer periods of the year.

Experts suggest taking a baseline measurement of the data center's power usage effectiveness (PUE), which compares total facility power with the power used by the IT equipment.


Nelson said that one of eBay's data centers had a power usage effectiveness rating of more than 2, which is close to average. After installing containment in his data center, eBay got the number down to 1.78.

"It was an overall 20% reduction in cooling costs, and it paid for itself well within a year," he said. "It is really the lowest-hanging fruit that anyone with a data center should be looking at."


Cleaning Under the Raised-Floor Plenum: Data Center Maintenance Basics

By: Robert McFarlane

It's worse than cleaning out the garage. It's cramped, stuff is piled and tangled everywhere, and what's worse, a lot of it is alive! And like a bunch of snakes, you're afraid that if you touch something, it will bite -- only this "bite" might be an outage that could cost you your job.

But it has to be done. Just try blowing through a tangled mass of limp spaghetti and you'll get some idea of what your air conditioners are facing if you're one of the majority whose under-floor looks like a multi-colored pasta dish.

There's a lot of power wasted when fans try to move huge volumes of air through that maze.

But it's worse than that. The air you're paying so much to cool can't get to where it's needed, so much of your expensive cooling capacity is wasted. It makes no difference how many perforated or grate tiles you put in; if the air volumes and pressures can't get to those tiles, it's like throwing a glass of water at a screen door -- what's on the other side isn't going to get very wet. And your equipment isn't going to be cooled very well either.

We know it has to be done, but how to go about it? It can be a daunting task, and there are always so many more important things, aren't there? Actually, there really aren't, because that new high-performance hardware you're anxious to get running is going to fail if you can't cool it, so you should look at the cleanup as just another preparatory step to the new installations.

Like cleaning out the garage, you have to start somewhere. Clear out an easy area. It may be in a place that makes minimal improvement, but that's OK. Like a wise man said, "Every journey begins with a single step." Maybe you already know where abandoned cable is piled, and every cable that's easy to remove will likely make others easier to access, or at least identify.

But here comes the challenge: You really shouldn't remove more than two adjacent floor tiles at a time, and you should leave at least four in place before removing the next two. Taking out more can de-stabilize the floor and cause misalignment, which results in leaks and the waste of more of that precious air you're working so hard to preserve. While you're working, lots of air will pour through those open tiles, which can air-starve hardware in other parts of the floor. If you start to get serious overheating somewhere, you may need to limit your work in critical areas to short durations and replace the tiles until things cool down. Just remember, every step you take will make things a little bit better.


This is not a job for one person. Someone else will have to get the "black bean" and be designated to help with this, because wires will need to be wiggled to see where they go, and you will often need to cut them in order to untangle them and pull them out. But before cutting, mark the wires! Colored electrician's tape works well for this: blue for "dead"; red for "critical -- don't disconnect"; yellow for "caution -- need to check further"; and green for "OK to unplug and re-dress." (Look in the electrical department at Home Depot or Lowe's.) You may want to do more specific marking while you're at it, but that's up to you.

Then comes the challenging part. Murphy's Law virtually guarantees that the cables you most need to clean up will be mostly marked red. These require a plan, and will also mean some scheduled downtime, but that's a lot better than unplanned outages due to over-temperature failures. Work it out on paper ahead of time. And seriously consider installing an overhead cable tray ("basket-type" so the air goes through) for new cables. You can run and mark wire and glass ahead of time this way, then do an enormous amount of re-connecting during your planned downtime, which will probably be a lot shorter than it would have been.

If you must keep those cables under the floor, you need to be aware of what paths you're trying to keep clear. First is the area in front of any air conditioner. Your cabinets should be arranged in a hot-aisle/cold-aisle configuration (front-to-front and back-to-back), and your air conditioners should be at the ends of the rows, preferably aligned with the hot aisles. But no matter how things are arranged, cables should be run parallel to the air flow. Cables that need to cross the air stream should do so as far from the air conditioners as possible, and should be spread out as flat as is practical. If you use a cable tray (recommended), again, use "basket-type" and keep it relatively high under the floor if you have enough floor depth, and under the hot aisle tiles if the air flow is parallel to the cabinet rows.

While you're under that floor, make good use of a vacuum -- preferably one with a HEPA filter so fine dirt doesn't just blow back into the room, as can happen with a standard shop vac.

We haven't mentioned the old piping that many people still have left over from those ancient water-cooled mainframes. If you're still in one of these legacy computer rooms, you probably have a punch card full of other problems, like a raised floor that's 12 inches or less in height and air conditioners added wherever they would fit. This makes cleanout even more important, because there's not much room for air flow in the first place, and air paths are probably not what they should be either.

Good luck with this project. After this, cleaning out the garage may seem easy! But the potential for improved cooling and energy savings in this case is enormous.


Block Those Holes!

By: Robert McFarlane

Where does all that air go? One thing's for sure -- in most data centers much of it never makes it to the equipment it's supposed to cool. Lots of cold air leaks out of a multitude of openings in the floor tiles, doing virtually nothing. And a lot more disappears right in front of the cabinets after it gets out of the floor. Air conditioning is expensive, and that's a lot of wasted energy and a pile of wasted money, to say nothing of the shorter life you get from equipment that overheats.

It wasn't so critical a few years ago. Energy was cheaper and heat loads weren't as high. But with fuel costs going through the roof and heaters being shipped to data centers disguised as computers, we now have to make things a lot more efficient. The fundamentals are actually easier than you might think. In fact, basic remedies are downright simple, and pretty darn cheap compared with installing more refrigeration.

In most data centers, 25% or more of the cold air is probably being lost. There are two major places to look: your raised floor and your equipment cabinets. Let's start with the raised floor.

The biggest holes are usually the ones the cable comes through (although we've seen entire floor tiles removed, which is just complete foolishness). It used to be standard practice to just cut a 6- or 8-inch square hole, or even larger, no matter how many or how few wires needed to go through it. At one time, when mainframes used those huge "buss and tag" cables, large openings were needed to pass the oversized connectors. And since those holes were usually under equipment that was cooled from below anyway, it really didn't matter. Not so today. RJ-45's, and even the largest power plugs, will go through a much smaller hole. But an amazing amount of air will still leak through that opening, around the spaces that aren't filled with wires. Those holes have got to be sealed. There are two ways: Make some kind of seal yourself -- out of Masonite and duct tape or some such contrivance -- or use a commercial product made for the job that makes it easy to add or remove cables in the future. Two such products are the KoldLok Brush Grommet, and the Sub-Zero Pillow. Take your choice. The Pillow will seal most holes more completely, is less expensive, easier to install and adapts to a wide variety of opening sizes. The Brush Grommet comes in only a few sizes, stops most of the air but not all and can be a little pricey, but it's a lot neater, and no one can remove it and forget to put it back.

Next, look for all those places where pipes, conduits or anything else penetrates the floor. Unlike cables, which are subject to change, these things aren't going anywhere. Seal them with Fire Stop Putty or any good caulking that won't dry up and shrink. If they're too big, the fire stop manufacturers make products to go behind the putty (CableOrganizer.com, NelsonFireStop.com and a host of others). Just don't use fiberglass, mineral wool or any other product that can flake off and get into the air going to your equipment.


Now look all around the room where tiles have been cut to the walls or air conditioners or anything else. A good quality, closed-cell weather stripping will usually seal all these openings. Lastly, look for tiles that don't seat tightly. Some air will leak through the seams between the floor tiles. That's inevitable unless the installation has been made with special products and techniques that fully seal these joints, which is highly unlikely in a data center. But the amount of leakage in a normal, well-installed floor is tolerable IF you have sealed all the other holes. If the floor is older, it may be necessary to have a raised floor contractor come in to re-level the tiles and get them as well aligned and seated as possible. After equipment is in place, however, there can only be a certain amount of improvement. Tiles trapped under equipment racks can't be moved or re-aligned, so they will determine how well adjacent tiles can be aligned. But every little bit helps.

Now let's get to the easiest, most overlooked and usually most effective way to improve cooling in the whole data center: unused panel spaces in cabinets. We must assume that your layout conforms with the accepted "hot aisle/cold aisle" approach, with cabinets oriented "front-to-front" and "back-to-back." If not, there aren't many things you can do to help except to re-orient your cabinets and change your whole layout, which is obviously not easy. But if your installation is "hot/cold aisle," you just MUST close those unused panel spaces.

If you don't, the air you manage to push through your perforated tiles gets up to the first unused panel space and just flows right through the cabinet to the back. It's called "bypass air," and it does two really bad things. First, it starves all the equipment above the opening of cold air. There's always a temperature gradient from bottom to top that makes the upper equipment run hotter than that closer to the base of the cabinet, but if most of the cold air has escaped through the cabinet before it even gets to the top, that upper hardware is going to run much hotter and will have a much shorter life.

Second, the cold air bypassing through the cabinet mixes with the hot air that must return to the air conditioners, cooling it down. That's the air that tells the CRACs how much new cold air to put out. If the return air is already cooled down somewhat, it fools the air conditioners into thinking everything is fine, so they stop working so hard. The result? Less cooling to the hardware, higher temperatures, shorter life and some strange cycling of the air conditioners that can also upset the humidity control.

And there's another factor. (Who said this was easy?) Not only can cold air bypass from front to back, but hot air can bypass from back to front. Since warm air rises naturally, this just worsens a bad situation by delivering even warmer air to the upper computers. In short, you're engaging in "computer euthanasia" simply by leaving these openings. Is it any wonder that the servers toward the tops of the cabinets statistically have a higher failure and error rate than those at the bottom? Load cabinets from bottom to top, and then close all the remaining spaces with blank panels. If you make a lot of changes, or you can't get people to pick up a screwdriver to replace the panels, several manufacturers now make "snap-in" panels. IBM and SMC make them, too, if you can ever locate them on their Web sites. There are probably others, and we know of several cabinet manufacturers who are planning to come out with them. Snap-ins are a little more expensive, but there's simply no excuse for not putting them back in when a change is made.


Sizing Computer Room Air Conditioners for Data Center Energy Efficiency

By: Bob McFarlane

Sizing a data center air conditioner is not like choosing a refrigerator. Bigger is not necessarily better! Correct sizing is even more critical to effective operation and energy efficiency than right-sizing the uninterruptible power supply (UPS). But with so many factors that determine capacity, it can be a bit tricky.

When someone plays with the thermostat at home (not you of course!), the temperature is never right. It gets too hot, then too cold. It's worse with computer room air conditioners (CRACs). The unit that's the wrong size can mess up cooling. Wrong settings or improper location will make it even worse.

An under-sized unit can't cool effectively -- that's obvious. But an over-sized unit won't either. Thankfully, many CRACs will adjust to a range of loads, but many won't. They all need to be sized realistically, because over-sizing results in the cooling going on and off too often. It's called "short cycling," which is hard on the machine and does a lousy job of maintaining room temperature and humidity. Yes, temperature swings do hurt computing hardware!

Computer room air conditioners with refrigeration compressors -- the true CRACs -- are available in "multi-step" designs. A 20-ton, four-step unit may activate 5 tons of cooling before enabling the next step, as heat increases to 10, 15 and 20 tons of load. Chilled-water units (more properly called computer room air handlers, or CRAHs) have internal valves that adjust water flow to match the load. They usually work effectively down to about 20% of capacity. But what is capacity, and what the heck is a "ton" of air conditioning?

It's actually pretty simple, but comes from old practices (as do most crazy American measurements). Early air conditioning simply blew air across blocks of ice into the room. Melting 2,000 pounds of ice in 24 hours was defined as a ton of cooling. It happens to take 12,000 British Thermal Units (BTU) per hour (another nutty unit) to do that, so 1 ton of air conditioning = 12,000 BTU per hour.

Today we are starting to rate cooling in kilowatts (kW). A ton of air conditioning can cool about 3.5 kW of heat, so a 20-ton CRAC should cool around 70 kW. If we know our data center power loads, we can choose a unit with the right capacity: no more than 20% over-sized for a fixed-capacity unit, and, if we need growth capacity, maybe as much as 50% larger for a chilled-water or multi-step. But hold on -- there are too many different "capacities" on the data sheets. Which one do we use? We still need to know a couple of tidbits.
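Before digging into those data-sheet capacities, here is a quick sanity check on the ton-to-kilowatt arithmetic and the sizing rule of thumb above, as a minimal Python sketch. The 60 kW IT load and the candidate unit sizes are invented for illustration; the 12,000 BTU/hr-per-ton and 3,412 BTU/hr-per-kW constants are the standard definitions cited in the text.

BTU_PER_HR_PER_TON = 12_000   # one ton of cooling, by definition
BTU_PER_HR_PER_KW = 3_412     # 1 kW of heat = 3,412 BTU per hour

def tons_to_kw(tons):
    return tons * BTU_PER_HR_PER_TON / BTU_PER_HR_PER_KW

print(tons_to_kw(1))    # ~3.5 kW per ton
print(tons_to_kw(20))   # ~70 kW for a 20-ton CRAC

# Hypothetical sizing check against a 60 kW IT heat load.
it_load_kw = 60.0
for unit_tons in (15, 20, 25, 30):
    capacity_kw = tons_to_kw(unit_tons)
    oversize = capacity_kw / it_load_kw - 1.0
    print(f"{unit_tons}-ton unit: {capacity_kw:.0f} kW sensible capacity, {oversize:+.0%} vs. load")
# Per the guidance above: stay within ~20% oversize for a fixed-capacity unit,
# or up to ~50% for a multi-step or chilled-water unit if growth capacity is needed.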

Air conditioners have to deal with two kinds of heat. Sensible heat -- the kind we can feel -- is what our computers give off. Latent heat is what evaporates moisture. Simplistically, dealing with moisture or humidity requires more latent capacity from our air conditioners, which steals from sensible capacity. There's not much reason to keep a data center above 45% relative humidity (RH), but if you over-cool you'll pull moisture out of the air (latent cooling) and have to use more energy to re-humidify. The problem is that relative humidity is "relative" to temperature. Warmer air has a lower relative humidity for the same moisture content because it can hold more vapor than cool air. Temperatures in a data center vary widely, so RH depends on where it's measured, which is why we're trying to get away from using it. However, RH is still the most common way to determine humidity.
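To make the "relative" part concrete, here is a small Python sketch using the Magnus approximation for saturation vapor pressure. The formula and its constants are a standard meteorological approximation, not something specified in this guide, and the temperatures are chosen only to illustrate the point.

import math

def saturation_vapor_pressure_hpa(temp_c):
    # Magnus approximation for saturation vapor pressure over water, in hPa.
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def f_to_c(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

# Air at 72 F and 50% RH holds a fixed amount of moisture...
vapor_pressure = 0.50 * saturation_vapor_pressure_hpa(f_to_c(72))

# ...but the same moisture measured at 80 F reads as a lower relative humidity.
rh_at_80f = vapor_pressure / saturation_vapor_pressure_hpa(f_to_c(80))
print(f"RH at 80 F: {rh_at_80f:.0%}")   # roughly 38%, down from 50% at 72 F

The moisture content hasn't changed, only the temperature at which it was measured, which is exactly why a single RH reading can be misleading in a room with wide temperature variation.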

Most air conditioners are controlled by return air temperature. Believe it or not, hotter return air enables the CRAC to provide more actual cooling capacity. So if you dial down the temperature in a heavily loaded room, you'll get less heat removal, and the place may actually get warmer -- and you'll waste a lot more power doing it! The following chart shows how humidity level and return air temperature can affect performance from the same nominal 22-ton chilled-water air conditioner. Note that at high temperatures, RH must be lower to keep moisture content below maximums.

Return air temp       72 F    72 F    75 F    75 F    80 F    85 F     90 F     95 F
Relative humidity     50%     45%     50%     45%     50%     32.3%    27.7%    23.6%
Sensible kW cooling   60.0    61.1    69.6    70.8    86.4    101.7    118.8    135.6

If you use ducts or the ceiling plenum to channel warm air back to the CRACs, and keep the return air at 80 degrees Fahrenheit or higher by preventing it from mixing with cold, you can get more actual cooling capacity from the same machine. And keeping humidity lower makes it even better at any return air temperature.

But there's more. Air conditioning takes both cooling capacity and air flow. Opening the refrigerator door won't cool the room. Air movement has to carry the heat away from the equipment and back to the CRAC, just like a nice breeze in summer. So more air should be better, right? Not necessarily. If the floor is low, too much air from too big a CRAC means higher velocity, and that means lower pressure. (That nasty physics comes into play again.) Air can actually be sucked down through perforated tiles as far as 8 to 10 feet from the CRAC, which wastes air and energy and also reduces cooling. Too much air can also create under-floor turbulence, like small tornados. That makes under-floor pressure uneven, which further reduces cooling effectiveness. And it's worse if CRACs are placed at right angles to each other. High pressures also make the CRAC fans work harder, wasting more energy.

Thankfully, today we can use variable frequency drives (VFDs) to automatically adjust fan speeds for appropriate air flow, controlled by sensors in the room. These can be retro-fit to most existing CRACs, and can save a lot of energy. (A professional computational fluid dynamics, or CFD, analysis is a good idea before buying any expensive air conditioner.)
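The energy case for variable frequency drives comes from the fan affinity laws, under which fan power varies roughly with the cube of fan speed. The short Python sketch below uses that standard approximation; it is not a measurement from any particular CRAC, and real savings depend on the unit and its controls.

def fan_power_fraction(speed_fraction):
    # Fan affinity law: power scales approximately with the cube of speed.
    return speed_fraction ** 3

for speed in (1.00, 0.90, 0.80, 0.70):
    power = fan_power_fraction(speed)
    print(f"{speed:.0%} fan speed -> ~{power:.0%} fan power ({1 - power:.0%} savings)")
# Slowing a CRAC fan to 80% of full speed cuts fan energy roughly in half.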


So Step 1 is to know your real loads, as covered in a previous article. Step 2 is to see if you can get higher-temperature return air back to the CRACs. Step 3 is to decide on the cold air temperature. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9 has recently widened the recommended temperature envelope, so there's no need to over-cool the equipment. Step 4 is to set your humidity standard. ASHRAE TC 9.9 now recommends dew point monitoring and control, but existing CRACs may not be able to do that, so you'll still need to control relative humidity. Then, if possible, pick an air conditioner that can adjust to load and choose a sensible capacity that will operate Day 1 in its midrange. That will give the best stability and control.

Let's look at three other important issues before we finish: reheat, humidification, and water temperature. If you have more than three or four CRACs, it should not be necessary to put humidifiers on every unit. Moisture diffuses and stabilizes in the room pretty quickly (another reason for dew point sensing). Putting humidifiers on every air conditioner can be counterproductive if one unit humidifies while another de-humidifies. That's wasted energy for no better result.

Reheat was the norm for years, and it's the biggest energy waster of all. The CRAC over-cools the air and a heater warms it back up to discharge temperature. In many situations it's possible to design without reheat, or to use minimal reheat. But it takes a knowledgeable engineer to make that determination and to provide a proper design.

If you're using chilled-water computer room air handlers, you'll need to have a knowledgeable engineer involved. Published capacity ratings are based on specific entering water temperature and water temperature rise. Chiller plants today may be designed on higher numbers to improve energy efficiency, but that reduces the effective cooling capacity of the CRAHs.

Finally, don't overlook opportunities to use "source-of-heat" cooling. That's beyond the scope of this article, but the more cooling you can get near your highest heat loads, the better your cooling will be, with less energy and less capacity needed from those big CRACs. That's another big opportunity.

Keep cool!


Can CFD Modeling Save Your Data Center?

By: Mark Fontecchio

Carl Pappalardo, the IT systems engineer for Northeast Utilities, had good reason to commission a computational fluid dynamics (CFD) analysis of his company's data center: His data center's raised floor had begun to look like Swiss cheese. Shortly after building a new 15,000-square-foot data center two years ago, Pappalardo noticed that there were "a lot of holes in the floor for cable cutouts and power."

"As I walked around the room, it felt too cold, and my thought was that I wasn't using my cooling efficiently," he said. "I was thinking that all these holes couldn't be good."

He suspected that valuable cold air from CRAC units was escaping from the cable cutouts instead of coming up through only designated perforated floor tiles. So Pappalardo opted for CFD analysis to analyze the flow of fluids, which includes both liquids and gases and indicates whether cold air from air conditioners flows where it should. Performed by the data center consultancy SubZero, the CFD analysis identified the data center's hot and cold spots and ways to improve data center cooling.

Diagnosing data center airflow woes

Computational fluid dynamics has been around since the early 20th century -- often to analyze airflow around aircraft and space shuttles for aerodynamics -- but only over the past few years has CFD emerged as a data center issue. As the cooling infrastructure of data centers has increased in complexity, some large end users and many data center consultants have turned to CFD to understand server room airflow.

Last year, in a report to Congress on data center power consumption, the federal Environmental Protection Agency recommended CFD modeling as a way to "optimize data center airflow configuration." And in a survey of end users earlier this month, the Uptime Institute reported that 47% used CFD to improve site infrastructure energy consumption.

As a result of Northeast Utilities' CFD analysis, Pappalardo realized that his data center needed some changes. In addition to coming up through the perforated tiles, the cold air was also escaping through the cable cutouts. That dropped air pressure where Northeast Utilities needed it most: in the cold aisle, where it could cool IT equipment. By filling in those cable cutouts, Northeast was able to increase the air pressure by 33% where it was needed. Pappalardo said the move eliminated hot spots and reduced cooling costs, although the company hasn't yet done a full study of how much money it saved.

"Air mixing is the enemy," said Pete Sacco, a data center consultant and founder of PTS Data Center Solutions Inc., who uses CFD software with every room he helps design. "You need to do cooling to and from the load as efficiently and quickly as possible with as little

Page 56: Data Center Energy Efficiency Guidedocs.media.bitpipe.com/io_25x/io...EnergyEfficiency... · Data center managers have battled a growing power bill for the past several years, but

56

mixing as possible. Eking every bit of cooling out of my investment is the most important thing I can do as a data center operator."

To boot, by doing a CFD analysis, Sacco has found construction errors such as poor sealing of data center walls, which was causing cold air to leak out.

And now some use CFD software in other areas of IT as well. Ernesto Ferrer, a CFD engineer and data center consultant at Hewlett-Packard Co., said he uses the software to help customers design data centers, but HP also uses CFD to model the airflow within the equipment it sells. Just as in a data center, air within a server or other piece of IT equipment should have good flow through the system. That ensures that the air cools off the electronics and leaves the box as quickly as possible.

In addition, some data center experts now use CFD technology to analyze airflow outside data centers. Why? It can help determine whether and where air-side economizers should be installed. Air-side economizers bring outside air into data centers to cool the IT equipment, and can lead to cost savings by reducing the amount of mechanical refrigeration needed.

CFD software: Price and quality

But Sacco warned that not all CFD software is created equal. He uses software from Future Facilities, a London-based data center design company. But it comes at a steep price: about $100,000 per licensed seat. That's about three times as much as TileFlow, the software from Plymouth, Minn.-based Innovative Research Inc. that SubZero uses in its analyses. But Sacco said the more expensive software is worth it for him.

"For the price, for the environment it's set up for, [TileFlow] does a marginally good job," Sacco said. "But they segregate calculations between below and above the raised floor, and it doesn't always come up with accurate readings."

Sacco also considered Flovent, which is CFD software from U.K.-based Flomerics Group PLC. Flovent is good, he conceded, but is designed for a large range of applications, whereas the Future Facilities software is built specifically for data centers.

Another well-known data center CFD software product is CoolSim from Concord, N.H.-based Applied Math Modeling Inc. In the end, users need to decide which software works for them based on details, usability and price.

Renting vs. buying CFD tools

Many data center pros are unwilling to shell out the money for the software as well as for the training to learn how to use CFD tools. Most will just hire a consulting firm to do the work.

But some end users license the software themselves or have plans to. Many are large organizations – banks and financial firms, for example – that can afford the software and the staff to learn it. Amsterdam-based ABN AMRO bank is one such user.


Allan Warn, a data center manager at ABN AMRO, said that over the past eight years the 15,000-square-foot London data center he oversees grew in power from 300 kilowatts to 1.3 megawatts, all in the same footprint. Last year the company had a CFD analysis done and discovered that modeling supported what it already knew: where the hot spots were and that the company had done a decent job of putting IT equipment in the right place so it wouldn't overheat.

For future modeling, ABN AMRO will buy Six Sigma software. They'll drop a bunch of hypothetical servers into a hypothetical CFD data center and gauge the outcome, then play around until it's clear that a new deployment of servers won't burn up the data center.

"We want to use the tool to tell us exactly where to put the equipment without overloading it," he said. "We want to be able to put it in and be able to cool it, not cook it."


When Best Practices Aren't: CFD Analysis Forces Data Center Cooling Redesign

By: Mark Fontecchio

Data center best practices are supposed to be exactly that: best practices. But for Lab 7D, a 7,000-square-foot data center that networking giant Cisco Systems Inc. runs in San Jose, Calif., for testing and quality assurance, best practices were anything but.

Lab 7D is a busy place. Engineers perpetually load and unload equipment in and out of the room to test features of Cisco's MDS storage area networking (SAN) switches. Some 100 engineers work among the approximately 500 IT equipment racks, and each one is responsible for a particular feature of the switch, such as the ability to write to two hard disks simultaneously.

The amount of equipment turnover and the number of bodies in the room combine to make Lab 7D an atypical data center. But like many data centers, it is also running out of power. According to Chris Noland, who oversees the facility, the lab was the No. 2 consumer of electricity on the San Jose campus, generating $150,000 a month in power costs, or $1.8 million a year.

"When we found out how much we were using, we told the general manager of the group and he said, shut off power wherever you can," Noland said. "So it was more of a monetary thing."

As a first step, they shut off redundant power supplies, which were deemed unnecessary in a testing environment. For the same reason, the data center has no uninterruptible power supplies (UPSes). Those steps saved the data center 10% in energy costs, but Noland still sought additional savings.

Exploring hot-aisle/cold-aisle containment

Cisco's data center was already set up in a hot-aisle/cold-aisle configuration, complete with perforated tiles in the cold aisle and, in Cisco's case, ceiling vents in the hot aisle. Looking to improve on this setup, Noland talked to Pacific Gas & Electric, the main utility company in San Jose, and the Lawrence Berkeley National Laboratory about cold-aisle containment.

Hot- and cold-aisle containment has gathered steam as a way to isolate the hot- and cold-air streams in a data center, which in theory makes cooling the IT equipment more efficient. But there is some debate about whether hot/cold-aisle containment is a best practice.

In Lab 7D, there are seven 30-ton computer room air conditioners (CRACs) supplying cold air to the equipment. Noland walked around the lab and noticed that some of the CRACs operated at 100%, while others operated at just half that. He figured that if the room were designed correctly, for every two CRACs operating at 50%, he should be able to shut one off. Noland wanted to make sure that air got where it needed to go and figured that cold-aisle containment could help his cause.

Noland also considered implementing hot-aisle containment and installing blanking panels in the IT equipment racks. In addition, the lab was running a homegrown program that shut off unused IT equipment at night.

Data center simulation time

Before implementing hot-aisle/cold-aisle containment, Noland decided to run some simulations. He called in Future Facilities, a software company that runs computational fluid dynamics (CFD) airflow simulations in data centers.

Noland was unhappy with the results.

"To be honest, I was a little upset with Future Facilities," Noland says, only half-joking. "I just wanted [them] to confirm that we were right."

Future Facilities' CFD analysis found that the lab's CRAC units didn't supply enough air for the equipment. As a result, a good deal of the IT equipment took in air from other IT equipment's exhaust, creating a lot of air mixing. These conditions meant the CRACs had to pump out much colder air than was necessary, which wasted energy.

So by itself, cold-aisle containment wasn't an option. By isolating that air stream, some equipment in the contained aisle -- which would normally take in air from the exhaust of IT equipment in other rows -- would be short of cool air and overheat.

"The IT equipment required about four times more cubic feet per minute (CFM) than was available," said Sherman Ikemoto, the North American general manager of Future Facilities. "Chris Noland was unaware of this situation."

For similar reasons, the Future Facilities software found that hot-aisle containment also wouldn't work. And besides, Ikemoto said that if the data center deployed hot-aisle containment in its existing lab, it would have to reconfigure the sprinkler system in accordance with the fire code. That could cost up to $150,000.

Not only was hot/cold-aisle containment a bad idea, but the CFD analysis showed that even the hot/cold-aisle configuration and blanking panels were falling short. While servers and most other IT equipment have front-to-back airflow, Cisco equipment takes in air from just about everywhere: the front, the back, the sides, and even the top and bottom.

"They are the ultimate recyclers," Noland said. "They will use air from everywhere to cool the equipment."


The limits of hot-aisle/cold-aisle containment

The only best practice that is bound to work in the Cisco lab is shutting off equipment at night. Other techniques can be used in a limited capacity.

Noland has begun setting up a new lab that he will configure as follows: Any IT equipment that has front-to-back airflow will have its own dedicated area within the data center. That portion of the lab will use blanking panels, a hot-aisle/cold-aisle configuration, and hot-aisle/cold-aisle containment.

Equipment with side-draft and other airflow-intake designs, on the other hand, will sit near the center aisles of the labs and run without any of these so-called best practices.

"We're still looking at developing best practices for side-draft," Noland said. "We're looking at some venting options. There are also some rack options which essentially turn side-draft into front-to-back flow, but the only thing is that takes up space."

Noland may have been partly joking when he said he wasn't happy with the CFD results. But in the end, the simulations helped to "show us the light and turn around a couple schemes. It's unfortunate, but it's the truth."


Energy-efficient Backup Power and Power Distribution

Which Data Center Power Distribution Voltage Should You Use?

By: Julius Neudorfer

Designing a data center's power system consists of numerous decisions about the components in the power path. In most of the world, there are two primary three-phase voltage schemes available: the North American 480/208/120 V system (600/208/120 V in Canada) and the 400/230 V system used in Europe and some parts of Asia. In all systems, much higher voltages are used to deliver power from the utility to the site, but those are not part of this discussion. Also note that we are generically referring to the 400/230 V system (this is the midpoint voltage that represents 380/220 V through 415/240 V). While some data centers are exploring the use of direct current (DC) to improve the overall efficiency of the entire computing ecosystem, alternating current (AC) power is still the predominant form of power in the data center. (See the DC power article later in this guide for more on the AC/DC debate.)

Rack-level power density and distribution

Rising data center power density is one of the big factors driving the re-examination of which voltage to deliver to IT equipment and which voltage to use in the distribution system.

Here in North America, the common 120 V service worked fine when a rack drew 1-2 kW and a single 20 A circuit was all that was needed (two for A-B redundancy). With the advent of blade servers, which typically require 208 V or 230 V circuits and draw 5 kW or more, as well as racks full of 1U servers, the new baseline is now 5 kW per rack.

Ten to 20 kW is not uncommon anymore, and even 30 kW or more is not unforeseeable (we can provide power at these levels, but cooling is a much greater challenge). Moreover, almost all IT power supplies are now autosensing and universal voltage-capable (100-250 V) to allow the same product to operate worldwide. In fact, they are also more efficient at 208 V or 230 V than at 120 V (or even lower at 100 V in Japan).

We can increase the power delivered to each rack by increasing the voltage (or the amperage) and also by running three-phase power to the racks. The diameter of the cable determines its "ampacity," or the number of amperes it can safely carry (and its cost). The voltage then determines how much power can be delivered over that same conductor size (for example, 12-gauge wire, typically used for 20 A feeds over distances of up to 50 feet).

Note that under North American Electrical codes, the branch-level circuit breakers are 80% rated, so that only 16 A can be delivered to the load.


Branch circuits: Single-phase power distribution

Amps   Voltage   kVA   Three conductors
16     120       1.9   L1 + N + G
16     208       3.3   L1 + L2 + G (across any two of three phases)
16     230       3.7   L1 + N + G

Branch circuits: Three-phase power distribution

Amps   Voltage   kVA   Five conductors
16     120       5.7   L1 + L2 + L3 + N + G (120 V any phase to neutral)
16     208       5.7   L1 + L2 + L3 + N + G (208 V across any two of three phases)
16     400/230   11    L1 + L2 + L3 + N + G (230 V any phase to neutral)

Note that by making three-phase power available in the rack, you triple the available power, yet increase your cable conductor count (and its cost) by only 66%.
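The capacities in the two tables above, and the "triple the power for 66% more conductors" point, fall out of the basic single- and three-phase power formulas. Here is a short Python sketch that reproduces them; it assumes the 16 A of usable current from an 80%-rated 20 A branch breaker discussed earlier, and it works in apparent power (kVA), so power factor does not enter.

import math

def single_phase_kva(volts, amps):
    # Single-phase circuit, line-to-neutral or line-to-line.
    return volts * amps / 1000.0

def three_phase_kva_line_to_line(volts_ll, amps):
    # Balanced three-phase load connected line-to-line (e.g., 208 V).
    return math.sqrt(3) * volts_ll * amps / 1000.0

def three_phase_kva_line_to_neutral(volts_ln, amps):
    # Three phases each loaded line-to-neutral (e.g., the 230 V legs of a 400/230 V feed).
    return 3 * volts_ln * amps / 1000.0

amps = 16   # usable current on a 20 A branch breaker rated at 80%
print(single_phase_kva(120, amps))                  # ~1.9 kVA
print(single_phase_kva(208, amps))                  # ~3.3 kVA
print(three_phase_kva_line_to_line(208, amps))      # ~5.7 kVA
print(three_phase_kva_line_to_neutral(230, amps))   # ~11 kVA, triple a single 230 V circuit
# A hypothetical 10 kW rack would need six 120 V/16 A circuits (twelve with A-B redundancy),
# but fits on a single three-phase 400/230 V feed of this size.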

Moreover, by deploying three-phase 208/120 V power to the racks, you can supply either 208 V single-phase or 208 V three-phase power and also provide 120 V for older or specialized IT gear (that may only work on 120 V).

In fact, by running three-phase power, some rack-level PDUs (aka the rack power strip) can provide 208 V and 120 V simultaneously from the same strip.

Instead of hardwiring, also consider using three-phase connectors such as NEMA "Twist-Lock" L21-20 or L21-30 plugs (20 A/5.7 kVA or 30 A/8.6 kVA, respectively), the larger IEC 309 "pin and sleeve" connectors rated 40-60 A, or Russellstoll connectors for higher power. This will permit you to change power strips as your equipment changes, without rewiring. While this is somewhat more expensive up front, it can save a lot of money in the long run by providing an easy and lower-cost solution to moves, adds and changes during equipment upgrades.

While on the subject of rack-level PDUs, please consider using units that allow remote monitoring to prevent overloads and also allow for energy management and capacity planning.

In the 400/230 V system, all output circuits are 230 V single-phase (from any phase to neutral and ground).

Floor- or row-level power distribution

Depending on the size of the data center and the amount of power required (and the power density), power can be distributed at 480 V or 208/120 V in North America. Assuming that you have a larger installation, 480 V is the most common and preferred choice for the UPS and all major power distribution until it gets to the data center floor.

Once the 480 V power has been delivered to the floor or row PDU, it needs to be transformed down to 208/120 V for use by the computer equipment. The type of the transformer will impact its efficiency and the overall efficiency of the data center.

Transformer types and the K-Factor

No discussion of power distribution would be complete without a mention of transformer types and the "K-Factor," a rating of a transformer's ability to handle the harmonic currents drawn by nonlinear loads such as IT power supplies.

In a 400/230 V distribution system, no transformer is required, only circuit breakers to protect the branch circuits. European sites sometimes specify a transformer in the PDU to provide additional isolation and also to mitigate the effects of phase imbalances on the upstream UPS, especially if it is a transformerless UPS.

The North American 208/120 V distribution system also does not require a transformer, only circuit breakers to protect the branch circuits. However, data center designers will sometimes specify a transformer in the PDU for the same reasons: additional isolation and mitigating the effects of phase imbalances on the upstream UPS, especially if it is a transformerless UPS.

As noted in the discussion of the rack-level power, the higher the voltage, the lower the current required to deliver the same power to the load. The lower amperage will also lower the size and cost of the electrical switchgear, UPS, distribution panels and copper cabling used throughout the entire system. This can amount to a significant overall savings.

Current required to deliver 300 kVA to a PDU at different voltages

Amps   Voltage    kVA
833    208/120    300
433    400/230    300
360    480/277    300

Conversely, here is a chart showing the effect of a fixed current capacity (such as one based on existing wire size) at different voltages.

Power delivery to a PDU at different voltages*

Amps   Voltage    kVA
400    208/120    144
400    400/230    277
400    480/277    332

* Example uses 400 A-rated circuits and feeder cabling.

This is useful if you are considering a voltage upgrade by retrofitting the distribution system and want to save money and construction time by re-using the existing cables and conduits to the PDUs. For example, by converting existing feeders from 208 V to 480 V, you can deliver more than twice the power over the same cable -- just make sure the panels and switchgear are rated for the higher voltage.
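Both feeder-level charts follow from the same three-phase relationship, I = kVA x 1000 / (sqrt(3) x V). The Python sketch below reproduces them; the 300 kVA PDU and the 400 A feeder are simply the example values used in the charts, and small rounding differences from the charts are expected.

import math

def feeder_amps(kva, volts_line_to_line):
    # Current drawn by a balanced three-phase load of a given apparent power.
    return kva * 1000.0 / (math.sqrt(3) * volts_line_to_line)

def feeder_kva(amps, volts_line_to_line):
    # Apparent power a given feeder current can deliver at a given voltage.
    return math.sqrt(3) * volts_line_to_line * amps / 1000.0

# Current required to deliver 300 kVA at each distribution voltage:
for volts in (208, 400, 480):
    print(f"{volts} V: {feeder_amps(300, volts):.0f} A")    # ~833, ~433, ~361 A

# Power available on an existing 400 A feeder at each voltage:
for volts in (208, 400, 480):
    print(f"{volts} V: {feeder_kva(400, volts):.0f} kVA")   # ~144, ~277, ~333 kVA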

Safety hazards and voltage

In North America, we commonly deliver 208/120 V to end-user equipment using standard plugs and receptacles, and it is common practice for electricians to add circuits to live 208/120 V distribution panels. A 480 V service has a much higher potential for an electrical arc and is therefore not considered safe for plug-in equipment. At 480 V, the danger of arc flash is substantially greater: electricians require additional safety gear to work on 480 V circuits, and the possibility of a service interruption is higher because an arc can occur during electrical work in the panel.

Will European voltages work in U.S. data centers?

In Europe, only single-phase 230 V is distributed to plug-in devices via standard IEC C13- and C19-type receptacles and plugs, at up to 16 A. However, three-phase 400 V power is also commonly available via the larger IEC 309 receptacles at up to 60 A. Also in Europe, 400 V work in the panel is commonly done (with appropriate safety gear), since that is the basis of all their power distribution systems.

Historically, which side of the ocean you occupy has dictated which voltage has been used in a given country and in its data centers. However, the inside of the data center has become its own microcosm, sometimes independent of its location. It is clear that European data centers will continue to use the 400/230 V system, since it is already native to their overall power system.

One of the major advantages of the 400 V European system is that there is no voltage conversion; therefore there are no additional transformers required for voltage conversion (other than the main utility transformer). This makes the entire power system potentially smaller and more efficient overall.

In North America, several vendors now offer 400/230 V products as a higher-efficiency alternative to traditional 208/120 V distribution systems. They typically use an "autotransformer" (which is smaller, lighter and more energy efficient than a traditional stepdown/isolation transformer) in the floor- or row-level PDU. This allows these PDUs to work with a standard 480 V UPS and 480 V distribution system that is carried to the PDU, and the PDU then outputs 230 V (single-phase) to the IT racks. It offers greater efficiency and a smaller footprint on the data center floor. Some vendors have created 400/230 V "touchless" modular PDU systems that shield the main buss, allowing circuit packs (breakers and cabling) to easily and safely be added and removed.

In a more advanced 400 V power scenario, the UPS would be fed at 400 V and there would be no transformer in the PDU. The main input power to the data center and all the power equipment (switchgear, generators, etc.) would be 400 V; it would be just like a European data center. The North American high utility voltage would be transformed only once, down to 400 V (instead of 480 V). Afterward, there would be no need for a transformer to step down the voltage, theoretically avoiding all transformer losses and minimizing copper cabling losses while allowing the IT power supplies to operate at 230 V, at which they are the most efficient.

As for 400 V power taking hold in North American data centers, it may take many years, or it may never become a mainstream system. A massive mindset change is required from everyone involved in designing, building and operating the data center. Moreover, the existing equipment base has inertia, and few people or equipment makers may want to make such a major change in the hope of gaining a potential 2-5% increase in energy efficiency. Then again, stay tuned -- if energy prices continue to rise, they may cause everyone involved in the data center to examine every option.


DC Power in the Data Center: A Viable Option?

Alternating current (AC) power is ubiquitous in data centers, and it's hard to change the status quo. But a direct current (DC) power demonstration project conducted by the Lawrence Berkeley National Laboratory produced some interesting results: a 7% energy savings over top-notch AC technologies.

SearchDataCenter.com recently reported on an experiment in using DC power in a data center at Syracuse University, furthering the practical research into this data center power option. While total DC power infrastructure for data centers isn't quite ready, these investigations are putting the concept on the top of mind for data center professionals concerned about power consumption.

In this face-off, two power professionals from vendor companies debate the merits of DC and AC power for the data center. Rudy Kraus is CEO of Validus DC Systems, the company that is working with Syracuse University on its DC in the data center experiment. Neil Rasmussen is Senior Vice President of Innovation at APC, one of the leading providers of AC power equipment.

Rudy Kraus: DC the way to go

Neil Rasmussen: AC here to stay

The advantages of DC power in the data center

By Rudy Kraus, CEO, Validus DC Systems LLC

Direct current is the native power resident in all power electronics. Every CPU, memory chip, disk drive, etc., consumes direct current power. Alternating current was chosen as a power path based on criteria set 100 years ago, 50 years before power electronics existed.

By leveraging -575/-380/-48 VDC, organizations can achieve numerous benefits over a traditional alternating current design. These include energy efficiency, reliability, a smaller footprint, lower installation and maintenance costs, scalability, easier integration of renewable energy, utility rebates and credits, and safety.

The energy efficiency of DC systems is measured as end-to-end system efficiency, not merely that of a single component within the system -- and end-to-end is how organizations actually pay for power. Total energy savings can reach upward of 30% across both mechanical and electrical systems. Because of this efficiency, DC systems can qualify for various utility rebates and credits available to corporations. In addition, there are energy efficiency certificates (EECs) and renewable energy credits (RECs) available that further strengthen the business case.
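End-to-end efficiency is simply the product of the efficiencies of every conversion stage in the power path, which is why removing a stage matters. The Python sketch below illustrates only the arithmetic; the stage efficiencies are invented round numbers, not measured figures from Validus, APC or the Berkeley demonstration.

def end_to_end_efficiency(stage_efficiencies):
    # Multiply the efficiency of each conversion stage in the power path.
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# Invented, purely illustrative stage efficiencies:
ac_path = [0.97, 0.985, 0.92]   # UPS, PDU transformer, server AC power supply
dc_path = [0.97, 0.96]          # bulk rectifier, server DC-DC conversion

print(f"AC path: {end_to_end_efficiency(ac_path):.1%}")   # ~87.9%
print(f"DC path: {end_to_end_efficiency(dc_path):.1%}")   # ~93.1%
# How large this gap is in practice, if it exists at all, is exactly what the
# two sides of this face-off dispute.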


There are fewer power components in a direct current system, making it more reliable than an alternating current (AC) system, because there are fewer pieces to fail. With fewer power conversions, there is also less heat to affect the electronic equipment. A direct current system does not have sine waves or frequency to synchronize across multiple sources, which eliminates multiple points of failure and greatly simplifies the system.

Because a direct current power infrastructure has fewer components, it requires a smaller physical footprint than a traditional alternating current design. In addition, the bulk rectifiers can be sized up to 2.5 MW in a single unit, allowing greater power density than AC topologies. This space savings can amount to hundreds of millions of dollars saved over the life of a system. DC systems are designed to accommodate modularity to support a scalable infrastructure growth strategy.

With fewer components, the initial equipment and installation costs are less expensive than AC systems. Likewise, the overall maintenance costs are reduced by up to 50% over an alternating current system.

The majority of renewable energy sources generate direct current power, so there is no need to add multiple conversions to accommodate an alternating current power path. The energy sources can provide their direct current power straight to the power path, eliminating inverters and saving significant amounts of energy.

By utilizing simple power electronics and appropriate DC circuit breaker technology, DC systems can be deployed that are safer than AC systems from 380 to 600 volts. When using 48 VDC power, the system is considered "touch safe" from a code perspective, safer than 110/220 VAC systems.

The world's most reliable platforms already run on direct current. Nuclear submarines, aircraft carriers, data centers (UPS systems are backed up by DC battery strings), manufacturing facilities and telecommunication centers all run direct current. The long-term proven track record of direct current combined with the numerous financial and reliability aspects make it a natural choice for expanding or new data center environments.

AC power will continue in data centers because of convenience

By Neil Rasmussen, Senior Vice President of Innovation, APC

Why is AC power use in data centers preferable to DC?

There is no real choice between AC and DC power today because over 98% of available IT equipment can run only on AC power. The use of DC power in data centers is very small and has actually declined significantly over the past decade.

Nevertheless, there are a number of proposals for the industry to consider adopting a new DC distribution standard based around 380 V or 500 V, which claim to offer advantages over existing AC systems. Some telecom providers and cloud computing providers plan to build demonstration DC data centers, where they will have IT equipment customized for them to run on DC. Hopefully this will allow the industry to gain experience with DC over the next 10 years.

While the question of DC versus AC is of academic interest, AC is the only practical option for data centers in the foreseeable future. AC is ubiquitous, it is increasingly efficient, it is proven, it works, and it is certainly not going away.

What are the disadvantages of using DC power? In 2006, a demonstration project reported a 7% efficiency advantage for DC power in the data center when compared with AC. This created a lot of interest in DC, because 7% represents a significant amount of energy. However, since then there have been huge advances in the efficiency of AC power systems for data centers, which, according to reports from The Green Grid, have effectively negated any expected efficiency advantage for DC.

For example, AC power supplies for servers that had 30% energy loss a few years ago are now required to have less than 8% loss to achieve the Energy Star qualification. This is a reduction of over 70% in losses. UPS systems have also recently demonstrated remarkable improvements in efficiency.

Proponents of DC power rightly criticize the inefficiency of historic AC systems, but they ignore the reality that high-efficiency AC systems are already available now. For a new installation, a complete AC power system can be over 96% efficient, which is just as efficient as hypothetical DC designs. Even if DC has no efficiency advantage, there are other analyses that suggest DC might ultimately be more reliable or even less expensive than AC, but these are purely hypothetical until there is significant experience with actual DC data centers.

Why can the potential risks and expenses associated with using DC power in a data center not be mitigated enough to make DC a truly viable data center power choice? The risks and expenses of DC today are indeed quite high. In addition to the special safety risks of high-voltage DC with respect to electrical arcs and fire, one of the most difficult issues is the business risk of trying to plan a hybrid data center that provides both AC and DC during any transition period.

There are also major risks associated with the evolving standards relating to DC, which might make early DC designs obsolete or even illegal before the end of their useful life. But these problems are not a fundamental property of DC, they are due to the immaturity of regulations, lack of industry competencies and immaturity of products related to DC.

The key question for DC is why any early adopters would be interested in taking on the large risks and expenses of a DC transition, given that the efficiency advantages are now known to be very small or zero. Practical data center operators interested in efficiency will use high-efficiency AC systems and focus their efforts on improved cooling plants, where substantial inefficiencies still exist.


The Value of DC Power in Data Centers Still in Question

By: Mark Fontecchio

Using direct current (DC) power in data centers has long been an option, but the jury is still out on whether the downsides are worth it.

This year, the industry is studying whether a direct current power topology is more energy efficient than traditional alternating-current (AC) power, and if so, whether the potential risks and expenses associated with it can be mitigated enough to make DC a truly viable data center power choice. The Green Grid, a nonprofit focused on data center energy efficiency, for example, has taken a closer look at the use of DC power, and Syracuse University has begun to use DC-powered computing in a data center facility.

Direct current power has become more attractive because it can improve data center energy efficiency. DC is a kind of electrical current that travels through a circuit in only one direction, whereas alternating-current power is an electrical current that frequently reverses direction. In a DC-powered system, there is only one conversion: from AC to DC. And with fewer conversions, there are fewer opportunities for power and energy loss.
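As a rough illustration of why fewer conversions matter, the sketch below multiplies per-stage efficiencies to compare a multi-stage AC path against a single-rectification DC path. The stage names and efficiency values are assumptions chosen for illustration, not measured or vendor figures.

# Illustrative only: stage efficiencies are assumed values, not vendor data.

def end_to_end_efficiency(stages):
    """Multiply per-stage efficiencies to get the end-to-end efficiency."""
    efficiency = 1.0
    for _, stage_efficiency in stages:
        efficiency *= stage_efficiency
    return efficiency

# Hypothetical legacy AC path: several conversions between the utility and the load.
ac_stages = [
    ("UPS rectifier (AC to DC)", 0.96),
    ("UPS inverter (DC to AC)", 0.96),
    ("PDU transformer", 0.98),
    ("Server power supply (AC to DC)", 0.92),
]

# Hypothetical DC path: one bulk rectification, then DC all the way to the load.
dc_stages = [
    ("Bulk rectifier (AC to DC)", 0.96),
    ("Server DC-DC converter", 0.95),
]

for label, stages in (("AC path", ac_stages), ("DC path", dc_stages)):
    print(f"{label}: {end_to_end_efficiency(stages):.1%} end to end")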

DC-powered servers also don't need power supplies for extra conversion and thus save data center space. But the downside of DC power is that it can require much larger wires to carry the current, thus creating power buildup and arcing that can endanger IT equipment and staff.

DC power has long been used in the telecommunications industry because high-end PBX equipment runs on DC. But until recently, other enterprise data centers haven't considered running on DC power. Now, however, the thinking has begun to change.

"One of the things that is driving renewed interest is that you're seeing more of the telecom companies getting into the IT space and the data center space," said John Pflueger, a technical committee chairman at the Green Grid and a technology strategist at Dell Inc. "So you're seeing a natural wish for telecoms to extend their architectures into the data center. I think that's helping to keep some of the issues on the table in reference to direct current."

Putting DC power in the data center

For Syracuse University, going to DC power hasn't been as much of an adjustment as its CIO, Christopher Sedore, initially expected. The university has carved out a portion of its new data center facility for DC-power computing and plans to measure its use. Last year, the school built a 4,000-square-foot data center with 500 kilowatts of power capacity. It carved out 150 kilowatts of that for a DC-powered distribution. Right now the center's IBM System z10 mainframe -- used for teaching and research -- runs on DC power. A rectifier converts the AC power to DC power, and the facility uses power distribution equipment from Validus, a DC-power components company.


"It doesn't look dramatically different from AC," Sedore said, adding that the university has in-house electricians who know how to deal with a DC power infrastructure.

The university is still installing the various instruments that will measure the facility's energy efficiency. In addition to having a DC power area, the facility is testing back-of-the-rack liquid cooling, microturbine generation, and other cutting-edge designs.

A lack of DC-powered data center gear

But despite the benefits of DC power, Sedore acknowledges a hindrance -- in addition to safety risks -- to running a DC-powered data center.

Studying the efficiency of data center DC power traces back nearly four years, when Lawrence Berkeley National Laboratory (LBNL) set up a demonstration project with Sun Microsystems Inc. Researchers discovered energy savings of up to 28% compared with traditional, older AC topologies and as much as 7% savings compared with so-called best-in-class AC topologies that included highly efficient uninterruptible power supplies (UPSes) and server power supplies. In March 2008, LBNL released its final report on DC power in the data center.

The Green Grid then did a peer review of the LBNL study and concluded that energy savings were in the 4% to 6% range, which it also asserted would not be worthwhile for retrofit situations. Last year the organization released a second paper researching the efficiency of different power distributions in the data center. It considered 11 power topologies, including three DC lines, and concluded that none was the most efficient over the whole data center load range.

Roger Tipley, a Green Grid board member and an engineering strategist at Hewlett-Packard Co., added that moderately high-voltage DC power poses some safety concerns, where the power can build up and arc. But, he said, these concerns can be addressed through preventative measures such as shielding.

"Four-hundred volts DC may be more dangerous than 400-volts AC," he said.

A Green Grid technical committee will look at issues ranging from general questions, such as how to assess whether DC power is a good choice, to more nuanced details, such as what kind of electrical connector to use.

"You will find people anywhere in the spectrum, from 'AC is fantastic' to 'DC is the next big thing,' " Pflueger said. "You can find people at both extremes and anywhere in between. In some circumstances, it can save some energy, but it's still a question of how much and whether the cost of change is recoverable."


Does Data Center Uptime Affect Energy Efficiency?

By: Mark Fontecchio

The federal Environmental Protection Agency recently found that a data center's uptime has no statistically significant effect on its energy efficiency. But does the claim hold water?

At first blush, it would seem that uptime would hurt data center efficiency. Typically the more uptime a facility has, the more redundancy it has to build in to account for equipment failure. But that apparently is not as large a factor as other design elements.

"Tier level was not a huge predictor of energy performance," said Alexandra Sullivan, an engineer in the EPA's Energy Star program for commercial buildings. "When we looked at the data, we did not observe a significant relationship between tier and energy use."

The data was collected between March 2008 and June 2009. For the study, Energy Star looked at more than 100 data centers to determine their energy efficiency. The agency is using the information to create an Energy Star standard for data center facilities, which will be released on June 7. The software will allow companies to rate their data center's energy efficiency from 1 to 100, a scale similar to that for other commercial buildings.

The finding that a data center's uptime and energy efficiency are unrelated was not a surprise to Tom Deaderick, director of Tier 3-certified OnePartner's Advanced Technology and Applications Center (ATAC), a hosting center in southwest Virginia. He said there is no reason why designing a facility for high availability necessarily hurts energy efficiency. A good design can achieve both, he maintains.

"The things that we've done around energy efficiency -- none of those were engineered for tier classification," he said. "I really don't think the tier standards have a whole lot to do with energy efficiency."

The ATAC facility has energy efficiency designed throughout. Hot/cold aisles, blanking panels, perforated ceiling tiles over the hot aisles to exhaust hot air faster, grommets to further prevent wasted cooling air and neat cabling in the subfloor are all designs in the ATAC data center to make it more energy efficient.

That said, the facility also has plenty of redundancy. It sits in a room that Liebert thinks only requires one computer room air conditioner (CRAC), but ATAC installed three to help ensure high uptime.

Other factors aside from redundancy have a much larger effect on a data center's energy efficiency, which is often quantified by its power usage effectiveness, or PUE, which improves as it decreases toward 1. Capacity is one.

"There could be a Tier 2 site with a very light load, where they have a whole bunch of excess capacity and the PUE will be high," said Pitt Turner, executive director of the

Page 73: Data Center Energy Efficiency Guidedocs.media.bitpipe.com/io_25x/io...EnergyEfficiency... · Data center managers have battled a growing power bill for the past several years, but

73

Uptime Institute. "Then there could be a Tier 4 site that has a load that is very close to full capacity, and its PUE is lower. The idle capacity is the biggest contributor."

Sometimes it can be as simple as the kind of equipment one uses, which Turner said was the second-biggest contributor to a site's PUE. For example, a Tier 2 site using outdated uninterruptible power supply (UPS) units running at 85% efficiency could have a higher PUE than a Tier 4 facility using UPSes with 95% or more efficiency.

"There are probably some pieces of the PUE calculation that could be attributed to the level of redundancy," Turner said. "But what we have found is that it is overwhelmed by partial load conditions and idle capacity."


Will a Transformerless UPS Work for Your Data Center?

By: Julius Neudorfer

The rise of a new generation of transformerless uninterruptible power supplies (UPSes) raises the question of how these designs compare with their transformer-based counterparts. Note that this tip compares transformerless UPSes with three-phase dual conversion units that use an internal transformer as an integral part of the inverter system, not just an input or output transformer solely for voltage conversion.

Over the past five to seven years, the transformerless UPS has come to dominate the smaller three-phase (30 kVA and under) marketplace. These units are much smaller, lighter and lower in cost than the previous generation of transformer-based units. This type of design has rapidly moved up to the 100 kVA range and established a solid foothold up to 300 kVA units -- and, when utilized as part of multi-module systems, to 1000 kVA or more.

IGBTs and the transformerless UPS

A little history first: Older UPSes were based on silicon-controlled rectifier (SCR) inverter technology -- SCRs switch fully on or off -- and required internal transformers to operate, so virtually all UPSes were once designed with transformers. This changed with the advent of insulated-gate bipolar transistor (IGBT) technology. IGBTs are the core technology underpinning the existence of the transformerless UPS. A modern IGBT-based UPS inverter uses high-frequency pulse-width modulation to re-create a nearly pure sinusoidal waveform and eliminates the need for bulky output transformers or large iron-core output filters. (IGBTs are also used in transformer-based UPSes.)

Transformerless designs began to appear in smaller UPSes in the mid-1990s and became the mainstream design by 2000. Earlier UPS designs relied on an input transformer to boost low incoming line voltages without forcing the UPS on to battery during low-line or "brownout" conditions. The newer systems also use IGBTs for more efficient AC-to-DC input conversion, which allows the DC bus and the inverter to hold a steady output voltage over a broader range of input voltage and frequency variations without going on battery.

Transformerless vs. transformer-based: What's more effective?

In the data center world, traditionally only two things have mattered: reliability and availability -- the proverbial five 9s. One of the primary claims made by vendors and some customers is that a transformer-based UPS is more robust and therefore more reliable.

In today's energy-conscious world, we all want a more efficient UPS, and a transformer, by its very nature, will introduce some additional losses to the system. One of the arguments in favor of the transformerless system is that transformers reduce energy efficiency. Older transformers lost 2-3% and sometimes more to the non-linear IT loads. More recently, since the advent of the TP-1-rated transformer (as well as using a high K rating, i.e., K20), this has improved to only 1.5-2%. However, in the 24/7 mission-critical world of the data center, efficiency is still -- and perhaps always will be -- a distant second to reliability and availability.

Of course, a theoretical analysis of technical differences is nearly meaningless without looking at the actual available products and market acceptance. Besides the technical arguments presented by both camps, it sometimes boils down to personal preference. Many times, the choice is made based on the preference of the specifying engineer or those who make the final purchasing decision. The two camps seem to closely align themselves and their choices to the different manufacturers of these systems.

So are you a transformerless "liberal" or a transformer-demanding "conservative"? Is this just "old school versus new school" thinking or are there solid differences and benefits that each design offers?

Vendor UPS offerings

One manufacturer in particular, Emerson-Liebert, strongly favors the transformer-based design for its flagship line of larger UPSes, which are available up to 750 kVA as a single module. The vendor also offers a full line of transformerless systems in the lower power ranges.

"The dividing line seems to fall in the 200-300 kVA range," said Alan French, Manager of Technical Relations at Emerson-Liebert. "Below that, a substantial number of our sales of new systems are transformerless, while the larger units are mostly transformer-based. We believe that our typical large-enterprise customer wants the extra measure of reliability that a transformer-based UPS provides."

Other brands, such as Schneider Electric's APC and MGE, offer both types of systems. APC offers both modular and transformerless designs, while the MGE division offers transformer-based systems in some of the larger sizes and transformerless units in the lower (150 kVA or less) ranges.

"APC manufactures both transformer-based and transformerless UPS topologies, but we see an ongoing shift toward transformerless designs," said APC's John Collins, director of 3 Phase UPS Product Management. "Note that a transformer-based UPS should not be considered lower or higher reliability or lower or higher performance just because there happens to be a transformer in the UPS. Our advice to customers is, 'Don't worry about the topology.' Be clear with your potential vendors regarding your intended electrical performance, your application and your financial goal of total cost of ownership or lowest first cost."

Eaton-Powerware has gone transformerless across virtually the entire product line, up to and including systems of 1100 kVA (composed of multiple modules).

"We utilize a transformerless design primarily because it provides reductions in size, weight, audible noise and output impedance (better transient response)," said product manager Ed Spears. "Additional advantages include an improvement in UPS system efficiency of 1-4% and, of course, a lower BTUH rating. In our newer designs, the absence

Page 76: Data Center Energy Efficiency Guidedocs.media.bitpipe.com/io_25x/io...EnergyEfficiency... · Data center managers have battled a growing power bill for the past several years, but

76

of the output transformer allows us to instantly (within 2 ms) transition our UPS from 'ready state' to full power-processing operation, since we do not need to magnetize a transformer. This is useful in our Energy Saver System and Variable Module Managements System, which improves efficiency significantly (2-10%) over previous and conventional designs."

According to Chuck Heller, product manager of Three Phase Power at Chloride, the company offers both types of UPSes but is transitioning to a transformerless product range. "We see a general shift toward transformerless UPS designs for data center applications, with the use of internal transformers being driven by specific application requirements," he said. "Interest in using 415/230 V distribution systems as a way to further improve overall data center efficiency is growing."

The rise of the transformerless UPS

Besides efficiency, these two types of systems are significantly different in size, weight and cost. One of the driving forces behind this new breed of UPS is the exponentially increasing demand for overall power and power density in the data center. When power requirements were only 1-2 kW per rack, the UPS footprint was fairly small in relation to the total white space and did not require an inordinate amount of space within the overall data center envelope. As power requirements jumped to 5, 10, 20 kW or more per rack, the ratio of space required by the UPS changed significantly.

Not that many years ago, a 30-50 kVA UPS was ample for a typical small data center, but now a single rack of four blade servers can require 20-30 kVA, so that the "small" UPS is now 100-250 kVA for a small data center. To meet customer demand for more power that had to fit within limited space, manufacturers started adopting the transformerless design for larger UPS systems.

Also, by eliminating the transformer cost, the UPS price was significantly reduced -- always a market driver. Even for a data center that believes a transformer-based UPS may be more reliable, lower UPS costs may now allow it to budget for an N+1, a full 2N or even a 2(N+1) modular redundant design. With this added redundancy, even if there were a UPS failure, the other UPS (and/or power path) would be able to carry the load. And since transformerless units are smaller, more efficient and cheaper, the data center could better afford N+1 or 2N redundancy in a smaller site. This helped overcome the reliability and availability issue, which makers of transformer-based systems claim as a primary advantage.

The advantages of a transformer-based system

So are transformer-based systems passé? Transformers are not without merit and, in fact, are inherently part of many power systems, whether they are contained in the UPS or located upstream or downstream from the UPS. One primary function of a transformer is to transform the voltage. In a typical power chain, they are sometimes external and used upstream at 13 kV (or higher on larger installations), stepped down to 480 V (or 208 V for some smaller systems) to feed the UPS. In North America, downstream from a 480 V UPS output (some Canadian systems use 600 V), they are required to step down to 208/120 V. Transformers can be incorporated in an adjacent cabinet or in external PDUs.

In Europe, the voltage and distribution scheme is based on 400/230 V (stated generically to include 380/220 V through 415/240 V). Transformers are used upstream of the UPS to convert the high-transmission voltage to 400/230 V. This somewhat changes the issues downstream, since virtually all the IT loads are single-phase 230 V (line to neutral). The UPS inputs and outputs 400/230 V, and there is no voltage conversion or transformer required. In this case, the inclusion of a transformer could play a potentially beneficial role by providing isolation and acting as a buffer for phase imbalance as well as fault current limitation via its impedance. Yet many European UPS manufacturers are transformerless or moving in that direction. The use of 400/230 V systems is being considered in the U.S., and will be discussed in part 2 of this series.

There are some instances where input and output transformers are necessary, such as for medical equipment, where total ground and neutral isolation and avoiding any leakage currents are required.

What are the advantages of transformers in the UPS in a typical data center application? They offer greater tolerance to phase imbalance from single-phase loads, which are very typical of most IT loads, both for 120 V to neutral and 208 V line-to-line. This is especially true for a 208/208 V transformerless UPS system where the inverter output goes directly to the power distribution system. In that case, a transformer (either internal to the UPS or in a separate PDU cabinet) will partially mitigate phase imbalance and prevent the UPS from overloading from an imbalance.

It is important to note that in a typical 208 V transformerless UPS system, there are essentially three inverters tied to a common neutral, each facing the load (L1, L2, L3 + Neutral + Ground). In a typical small to mid-size installation, the unbalanced loads may sometimes look like this:

L1 = 68%, L2 = 90%, L3 = 52%

Overall, the UPS is only delivering approximately 70% of its total rating; however, the L2 circuit is at 90% and only has 10% of headroom before reaching its maximum, and the UPS would report an overload if L2 experiences any additional load. While a transformer would not be happy with this imbalance, it would normally tolerate this L2 imbalance better. The load presented to the UPS by the primary side of the transformer would be very close to 70% and balanced across all three phases. Obviously this is a somewhat extreme example, and a well-managed power distribution system would not normally allow this level of imbalance. However, when installing equipment, IT personnel are prone to using whichever circuits are available and may not have any branch-level power metering, a very common scenario.
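As a quick check of the arithmetic above, the following minimal sketch uses the same per-phase percentages to show how the worst-loaded phase limits a transformerless UPS, while a transformer presents the roughly balanced average to the UPS (transformer losses ignored).

# Per-phase loading from the example above, as a fraction of each phase's rating.
phase_load = {"L1": 0.68, "L2": 0.90, "L3": 0.52}

# Transformerless 208 V UPS: each inverter faces its own phase directly,
# so the most heavily loaded phase sets the remaining headroom.
aggregate = sum(phase_load.values()) / len(phase_load)
worst_phase, worst = max(phase_load.items(), key=lambda item: item[1])

print(f"Aggregate UPS loading: {aggregate:.0%}")                      # about 70%
print(f"Worst phase {worst_phase}: {worst:.0%} loaded, "
      f"{1 - worst:.0%} headroom before an overload is reported")     # L2 at 90%

# A transformer reflects roughly the balanced average back to the UPS on all
# three phases (ignoring the transformer's own losses).
print(f"Approximate per-phase load seen through a transformer: {aggregate:.0%}")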


A transformer's inherent impedance can also limit the instantaneous over-current from a circuit fault. This is especially true for an all-208 V transformerless power distribution system. However, a typical mid-size or larger data center commonly uses a 480 V UPS, and the inverter output will wind up facing an external stepdown transformer, normally located in one or more PDU units.

The transformer also helps mitigate harmonics caused by a less-than-perfect sine wave from the inverter output and also by the non-linear loads caused by the IT equipment's switching power supplies. However, if the PDU has a transformer, as in the case of a 480 V UPS, it mostly negates the argument of the benefit that an internal (inverter) transformer would offer.

In some cases, an inverter transformer-based system provides somewhat better neutral isolation from poor-quality utility power to the IT load; however, this does not protect the UPS input. In cases of extremely poor-quality mains power, the addition of an input transformer will help to limit the energy intensity of some spikes and surges to the UPS. Yet if the utility power quality is that poor, dedicated power conditioning (which usually contains inductors) and/or transient voltage surge suppression is recommended -- and you may want to rethink placing a high-value data center in that location at all. Either way, the case for an inverter transformer becomes somewhat moot.

The market will decide

Either type of modern UPS with an IGBT-based input section can also control and shape its input power factor to approach unity (typically over 0.95) over a much broader range of loads. This lowers the strain on and improves the efficiency of the upstream power path and especially the back-up generator, which no longer needs to be significantly oversized to support the UPS.

The main input voltage may also influence the decision of UPS type. In the lower power arena (i.e., 100 kVA and under), 208 V input systems are fairly common. The choice is then dictated by the available utility power and the size of the installation. The transformerless UPS is solidly entrenched with the majority of sales in the below-100 kVA market and holds approximately 50% of the 100-250 kVA space for new units.

In the end, vendors will always move toward what customers demand. As long as customers or their engineers want to use a transformer-based UPS, vendors will build them. As time goes on and the transformerless UPS eventually establishes a record of reliability, the market will decide which will become the favored choice in the mid-market space.

Like the early light beer commercials (real beer versus light beer -- i.e., "great taste, less filling"), the debate over the transformer-based versus transformerless UPS sometimes becomes a matter of political or religious beliefs. Ultimately, however, unless there proves to be a rash of failures of transformerless UPSes, they will continue to increase their market penetration into the larger spaces.


How to Choose the Right Uninterruptible Power Supply for Your Data Center

By: Bob McFarlane

When it comes to buying the right uninterruptible power supply (UPS) for your data center, size matters. This tip explains UPS sizing and capacity planning.

Child 1: "My dad's UPS is bigger than your dad's!"

Child 2: "But my dad's has more kVA per kilowatt!"

This imaginary children's banter is indicative of the confusion that has long surrounded UPS ratings. And it has become so common to oversize a UPS that "bigger is better" has usually been taken for granted. So how big should your UPS be, and what do those ratings really mean? I'll explain why Child 1's father may be wasting energy, and why Child 2's brag is actually a negative. If you've ever felt confused or misled by UPS ratings, you're not alone.

First we need to understand terminology.

Volts (V) x amps (A) = volt-amperes, or VA. (We'll get to watts momentarily.)

So 480 V x 250 A = 120,000 VA.

That's a big number, so we divide by 1,000 and get 120 kilovolt-amperes, or 120 kVA.

In my article on calculating the data center power load, I said that with alternating current (AC), VA does not equal watts, but I didn't say why. I also said we could probably ignore the error with today's servers. But for UPS ratings, the difference does matter. Let's discuss the reason.

For AC power, the complete formula is as follows:

Watts = volts x amperes x power factor, or W = V x A x pf.

Power factor is defined as the ratio between "real power" and "apparent power," but this is not an engineering article, so that's all we're going to say on that subject. Watts is the real power and volt-amperes is the apparent power, so VA is obviously something mysterious. But it's the watts figure that's important for today's data centers, so we can let the mystery be.

We do need to understand that this thing called power factor is rarely 1.0, except for incandescent light bulbs, heaters and toasters. It's usually less than 1.0, and never more, so watts are generally less than volt-amperes. Now let's look back at those servers, which today have power factors between 0.95 and 0.99.


120 V x 3.0 A = 360 VA; 360 VA x 0.95 pf = 342 W

That's a small difference between VA and watts. With better power factors it's even less, which is why we said the error doesn't matter much unless you have a lot of hardware.

Most UPS systems, on the other hand, are sold based on kVA ratings, but for years have been designed with power factors of 0.8. So a 100 kVA UPS with a 0.8 pf can deliver only 80 kW of real power. If you believed they were the same, you'd eventually find you had a 20,000 W shortfall. That's one reason many people have been surprised when their UPS said "98% capacity" but they were nowhere near the kVA rating they bought. Rule: If your UPS power factor is less than your computer hardware power factor, your actual UPS capacity will be its kW rating, not its kVA rating.

Since server power factors have gotten better, many UPSes are now designed with a 0.9 power factor, so a 100 kVA UPS will have 90 kW of capacity. And at least one manufacturer designs for unity, or 1.0 power factor, meaning that the kW and kVA ratings are the same. (With such a UPS, the load limit will be kVA, not kW, because your computer equipment is not perfect. In other words, a 100 kW/100 kVA UPS will probably max out at around 95 kW.) We won't discuss small UPSes that often have power factors around 0.7 -- they're specified in watts, so you will know.
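To make the rule concrete, here is a minimal sketch of the kVA-versus-kW arithmetic, using the same 100 kVA unit and the 0.8, 0.9 and 1.0 design power factors discussed above; the 100 kW load is the hypothetical case behind the 20,000 W shortfall.

def usable_ups_kw(kva_rating, design_power_factor):
    """Real-power capacity: the UPS kW rating is its kVA rating times its design power factor."""
    return kva_rating * design_power_factor

# Assumed examples echoing the text: the same 100 kVA frame at three design power factors.
for pf in (0.8, 0.9, 1.0):
    print(f"100 kVA UPS at {pf} pf -> {usable_ups_kw(100, pf):.0f} kW usable")

# Believing kVA equals kW leads to the shortfall described above.
load_kw = 100
capacity_kw = usable_ups_kw(100, 0.8)
print(f"A {load_kw} kW load on the 0.8 pf unit is short by {(load_kw - capacity_kw):.0f} kW")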

How to size your uninterruptible power supply

When we know the kilowatt and kVA ratings, we can size our UPS. We previously showed how to estimate real load watts and explained why data center power is so often figured 40% to 60% high. Now we'll show how to "right size" the UPS. Start with the real estimated Day One data center load in kilowatts, then add some headroom. A good rule of thumb is 125% (which is 80% loading). Then pick the next highest standard-size UPS. That provides some growth, as well as capacity for installing parallel systems during an upgrade.
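That rule of thumb translates into a few lines of code: apply roughly 125% headroom to the Day One load, then round up to the next standard frame size. The list of standard sizes below is an assumption for illustration; real product lineups vary by vendor.

# Hypothetical catalogue of standard UPS frame sizes in kW; real lineups vary by vendor.
STANDARD_SIZES_KW = [40, 60, 80, 100, 160, 200, 250, 300, 400, 500]

def right_size_ups(day_one_load_kw, headroom=1.25):
    """Apply ~125% headroom (80% loading) to the Day One load, then take the next standard size up."""
    target_kw = day_one_load_kw * headroom
    for size_kw in STANDARD_SIZES_KW:
        if size_kw >= target_kw:
            return size_kw
    raise ValueError("Load exceeds the largest standard frame; consider parallel modules.")

print(right_size_ups(70))   # 70 kW x 1.25 = 87.5 kW -> 100 kW frame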

That's good for a while, but it doesn't cover the long term. We also need to grow to the ultimate load we calculated, but we don't want to over-size the system in anticipation. At low loads, UPSes waste more of their power in heat, and are generally most efficient when running close to their rated capacities. Efficiencies vary widely, but many double conversion UPSes are 90-95% at 80-100% load, and then they go down. There are high-efficiency systems available today that go up to 98%, often using different technologies than we're used to, so to get real efficiency we may need to think about things differently than we have been. But let's look at more conventional system efficiencies as an industry norm.

89% at 50% load; 88% at 40% load; 86% at 30% load; 82% at 20% load.

That's lost energy, 24/7/365, which takes more power to cool.


A good consideration today is one of the modular or incrementally enabled systems. They let you plan for your maximum growth, but provide only the actual capacity you initially need. Modular systems let you plug in UPS capacity as you need it. Incrementally enabled systems provide the same end result, but are shipped with the extra capacity already installed, but disabled. It's activated via software or firmware when you're ready. These systems all grow differently, but the principles are the same -- add capacity and pay for it when you need it. Of course, there's an up-front premium for this flexibility, but it avoids the full initial capital expense while also saving energy, which probably means a good return on investment. Let's look at why.

A 1% efficiency loss on a 100 kW UPS is 1,000 W or 24 kilowatt-hours (kWh) every day of every year.

1% x 100 kW = 8,760 kWh/year = $876 @ $0.10 and $1,226 @ $0.14 per kWh
1% x 500 kW = 43,800 kWh/year = $4,380 @ $0.10 and $6,132 @ $0.14 per kWh
1% x 1,000 kW = 87,600 kWh/year = $8,760 @ $0.10 and $12,264 @ $0.14 per kWh
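These figures follow from straightforward annualization; the sketch below reproduces them using the same assumed electricity prices of $0.10 and $0.14 per kWh (cooling overhead not included).

HOURS_PER_YEAR = 8760

def annual_loss_cost(ups_kw, efficiency_loss, price_per_kwh):
    """Cost of an efficiency loss running 24/7/365 (cooling overhead not included)."""
    wasted_kw = ups_kw * efficiency_loss
    return wasted_kw * HOURS_PER_YEAR * price_per_kwh

for ups_kw in (100, 500, 1000):
    kwh_per_year = ups_kw * 0.01 * HOURS_PER_YEAR
    low, high = (annual_loss_cost(ups_kw, 0.01, price) for price in (0.10, 0.14))
    print(f"1% of {ups_kw} kW = {kwh_per_year:,.0f} kWh/year = "
          f"${low:,.0f} @ $0.10 and ${high:,.0f} @ $0.14 per kWh")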

A 5% efficiency loss is more dramatic, which is why right-sizing is even more important for redundant UPSes. Let's assume a 100 kW UPS and look at efficiency with N+1 and 2N redundancy at two different module or increment sizes.

With 50 kW modules, N+1 is actually a 150 kW system with 100 kW of usable capacity. (If any one of the three modules fails, the other two still maintain the system.) Eighty percent of 100 kW is 80 kW, which is only 53% of the actual 150 kW capacity. We've dropped less than 1% in efficiency -- not too bad, except that systems often run way below this level.

With 10 kW modules, N+1 is a 110 kW system, still with 100 kW of usable capacity. (If any one of the 11 modules fails, the other 10 still maintain the system.) Eighty percent of 100 kW is 80 kW, which is 73% of the actual 110 kW capacity. We're back in the maximum efficiency range, even at lower usage levels.

2N is just two 100 kW systems, regardless of module size or configuration, each running only half the 80 kW load. (If either system fails, the other picks up the total load.) Forty kilowatts is only 40% of design capacity, which is getting into the low-efficiency range, and below this the losses increase rapidly. This is why green design means carefully considering the level of redundancy we really need.
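The loading percentages in the preceding paragraphs are easy to verify; the minimal sketch below applies the same 80 kW running load to each configuration described above.

design_load_kw = 100
running_load_kw = design_load_kw * 0.80   # the 80 kW actually being drawn

# Installed capacity for each configuration described above (assumed module counts).
configurations = {
    "N+1 with 50 kW modules (3 x 50 kW)": 150,
    "N+1 with 10 kW modules (11 x 10 kW)": 110,
    "2N (two independent 100 kW systems)": 200,
}

for name, installed_kw in configurations.items():
    loading = running_load_kw / installed_kw
    print(f"{name}: running at {loading:.0%} of installed capacity")
# Expected: about 53%, 73% and 40% -- the same figures as in the text.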

So to properly size a UPS, you must do the following:

Make realistic load estimates; provide reasonable headroom and short-term growth; know both the kilowatt and kVA ratings; buy with the capability of growing to long-term capacity but activate only what you need; think carefully about the level of redundancy you really require; check the efficiencies at the load level you will be running; and look carefully at the various UPS options on the market today.

You may be surprised by the difference a careful choice can make.


Using Flywheel Power for Data Center Uninterruptible Power Supply Backup

By: Christopher M. Johnston

To protect against downtime during a power outage, data centers sometimes use flywheels as a source of backup energy for uninterruptible power supply (UPS) systems instead of traditional storage batteries. But before you opt for flywheels for backup energy, you should understand how they differ from storage batteries. Below is a list of prime differences to consider when deciding if flywheel power is right for your organization.

1. Flywheels typically supply full-load energy for much shorter periods of time.
2. Flywheels require less space and weigh less.
3. Flywheel air-conditioning and ventilation requirements are less restrictive.
4. Flywheels consume slightly more power.
5. Flywheels require less maintenance.
6. Flywheels have a potentially longer service life.
7. Flywheels do not require storage battery spill prevention and recycling at the end of service life.
8. Flywheels are not limited by the number of discharge cycles they can supply.
9. Flywheels are limited in the frequency of discharge cycles they can supply.

Flywheels vs. traditional storage batteries: Key considerations

The UPS reserve energy source must support the UPS output load while UPS input power is unavailable or substandard. This situation normally occurs after the electrical utility has failed and before the standby power system is online. As you determine whether flywheels are appropriate for a project, the amount of time that the reserve energy must supply the UPS output is key. For comparable installed cost, a flywheel will provide about 15 seconds of reserve energy at full UPS output load, while a storage battery will provide at least 10 minutes. Given 15 seconds of flywheel reserve energy, the UPS capacity must be limited to what one standby generator can supply. In 15 seconds, the standby power system must complete the following tasks:

1. It must recognize the utility loss.
2. It must wait for any utility automatic transfer switch or re-closer to try to restore utility power.
3. If the utility power is not restored, it must crank the generator.
4. It must transfer the UPS to the generator when its voltage is adequate.

With less than 15 seconds of reserve energy, there is not enough time to reliably parallel two or more standby generators. You can arrange switching to provide generator redundancy, but at the end of the day the UPS system capacity is limited by the capacity of a single generator. A 3,000-kW, standby-rated generator (the largest capacity readily available in the U.S.) limits the maximum UPS system capacity to about 2,200 kW. I view this limit as the major consideration.


How flywheel backup power differs from other methods

Space requirements. Assuming 2 feet of side clearance, 2 feet of rear clearance and 3.5 feet of front clearance, a flywheel for a 675 kW UPS module requires about 121 square feet of floor area and weighs about 9,400 pounds. A vented wet cell storage battery on two-tier racks for the same UPS module requires about 350 square feet of floor area, or almost three times the area required by the flywheel, and weighs 33,000 pounds (over three times that of the flywheel). A valve-regulated lead acid (VRLA) storage battery in cabinets for the same UPS module will require about 250 square feet of floor area (twice that required by the flywheel) and weigh 35,000 pounds (almost four times that of the flywheel).

Ventilation. Flywheels require the same wide operating temperature range as UPS equipment (32 degrees Fahrenheit to 104 degrees Fahrenheit), while storage batteries should be maintained at 77 degrees Fahrenheit for rated performance. Storage batteries also require ventilation to prevent hydrogen accumulation, and hydrogen detectors to alarm on accumulation. Therefore, flywheels can be installed in the same room as UPS equipment, while storage batteries should be installed in separate battery rooms.

Energy consumption. Flywheels consume more energy than storage batteries. For a flywheel supporting a 675 kW UPS module, one manufacturer advertises 1.5 kW losses (0.2% of module output), while another advertises 6.6 kW losses (1% of module output) when the flywheel is fully charged and online. In my experience, the float charging of a storage battery supporting the same 675 kW UPS module requires about 0.3 kW (0.04% of module output). Assuming 95% UPS rectifier efficiency, a site power usage effectiveness (PUE) of 1.7 and 10 cents per kWh cost of purchased electricity, the flywheel described above would cost $1,900 to $9,900 more per year to operate than a storage battery.
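One plausible reading of the assumptions above -- losses drawn through the 95% rectifier, burdened with the 1.7 site PUE and priced at $0.10 per kWh -- lands close to the quoted $1,900-to-$9,900 range. The sketch below is just that arithmetic; it is an interpretation, not necessarily the author's exact calculation.

HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10       # assumed purchased-electricity cost
RECTIFIER_EFFICIENCY = 0.95
SITE_PUE = 1.7
BATTERY_FLOAT_KW = 0.3     # float charging for the battery supporting the same 675 kW module

def extra_annual_cost(flywheel_loss_kw):
    """Extra yearly cost of a flywheel over a float-charged battery, per 675 kW UPS module."""
    extra_kw = flywheel_loss_kw - BATTERY_FLOAT_KW
    # Treat the losses as drawn through the UPS rectifier and burdened with the site PUE.
    site_kw = extra_kw / RECTIFIER_EFFICIENCY * SITE_PUE
    return site_kw * HOURS_PER_YEAR * PRICE_PER_KWH

for loss_kw in (1.5, 6.6):
    print(f"{loss_kw} kW of flywheel losses -> roughly ${extra_annual_cost(loss_kw):,.0f} more per year")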

Maintenance requirements and service life. A reliable storage battery requires more maintenance than a flywheel. At minimum, vented wet cells need maintenance twice a year – a 6-month checkup and a more in-depth maintenance check six months later. In addition, they require interim quarterly maintenance when batteries are new and more frequent electrolyte replacement. VRLA cells also need maintenance every six months. One flywheel manufacturer recommends annual maintenance and a change of bearings every three years, while another recommends annual maintenance and a capacitor change every six years.

Modern flywheel manufacturers advertise a 20-year service life, which is based on their own forecasts. But none of the products in use today have been in service for more than 11 years. I expect that they will reach the 20-year service life that corresponds to today's UPS products. In my experience, storage batteries have a shorter service life than their warranties imply; vented wet cells rarely exceed a 14-year service life, despite a 20-year warranty, while low-cost VRLA cells rarely exceed a four-year service life, despite a 10-year warranty. It is reasonable to assume that the storage battery must be replaced during a UPS system's service life, while a flywheel will not.

Hazardous materials. All commonly used storage batteries contain an acidic electrolyte and lead, hazardous substances that are governmentally regulated. Spill prevention is required on most sites with storage batteries, and recycling of storage batteries is mandatory. Flywheels do not contain large amounts of hazardous materials and are not governmentally regulated.

Discharge cycle limits. There are limits on the number of discharge cycles that a storage battery can supply during its service life. One manufacturer's standard vented wet cell is rated for 2,700 discharges of 30 seconds or less, with a premium cost model rated for 10,500 such discharges. Discharge data on VRLA cells is less available, although one manufacturer states that its product is rated for 100 discharges of 15 minutes. Flywheel manufacturers state no limit on the number of discharges they can supply during service life. As a practical limit, I assume that the limit is something less than 20 per hour, since one manufacturer states that its product requires 2.5 minutes to recharge after a discharge. These discharge cycle limits become relevant only where the utility supply is very unreliable.

Let's assume a desired 12-year service life for a vented wet cell that is rated for 2,700 discharges; this is an average of 18 discharges per month, or one discharge every 38 hours. If the utility is less reliable than the calculated average, expect that the battery service life will be less than desired. If you assumed the premium-cost battery and a 14-year service life, then the average would be 63 discharges per month, or one discharge every 12 hours.
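The discharge-budget arithmetic above can be reproduced by spreading a cell's rated discharge count over the desired service life; the sketch below uses the cycle ratings and lifetimes quoted in the text (the article rounds the results to whole numbers).

HOURS_PER_YEAR = 8760

def discharge_budget(rated_discharges, service_life_years):
    """Average discharges per month and hours between discharges over the desired life."""
    per_month = rated_discharges / (service_life_years * 12)
    hours_between = service_life_years * HOURS_PER_YEAR / rated_discharges
    return per_month, hours_between

for label, cycles, years in (("Standard vented wet cell", 2700, 12),
                             ("Premium vented wet cell", 10500, 14)):
    per_month, hours_between = discharge_budget(cycles, years)
    print(f"{label}: about {per_month:.1f} discharges/month, one every {hours_between:.1f} hours")
# Prints roughly 18.8/month and 38.9 hours, and 62.5/month and 11.7 hours;
# the article rounds these to 18 and 38, and 63 and 12.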

However, there are limits on the number of discharge cycles that a flywheel can supply during a short time period. A typical flywheel requires about 2.5 minutes of recharge time after a discharge. Like a storage battery, flywheel recharge is usually accomplished when the UPS is supplied by the utility so as to minimize the standby power requirement. If another discharge is needed before recharge is completed, the flywheel may not be adequate. Discharge frequency should be considered. Let's assume that you have a site with a UPS storage battery and the utility supply is less reliable than advertised, but most of the utility outages are less than 15 seconds. Your storage battery's service life is being rapidly reduced. Under either scenario, the standby power plant will get a lot of exercise and burn a lot of fuel, driving up operating costs and potentially exceeding air quality permit limits. Assuming that the standby power plant operates for two hours after every start, 18 battery discharges per month becomes 430 hours per year of standby power plant run time. I would not recommend that a client consider a site with a utility supply that is this unreliable.

I will now partially contradict the previous sentence. Let's assume that you have a site with a UPS storage battery and the utility supply is less reliable than advertised, but most of the utility outages are less than 15 seconds. Your storage battery's service life reduces rapidly. You could remedy this problem by installing a flywheel in parallel with the storage battery. The flywheel controls can be arranged so that flywheel will supply the UPS until its stored energy is exhausted (15 seconds), at which time the storage batteries supply the UPS. You could also delay the start of the standby power plant to coincide with when the flywheel is exhausted. This remedy can enable you to prolong the storage battery life and reduce the number of standby power plant operations.


In this article, we've covered the key differences between flywheels and traditional storage batteries. Now you can make an educated decision about whether flywheels for data center backup power are an option for your data center.


Data Center Efficiency Metrics and Measurements

Green Grid Hones PUE Data Center Efficiency Metric

By: Mark Fontecchio

The Green Grid recently held its third annual Technical Forum in San Jose, Calif. The nonprofit energy-efficiency group will cover topics such as efficiency metrics, free cooling and reusing waste heat. SearchDataCenter.com talked with Christian Belady and Jon Haas about the forum. Belady is a Green Grid board member and treasurer and a principal infrastructure architect at Microsoft. Haas is the vice chair of the Green Grid's technical committee and director of eco-technologies at Intel Corp.

What are the main themes of the technical forum this year?

Christian Belady: We'll be talking about three main things. First, calculators and tools. There will be discussions and presentations around the power usage effectiveness [PUE] tool and an efficiency estimator. On the education front, we're kicking off a new course with our academy. And then alliances. What you've seen is there is much more around collaboration across organizations than a year ago. Among those is with ASHRAE and Data Center Pulse.

The Green Grid will present a PUE calculator. The U.S. Department of Energy has its DC Pro tool, and your show agenda also features a power efficiency estimator. Is that too many tools?

Jon Haas: I don't think there's ever a time when there are too many tools, though it can be confusing to pick which one to use. The DOE's is a high-level tool to give you places to start looking for savings. The power efficiency estimator is really to look at power topologies and allows you to do comparisons across different topologies and architectures so you make the right decisions.

Has the Green Grid made progress in developing a unified productivity metric [which measures overall productivity of a data center, not just its energy efficiency]? Is it possible?

Belady: If you look at the year forward, this is a challenge for the industry as a whole. Is it really just one metric, or is it multiple metrics? I think it's all up in the air. Part of the reason for this forum is to get this dialogue going. That is our challenge going forward: to drive a common set of productivity metrics or a metric.

What are partial PUEs?

Belady: The thought is that the industry is changing. Some vendors are taking fans out of servers. Some are selling container data centers. A containerized data center might have a data center PUE, and a container PUE. You can start dissecting the total PUE into these smaller pieces.

There will be talk of unused servers at the forum. Are data center operators ready to power down unused servers?

Belady: I think this is an issue that everyone is working toward, and ultimately the most sophisticated users are looking at it. It has a lot to do with asset management -- who is using what when -- and a good decommissioning process. The point of this study is there are huge savings if you can find a mechanism to actually shut down servers that are obsolete or beyond their useful life.

There will be a session on re-using waste heat?

Haas: We're going to address that with what we call a reuse factor for waste heat to allow it to be accounted for. It won't affect the PUE score unless you're somehow re-using the waste heat in the data center.


Measuring Data Center Energy Consumption in Watts per Logical Image

By: Lucian Lipinsky de Orlov

"You can't manage what you don't measure." This is especially true when it comes to

energy consumption. In automobiles, miles per gallon (MPG) very clearly identifies fuel

efficiency. In the data center environment, there are metrics that can be adopted to

measure computational power efficiency in a similar way that MPG measures automotive

fuel efficiency. The Green Grid has proposed the use of power usage effectiveness (PUE)

and data center efficiency (DCE), or data center infrastructure efficiency (DCIE), to

measure data center energy efficiency. But concerns about the effectiveness of PUE and

DCE as true measures of efficiency highlight the need for a better metric.

PUE is a ratio of the total power a data center facility consumes versus how much of that power is consumed by IT equipment. An average data center should measure 2.0, meaning that only half of all the power consumed is used by non-IT equipment. A data center that measures greater than 2.0 needs to investigate its power inefficiencies. The Environmental Protection Agency recently stated that it will use PUE as an Energy Star rating for data centers. The EPA's endorsement should rapidly expand adoption of PUE.
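PUE itself is a one-line ratio; the minimal sketch below computes it, along with the reciprocal DCIE figure, from made-up meter readings.

def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 1,000 kW at the utility feed, 500 kW reaching the IT gear.
total_kw, it_kw = 1000, 500
score = pue(total_kw, it_kw)
print(f"PUE  = {score:.2f}")        # 2.00 -- the 'average' figure cited above
print(f"DCIE = {1 / score:.0%}")    # 50% of incoming power reaches IT equipment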

But a data center manager could "game" the PUE metric by running inefficient servers and other IT equipment to raise the denominator. This is a major problem with using PUE and DCE as measurements of effectiveness and efficiency. They ignore how efficiently the IT equipment components are being operated.

Therefore, it is necessary that we find a new metric -- one that is similar to the simplicity and elegance of MPG. What is needed is a simple and true measure of server efficiency, just as MPG can be used to compare cars. Watts per logical image (WPLI) is just such a metric.

Watts per logical image

Similar to MPG, WPLI measures performance on a granular, or per-unit, level. By looking at the efficiency of each server, focus can be placed on managing inefficiency, just as MPG identifies the need to replace gas-guzzling SUVs and trucks with light, more fuel-efficient vehicles.

For example, a popular dual quad-core blade server chassis operating at 3,535 W and running a single, preinstalled operating system has a WPLI of 3,535. Yet running 24-32 virtual machines (3-4 production systems on each core, a realistic performance configuration for a virtual-to-host ratio) on the embedded hypervisor would result in a WPLI of 147 to 110.

Basically, in one scenario there is a server running at 3,535 W, and in the other scenario, the virtual servers run as efficiently as 110 W each. This example shows a power reduction of almost 98% per server.


This is why you need to measure power consumption at the logical image level, and this is why businesses must aggressively migrate to virtualized platforms.

WPLI equals the total individual server power consumption (plus related prorated chassis power requirements) divided by the number of operating system images running on that hardware. A composite WPLI can also be tracked by dividing all the power consumed by a data center's installed servers by the total number of operating system images running in a data center. This, however, only provides an average value and is not granular enough to make informed design decisions. It is a start and helps demonstrate that the IT organization is on the right track and is supporting overall enterprise sustainability activities of the corporation.
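The WPLI definition translates directly into code. The sketch below reproduces the blade-chassis example from earlier in the article (3,535 W and one image versus 24 or 32 virtual machines); the zero chassis proration is an assumption to keep the example simple.

def watts_per_logical_image(server_watts, prorated_chassis_watts, image_count):
    """WPLI: server power plus its prorated share of chassis power, per operating system image."""
    return (server_watts + prorated_chassis_watts) / image_count

# Blade chassis example from the article: 3,535 W measured at the chassis,
# so no additional chassis proration is added here (an assumption for simplicity).
for images in (1, 24, 32):
    wpli = watts_per_logical_image(3535, 0, images)
    print(f"{images:>2} logical image(s): WPLI = {wpli:,.0f} W")
# 1 image -> 3,535 W; 24 images -> about 147 W; 32 images -> about 110 W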

With all this information, a data center manager can now target the power-hungry environments and, where appropriate, the applications running on them for virtualization.

The blade and rack enclosure efficiency should also be considered when holistically architecting a more power-efficient data center. Not all racks or blade chassis handle power and cooling equally efficiently; they should be analyzed, and their placement and performance in the data center modeled and studied.

As we approach 2012, the date by which IDC estimates power costs will equal server capital costs, managing power consumption in the data center will become ever more critical.

By knowing the power consumption of a logical image, chargeback can accurately include the direct power consumption an application uses. This means that IT chargebacks can incorporate rising energy bills and start to increase cost transparency. An easy way to justify migrating to virtualization will be when the lines of business push IT to migrate its power-hungry, traditionally hosted applications to those with lower WPLI.

Using these metrics, data center managers will be able to craft stronger business cases and financial justifications to help gain approval and accelerate capital acquisition requests.

Very few people purchase cars these days without considering MPG performance. No server purchase should be made without an understanding of the expected watts per logical image that the box will ultimately deliver.

In Measuring Data Center Power Use, More (info) is More

By: Mark Fontecchio

Today, data center managers find that they can't improve what they can't see, and so they are looking to measure power downstream from a data center's utility meter.

According to SearchDataCenter.com's Data Center Decisions purchasing intentions survey, 84% of 670 respondents said that reducing data center power consumption was important. And yet according to the same report, 36% didn't know how their power bills compared with the previous year.

This lack of awareness, however, has started to change.

James Rohan, a mechanical engineer at Penn State University, said the school's data centers don't yet measure power consumption downstream from the utility meter, but now the school is installing a new electrical service that will allow such measurements.

The difference between power measured at the utility meter and power measured downstream is accounted for by losses, such as those from power conversions inside equipment, and by the energy it takes to cool the facility.

Large data center operators like Microsoft and Google have already said that measuring power consumption is key to energy efficiency. Last year, Microsoft demonstrated how it measured power downstream from the utility using a homegrown tool called Scry. It includes sensors around the data center that tie into the company's configuration management database (CMDB) and asset management software to give data center staff details on energy use and carbon footprint. The tool was detailed enough to report the power consumption of a specific external-facing Microsoft application such as Hotmail.

Measuring Energy Leakage: Catching Up with the Colos

But most data center operators aren't at this level yet. Duffield, Va.-based colocation company OnePartner currently has two power strips for each of the server cabinets it rents out. The strips provide power readouts without opening the cabinet door. But OnePartner includes power consumption with the cabinet cost -- where the power bill is split among all its customers -- and so it doesn't monitor energy use continuously.

But now some customers have asked for that capability because they want to understand the power envelope of their servers. So OnePartner will buy so-called intelligent power strips or power distribution units (PDUs), which can measure energy use at the rack or outlet level and report the readings back to connected software.

"It would mean swapping in the smart strips," said Tom Deaderick, the director of OnePartner. "I'm actually considering using it as our default package. That way, customers wouldn't pay more or have any switchover if they wanted to monitor power later on."

Some big colocation companies and cloud computing providers such as Microsoft, Google and Amazon have also been measuring power consumption downstream from the meter so that they can report their power usage effectiveness. PUE is a data center efficiency metric that compares the power coming into the data center facility with the power that the IT equipment consumes. The closer those two are to each other, the more efficient the facility.

But companies like OnePartner are mostly interested in giving customers the option to view their power consumption. And for others, it's just a matter of being able to keep track of what's in the facility.

Better energy metrics give IT a heads up on added/subtracted gear

Lance Kekel, the manager of data center operations at a large Midwest retailer, wants a power monitoring program that will alert him when IT staff adds or removes equipment.

"We have a change management process," he said. "Things aren't supposed to go into the racks or come out without my knowledge." But it happens. Right now, the only place Kekel's shop can measure power is at the uninterruptible power supplies (UPSes). Since there is no software that ties in, staff can only check it periodically and then calculate a rough average.

Kekel is working on it. His company bought some PDUs that can measure energy consumption at the rack or outlet level. Eleven of 13 server racks have been fitted with the new PDUs, and Kekel hopes to soon buy the accompanying management module from Hewlett-Packard Co. that will allow him to drill down in more detail.

"This would give me another tool to determine whether something goes in or out," he said.

Using Chargeback to Reduce Data Center Power Consumption: Five Steps

By: Lucian Lipinsky de Orlov

There's a not-so-hidden and poorly controlled expense in the data center these days -- power. It certainly receives a great deal of press, especially when the conversation turns to green IT. Unfortunately, it's mostly talk and very little action. This is a significant problem because in many situations, the cost of power (including cooling) may be the largest expense in the data center.

In general, consumption is only limited or restrained when it is associated with a factor such as cost, or when it is measured and managed. Recent rises and declines in the per-gallon price of automotive fuel are a great example of this. The best way to limit the use of a business consumable is to measure it and tie employee compensation to a related, objective metric.

Power is uniquely measured and managed. The data center consumes it, facilities provides it and finance pays for it. Very few organizations stray from this operational model and, not coincidentally, these three groups are not typically known for their close interaction with one another. The problem is that the three groups are generally unaware of one another's needs and of whether they share the same business goals or impacts. The result is disorganized management of enterprise power consumption. (Missing from these groups is the business user, for whom this power-consuming activity and cost takes place to begin with.)

The key objective is to include power as an IT service delivery cost component paid by the business user. This is challenging in and of itself when most organizations are struggling just to get hardware costs allocated. Part of the challenge is the limited breadth of most chargeback tools and the lack of reporting of actual power consumption.

Rising data center power costs: A problem with no end in sight

IDC estimates that power costs will equal server capital costs by 2012. Some companies have likely already reached this threshold. The high, and rising, cost of power means that as assets extend beyond their typical book value, power becomes the largest related expense.

This is a very critical issue. In 2007, IDC stated that 30% of the cost of a data center is spent on power, a figure that had doubled over the previous five years. Looking five years ahead, worldwide data center power and cooling costs are projected to grow at four times the rate of new server spending -- and Microsoft puts the rate of power growth at eight times.

And many data centers are falling short in addressing this issue with data- and fact-based solutions. Basic economics demonstrates that the current rate of spending growth cannot continue; organizations are throwing money out the window.

Measure consumption for better power management

With power quickly exceeding capital costs, one would think that chargeback tools would make this a key function. But this is clearly not the case. The savvy data center manager will install power-monitoring tools and equipment to start making fact-based decisions and to pass power costs on to the business user.

In its report to Congress, the EPA identifies data center power metering and chargeback systems among its policy recommendations and strategies for reducing data center power consumption.

To help measure server power consumption and efficiency, a defined metric needs to be established -- an MPG-like measurement that can be easily tracked. An effective metric for measuring server efficiency is watts per logical image (WPLI), which gives a solid, trackable measure of the power required to deliver an operating system environment.

One of the standard measurements data center managers should use is power usage effectiveness (PUE). This metric is the ratio of total data center power consumption to the power consumed by the IT equipment itself. In its report to Congress, the Environmental Protection Agency (EPA) found an average PUE of 2.0. In other words, for every watt consumed by IT equipment, another watt is required to support the surrounding infrastructure. This is unacceptable. Google, in a recent IEEE Spectrum story, claims to have a data center with a PUE of 1.15.

Without establishing consumption and efficiency baselines, IT managers cannot build credibility with business users by demonstrating the power reductions they have achieved to date.

While there aren't any silver bullets to solve the power problem -- or green ones for that matter -- there are a number of simple and logical steps that organizations can take today.

Power chargeback: Steps to success

Step 1a: Implement some type of chargeback process supported by data collection tools, even if it covers just a single technology, such as virtualization. Part of the chargeback challenge is that there aren't products that span all data center categories and operating environments. From this point forward, expanding the processes and tools to cover as much of data center operations as possible should be an ongoing initiative. A best practice is to establish a program management office (PMO) focused on delivering a comprehensive chargeback solution to the enterprise.

Step 1b: Initiate conversations and interaction among data center management, facilities management and finance. Create an energy governance board with participation from the three groups so they can better understand power requirements from an enterprise point of view. Calculate the energy costs for IT service delivery and start to allocate them among business users. This will be a challenge because in most cases it will at least double the IT charges where power was not previously included. Determine a power consumption baseline and start tracking monthly reductions.

Step 2: Identify and roll out appropriate power monitoring tools to more accurately measure the power consumption of specific data center products and components. Set performance targets so that data center and facilities management are jointly accountable for energy consumption. Closely align business user billing with actual consumption. Expand technical usage data to additional data center components.

Step 3: Collect, assimilate and consolidate disparate data sources and elements into a single database. Integrate procurement data (including utility costs) with equipment usage information (including power consumption) and determine the actual per-unit cost, which varies by item. Adjust the cost levels in all chargeback tools. In most cases, moving from projected or estimated costs to actual costs will reduce the rates charged to the business users.
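As an illustration of Step 3, the sketch below joins hypothetical monthly PDU readings with a blended utility rate to derive an actual cost per rack; the rack names, readings and rate are assumptions for the example, not references to any particular product or dataset.

```python
# Hypothetical monthly inputs: kWh per rack from PDU/monitoring exports,
# and the blended utility rate supplied by finance/procurement.
pdu_readings_kwh = {"rack-01": 4200.0, "rack-02": 3875.0, "rack-03": 5110.0}
blended_rate_per_kwh = 0.11   # assumed $/kWh, including demand charges

# Actual per-unit cost, ready to load into the chargeback tool.
actual_cost_per_rack = {
    rack: round(kwh * blended_rate_per_kwh, 2)
    for rack, kwh in pdu_readings_kwh.items()
}
print(actual_cost_per_rack)
# {'rack-01': 462.0, 'rack-02': 426.25, 'rack-03': 562.1}
```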

Step 4: Transition from component-based chargebacks to business services-based billing. The result is a charge that resonates with the business consumer. Experience shows that business users clearly understand what it means to pay for a specific business activity (such as performing a mark-to-market calculation on a portfolio in the financial services industry) rather than being presented with a bill for the CPU, network, storage, power and other technical components behind the same business processing.

Step 5: At this point, data collection should be in place and accountability established, so that consumers of IT services are requesting -- or demanding -- improved energy efficiency, which will lower service delivery costs. An even better position is for the data center and facilities teams to proactively improve power efficiency.

Of course, there are also some "hidden benefits" of including power in chargeback calculations. Many organizations face objections to moving production systems to virtualization platforms, even though such moves help defer new data center construction and dramatically reduce asset and operational costs. By charging business users for the actual cost of hosting applications on dedicated platforms -- power included -- the migration to virtual servers can become user-driven. Pressure to reduce the power costs passed through also pushes data center managers to cut waste through efficiency improvements. In many cases, these activities can defer or mitigate the need for considerable capital expenses.

The time for deciding whether to measure, monitor and track data center power consumption has passed. Because it is one of the most significant data center expenses, business users must be charged for power. Data center management, facilities management and finance must align and be measured on common objectives around power usage and consumption reduction. The best way to drive toward this end is to include business users by charging for power in the chargeback process.

The TPC Energy Specification: Energy Consumption vs. Performance and Costs

By: Mike Nikolaiev

Since 1988, the Transaction Processing Performance Council (TPC) has created benchmarks to measure the performance of full systems that execute transaction processing. The organization has developed nine benchmarks, each addressing distinct requirements of IT industry demands. The TPC Energy Specification is the next significant development in the TPC's work. Currently in the final stages of development, it is poised to become an essential tool for IT stakeholders to compare, choose and improve technologies.

Performance and price/performance metrics are key criteria in data center purchasing decisions, but the demands of today's corporate IT environment also make energy consumption one of the most important considerations. Energy efficiency has become one of the most significant factors in evaluating computing hardware. To address this shift in IT purchasers' priorities, the TPC is developing a new Energy Specification to enhance its widely used benchmark standards. The addition of energy consumption metrics to the TPC's current arsenal of price/performance and performance benchmarks will help buyers identify the energy efficiency of computing systems that meet their computational and budgetary requirements.

Demand for a new energy metric

The unprecedented growth in reliance on computers (and the Internet) to run the world's industries and governments has led to an explosion in server installations, both in size and number, as well as in the amount of energy required to operate and cool them. Energy consumption has increased exponentially over recent years -- a trend that will continue to accelerate into the future. This is evidenced by the EPA's "Report to Congress on Server and Data Center Energy Efficiency," in which data center energy consumption within the U.S. is projected to surpass 100 billion kWh by 2011, with an annual electricity cost of $7.4 billion.

The requirement to reduce energy costs and usage while satisfying the mounting demand for additional computing resources has become the greatest challenge for many IT organizations. Data center growth is constrained by hard limits on energy consumption due to facility issues, limitations of the power grid and/or policy decisions. Public awareness of data center energy consumption and its impact on the environment has influenced many companies to place a higher priority on choosing "greener" technologies to "do their part" to protect the environment.

Solving the data center energy consumption dilemma

What dilemma? Well, for one, doubling or tripling computing performance without increasing energy consumption. This is a common theme heard in corporations around the world -- "do more with less." More computing power with less energy consumption, less heat generated and less cost! The three most important criteria are all present in this dilemma: computing performance, energy consumption and price. Each of these forces pulls in a different direction, and as forces interact in physics, so too do they interact here. These competing forces are driving IT decision makers to deploy technologies that meet the new demands.

How will TPC energy benchmarks help?

Having a standalone measure of energy consumption without also incorporating performance and price is like providing only the miles-per-gallon rating of a vehicle. Without knowing the speed, size or price of the vehicle, it is unlikely that a good business decision can be made about its purchase. Additional data must be provided before a well-informed purchase can be made. Does the vehicle (computing system) provide the performance required? Will the vehicle (computing system) allow users to meet their requirements? Is the price acceptable relative to these other requirements? The TPC Energy metrics will add this dimension to computing systems' performance and price. Just as the TPC's price/performance metrics rank computing systems according to their cost per performance rate (e.g., TPC-C $/tpmC), the TPC Energy metrics will rank systems according to their energy consumption per performance rate, in the form of watts/performance rate (e.g., watts/KtpmC). A ranking of the Top Ten energy/performance systems will be available on the TPC website.
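To illustrate the shape of the proposed energy/performance metric, here is a small sketch that computes watts per thousand tpmC from a system's measured power draw and its TPC-C throughput; the figures are invented for illustration and are not published TPC results.

```python
def watts_per_ktpmc(avg_system_watts: float, tpmc: float) -> float:
    """Energy/performance in the form the article describes: watts per 1,000 tpmC."""
    return avg_system_watts / (tpmc / 1000.0)

# Two hypothetical systems with the same throughput but different power draw.
print(round(watts_per_ktpmc(5200, 250000), 2))   # 20.8 watts/KtpmC
print(round(watts_per_ktpmc(3900, 250000), 2))   # 15.6 watts/KtpmC -- more efficient
```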

The three most important criteria in IT purchases are performance, price and energy consumption. But today's complex IT environment demands that price and energy consumption be put in the perspective of performance. Reducing costs or energy consumption at the expense of performance is often unacceptable. The TPC's price/performance and energy/performance metrics address this concern, and customers increasingly require these metrics for IT purchasing decisions.

Buyers now demand an objective method of comparing all three factors to select equipment that best meets their changing requirements, and the TPC's Energy Specification is being carefully designed to address this need. Like the TPC Pricing Specification, the TPC Energy Specification is a supplement to existing TPC benchmark standards rather than a standalone measurement framework. This means that it is intended to be compatible with the TPC benchmark standards currently in use, including TPC-App, TPC-C, TPC-E and TPC-H. The result will be metrics that enable comparison of systems on all three axes -- price, performance and energy consumption.

RESOURCES FROM OUR SPONSOR

See ad page 3

• Power Monitoring for Modern Data Centers

• Switchgear Design Impacts the Reliability of Backup Power Systems

• Maintaining the long term reliability of critical power systems

About Schneider Electric: Schneider Electric delivers engineered solutions designed to increase safety, lower life cycle cost and maximize power system reliability. Whether you require a new data center installation, refurbishment, replacement, or recommendations for optimizing existing equipment, our nationwide network of qualified experts provides the expertise and accessibility necessary to deliver a complete solution specific to your needs.