
Page 1: Green Data Center Program

Alan Crosswell

03/18/09

Page 2: Agenda

• The CUIT Data Center
• Other Data Centers around Columbia
• Current and Developing Data Center Best Practices
• Future State Goals
• Our NYSERDA Advanced Concepts Datacenter proposal
• Next Steps

Page 3: CUIT Data Center

• Architectural
  – Built in 1963, updated somewhat in the 1980s.
  – 4,100 sf raised-floor machine room space.
  – 1,750 sf raised-floor space, now offices for GSB IT.
  – 12" raised floor.
  – Adequate support spaces nearby:
    • Staff
    • Staging
    • Storage
    • Mechanical & fire suppression
    • (future) UPS battery room

Page 4: CUIT Data Center

• Electrical
  – Supply: 3-phase 208V from an automatic transfer switch.
  – Distribution: 208V to wall-mounted panels; 120V to most servers.
  – No central UPS; lots of rack-mounted units.
  – Generator: 1,750 kW, shared with other users and at capacity.
  – No metering. (Spot readings every decade or so. :-)
  – IT demand load tripled from 2001 to 2008.

Page 5: CUIT Data Center

Page 6: CUIT Data Center

• Mechanical
  – On-floor CRAC units served by campus chilled water.
  – Also served by backup glycol dry coolers.
  – Supplements a central overhead chilled-air system.
  – Heat load is shared between the overhead system and the CRAC units.
  – No hot/cold aisles.
  – Rows are in various orientations.
  – Due to the tripling of the demand load, the backup (generator-powered) CRAC units lack sufficient capacity.

Page 7: CUIT Data Center

• IT systems
  – A mix of mostly administrative (non-research) systems.
  – Most servers have dual-corded 120V power input.
  – Many ancient servers.
  – Due to the lack of a room UPS, each rack has UPSes taking up 30-40% of the space.
  – Lots of spaghetti cabling in the racks and under the floor.

Page 8: CUIT Data Center

• Power Usage Effectiveness (PUE) = 3.78. This is a SWAG! (See the arithmetic sketch below.)
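
For reference, PUE is the ratio of total facility power to IT equipment power, so a PUE of 3.78 means roughly 2.78 watts of cooling, UPS, and distribution overhead for every watt delivered to the servers. A minimal sketch of the arithmetic, using made-up illustrative readings rather than actual CUIT meter data:

    # Illustrative PUE calculation (numbers are hypothetical, not CUIT readings)
    it_load_kw = 200.0          # power delivered to IT equipment
    total_facility_kw = 756.0   # utility feed: IT + cooling + UPS/distribution losses + lighting
    pue = total_facility_kw / it_load_kw
    print(f"PUE = {pue:.2f}")   # -> PUE = 3.78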

Page 9: LBNL Average PUE for 12 Data Centers

• Power Usage Effectiveness (PUE) = 2.17

Page 10: Other Data Centers around Columbia

• Many school, departmental & research server rooms all over the place.
  – Range from huge (7,000 sf C2B2) to tiny (2-3 servers in a closet).
  – Several mid-sized (Physics, Biology, CS, APAM, CUIT backup DC, OAD, CUF, CUMCIT, Lamont, Nevis, etc.)
• Most lack electrical or HVAC backup.
• Many could be better used as academic space (labs, offices, classrooms).
• Growth in research HPC is putting increasing pressure on these server rooms.
• Lots of money is wasted building new server rooms for HPC clusters that are part of faculty startup packages, etc.

Page 11: Making the server slice bigger, the pie smaller, and green

• Reduce the PUE ratio by improving electrical & mechanical efficiency.
  – Google claims a PUE of 1.2.
• Consolidate data centers (server rooms).
  – Claimed to be more efficient when larger (prove it!).
  – Free up valuable space for wet labs, offices, classrooms.
• Reduce the overall IT load through:
  – Server efficiency (newer, more efficient hardware).
  – Server consolidation & sharing:
    • Virtualization
    • Shared research clusters
• Move servers to a zero-carbon data center.

Page 12: Data center electrical best practices

• 95% efficient 480V room UPS
  – Basement UPS battery room
  – vs. wasting 40% of rack space on rack-mounted UPSes
• 480V distribution to PDUs at the ends of rack rows
  – Transformed to 208/120V at the PDU
  – Reduces copper needed and transmission losses
• 208V power to servers vs. 120V
  – More efficient (how much?)
• Variable Frequency Drives (VFDs) for cooling fans and pumps
  – Motor power consumption increases as the cube of the speed (see the fan-law sketch below).
• Generator backup
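
The cube relationship comes from the standard fan/pump affinity laws: power scales roughly with the cube of shaft speed, so even a modest speed reduction from a VFD saves a large fraction of motor power. A minimal sketch of the idealized arithmetic (generic fan laws, not measurements from the CUIT plant):

    # Fan affinity law sketch: power ~ speed^3 (idealized; ignores motor/VFD losses)
    rated_power_kw = 10.0            # hypothetical fan motor at full speed
    for speed_fraction in (1.0, 0.8, 0.5):
        power_kw = rated_power_kw * speed_fraction ** 3
        print(f"{speed_fraction:.0%} speed -> {power_kw:.1f} kW "
              f"({power_kw / rated_power_kw:.0%} of full-speed power)")
    # 80% speed needs only ~51% of full power; 50% speed needs ~13%.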

Page 13: Data center mechanical best practices

• Air flow: reduce mixing, increase delta-T (see the airflow sketch below)
  – Hot/cold or double hot-aisle separation
  – 24-36" under-floor plenum
  – Plug up leaks in the floor
  – Increase temperature
  – Perform CFD modeling
• Alternative cooling technique: in-row or in-rack cooling
  – Reduces or eliminates hot/cold air mixing
  – More efficient transfer of heat (how much?)
  – Supports much higher power density
  – Water-cooled servers are making a comeback
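
Why increasing delta-T matters: for a given heat load, the cooling airflow required is inversely proportional to the supply/return temperature difference (the standard sensible-heat approximation for air). A minimal sketch with hypothetical numbers, not CUIT measurements:

    # Airflow vs. delta-T sketch (standard sensible-heat approximation for air)
    heat_load_btu_hr = 340_000          # hypothetical ~100 kW IT heat load
    for delta_t_f in (10, 15, 20):
        cfm = heat_load_btu_hr / (1.08 * delta_t_f)   # BTU/hr = 1.08 * CFM * delta-T(F)
        print(f"delta-T {delta_t_f} F -> {cfm:,.0f} CFM of cooling air")
    # Doubling delta-T from 10 F to 20 F halves the airflow the CRAC fans must move.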

Page 14: Data center green power best practices

• Locate the data center near a renewable source
  – Hydroelectric power in Canada
  – Wind power, but most wind farms lack transmission capacity
• 40% of power is lost in transmission, so bring the servers to the power.
  – Leverages our international high-speed data networks
• Use "free cooling" (outside air)
  – Stanford proposal will free-cool almost all the time
• Implement "follow the Sun" data centers
  – Move the compute load to wherever the greenest power is currently available.

Page 15: Energy Saving Best Practices

• Efficient lighting, HVAC, windows, appliances, etc.
  – LBNL and other nations' 1W standby power proposals
• Behavior modification
  – Turn off the lights!
  – Enable power-saving options on computers
  – Social experiment in Watt Hall
• Co-generation
  – Waste heat is recycled to generate energy
  – Planned for the Manhattanville campus
  – Possibly for the Morningside campus
• Columbia participation in PlaNYC

Page 16: Barriers to implementing best practices

• Capital costs
• Short-term and parochial thinking
• Saving electricity is not well incentivized, as nobody is billed for their electrical use.
• Distance
  – Synchronous writes for data replication are limited to about 30 miles.
  – Bandwidth*delay product impact on transmission of large amounts of data (see the sketch below).
  – Reliability concerns
  – Server hugging
  – Staffing needs
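
For context on the bandwidth*delay product point: the amount of data "in flight" on a long path is bandwidth times round-trip time, so a single TCP stream needs window and buffer sizes on that order to fill the pipe to a distant data center. A minimal sketch with hypothetical numbers, not a measured Columbia path:

    # Bandwidth-delay product sketch (hypothetical link, not a measured path)
    bandwidth_bps = 10e9        # 10 Gb/s research network link
    rtt_s = 0.070               # 70 ms round trip to a distant green-power site
    bdp_bytes = bandwidth_bps * rtt_s / 8
    print(f"BDP = {bdp_bytes / 1e6:.0f} MB of data in flight")   # ~88 MB
    # A single TCP stream needs roughly this much window/buffer to fill the link.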

Page 17: Key Recommendations from the Bruns-Pak 2008 Study

• Allocate currently unused spaces for storage, UPS, etc.
• Consolidate racks to recapture floor space
• Generally improve redundancy of electrical & HVAC systems
• Upgrade electrical systems
  – 750 kVA UPS module
  – New 480V 1,500 kVA service
  – Generator improvements
• Upgrade HVAC systems
  – 200-ton cooling plant
  – VFD pumps & fans
  – Advanced control system

Page 18: Future State Goals – Next 5 years

• Begin phased upgrades of the Data Center to improve power and space efficiency. Overall cost ~$20M.
• Consolidate and replace pizza-box servers with blades (& virtualization).
• Consolidate and simplify storage systems.
• Accommodate growing demand for HPC research clusters.
  – Share clusters among researchers to be more efficient.
• Accommodate server needs of the new Interdisciplinary Science Building.
• Develop internal cloud services.
• Explore external cloud services.
  – Stanford is giving Amazon EC2 credits for faculty startup packages.

Page 19: Future State Goals – Next 5-10 years

• Build a new data center of 10,000-15,000 sf
  – Perhaps cooperatively with others
  – Not necessarily in NYC
  – Possibly in Manhattanville
• Consolidate many small server rooms.
• Significant use of green-energy cloud computing resources.

(From www.jiminypeak.com)

Page 20: Our NYSERDA Grant

• New York State Energy Research & Development Authority Program Opportunity Notice 1206

• ~$1.2M ($447K from NYSERDA awarded pending contract)

• Improve space & power efficiency of primarily administrative servers.

• Contribute to Columbia's PlaNYC carbon footprint reduction goal.

• Make room for shared research computing in the existing data center.

• Measure and test vendor claims of energy efficiency improvements.

Page 21: Our NYSERDA Grant – Specific Tasks

• Identify 30 old servers to replace.
• Instrument server power consumption and data center heat load, in "real time" with SNMP.
• Establish a PUE profile (using the DoE DC Pro survey tool).
• Implement 9 racks of high-density cooling (in-row/rack).
• Implement a proper UPS and high-voltage distribution.
• Compare old & new research clusters' power consumption for the same workload.
• Implement advanced server power management and measure the improvements.
• Review with internal, external, and research faculty advisory groups.
• Communicate results.

Page 22: Measuring Power Consumption

• Use SNMP, which enables comparison with other metrics.

Page 23: Measuring Power Consumption

• Can measure power use with SNMP (see the polling sketch below) at:
  – Main electrical feeder, panels, subpanels, circuits
  – UPS
  – Power strips
  – Some servers
  – HP c7000 blade chassis (its SNMP MIB includes power supply load, cooling fan airflow, etc.)
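
As an illustration of what "real time" SNMP polling might look like, here is a minimal sketch that shells out to the net-snmp snmpget command once a minute. The host name, community string, and OID are placeholders for illustration only, not the actual CUIT instrumentation:

    # Minimal SNMP power-polling sketch using the net-snmp "snmpget" CLI.
    # Host, community string, and OID are placeholders, not real CUIT devices.
    import subprocess, time

    HOST = "pdu.example.columbia.edu"        # hypothetical SNMP-capable PDU
    COMMUNITY = "public"
    POWER_OID = "1.3.6.1.4.1.99999.1.1.0"    # placeholder OID for a watts reading

    def read_watts():
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, POWER_OID],
            text=True)
        return float(out.strip())            # assumes the agent returns a numeric value

    while True:
        print(time.strftime("%H:%M:%S"), read_watts(), "W")
        time.sleep(60)                       # poll once a minute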

Page 24: Measuring Heat Rejection

• Data Center chilled water goes through a plate heat exchanger to the campus chilled water loop.
• Measure the amount of heat rejected to the campus loop (see the arithmetic sketch below) with SNMP-instrumented:
  – BTU meter &
  – Flow meter
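
For reference, the heat carried by the loop follows from the flow rate and the temperature difference across the heat exchanger, which is essentially what a BTU meter computes from its flow and temperature sensors. A minimal sketch of the arithmetic with hypothetical readings, not CUIT meter data:

    # Heat-rejection arithmetic sketch (hypothetical readings, not CUIT meter data)
    flow_gpm = 300.0          # chilled-water flow, gallons per minute
    delta_t_f = 10.0          # return minus supply temperature, degrees F
    btu_per_hr = 500 * flow_gpm * delta_t_f   # standard approximation for water
    tons = btu_per_hr / 12000                 # 1 ton of cooling = 12,000 BTU/hr
    kw_thermal = btu_per_hr / 3412            # 1 kW = 3,412 BTU/hr
    print(f"{btu_per_hr:,.0f} BTU/hr = {tons:.0f} tons = {kw_thermal:.0f} kW of heat")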

Page 25: Next Steps Beyond the NYSERDA Grant

• Develop a "shovel ready" plan to continue moving forward.
  – Includes a server & storage replacement strategy
• Identify funding needs and opportunities
  – ISB servers
  – Manhattanville Phase 1 servers
  – Shared research cluster expansion
  – Possible grants:
    • NIH Extramural Research Facilities and Improvement Program ($1-10M)
    • DoE Information and Communication Facility Energy Efficiency (to be announced this month)
    • Additional forthcoming stimulus (ARRA) grants

Page 26: Thanks…

• CU Facilities: Steve Poller, Frank Mastromauro, Dave Carlson, Dominick Chirico, Clem Olivio, Frank Martin, Wil Elmes, Matt Early, Dave Forbes, Geoff Wiener, Joe Ienuso, Joe Mannino, Fran Fitzgerald, Frances Huppert, David Greenberg

• Office of the EVP for Research: Greg Culler, Victoria Hamilton, David Hirsh, Marie Tracy, Patty Valencia-Ferguson, Mario Reyes

• Bruns-Pak, Dell, HP, IBM
• NYSERNet: Bob Bloom, Tim Lance
• NYSERDA: Joe Borowiec
• CUIT/SAS: Halayn Hescock, Rich Hall, Victor Warren, Ryan Abrecea, Tony Cirillo, all of Systems & Network Engineering, Candy Fleming, Donna Sadlon, Jeff Scott, John Milnes
• External visiting committee: Marilyn McMillan (NYU), Vace Kundakci (CCNY), Lauri Kerr (NYC), Tim Lance (NYSERNet)
• Internal advisory group: Wil Elmes, Art Langer, Nilda Mesa, Scott Norum, Len Peters
• Research faculty user group: Liam Paninski, Lei Cong, Greg Bryan, Kathryn Johnston, Mary Putman

This work is supported in part by the New York State Energy Research and Development Authority.
