
Energy-efficient & scalable data center infrastructure design



Seminar on 30 September 2010 in Wetzlar

Sustainability through UPI (Unified Physical Infrastructure)

Presentation by Stefan Fammler: Energy-efficient strategies in the data center


Lars-Hendrik Thom

Regional Technical Support Manager D-A-CH & Benelux


Stefan Fammler, Strategic Account Manager


Yesterday's Physical Infrastructure

Most businesses have maintained their various physical system infrastructures in silos: Computing, Communication, Security, Control, and Power.


The Current Trend in Infrastructure Systems

Demand for IP communications is leading to convergence of these systems. Convergence requires physical and logical system integration, which affects system performance.


The Physical Infrastructure Vision: Align, Merge, Optimize

www.panduit.com/upi


Infrastructure Risk Management

The complexities of convergence create risk in the physical layer.

"An IT risk incident has the potential to produce substantial business consequences that touch a wide range of stakeholders. In short, IT risk matters – now more than ever!"

IT Risk: "Turning Business Threats into Competitive Advantage", Harvard Business Press


Integration & Interdependence

Effective infrastructure management reduces risk throughout the architecture.

"As investment in integration technology increases, IT organisations will continue to evolve their enterprise-wide integration infrastructure to handle user interaction, business process, applications and data."

Colin White, BI Research


IT risk matters – now more than ever!

• Most businesses today already depend on solid IT, but redundancy concepts make the risk of losing data manageable.

• Tomorrow's IT infrastructure will carry even more: all systems in a building will use it, including systems that may have no way to get a backup…

• It makes sense to take a closer look at the "passive" components and to invest now in clever solutions.


Physical IT Infrastructure Design should be

• Energy-efficient
– using "intelligent" cooling systems
– supporting the most effective use of the cooling power

• Scalable and Reliable
– protecting the investment
– reacting quickly and easily to business and technology changes


...the journey from coal to the server

• 16 MW coal energy
• ~60% efficiency loss in generation, plus 5-10% loss for distribution and transformation
• ~5 MW usable energy arrives at the data center
• ~65% is consumed by cooling and power transformation
• 15-30% is lost in power supplies and devices
• ~0.3-0.5 MW is actually used for "computing"
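The cascade above is plain multiplication of the quoted loss figures; a minimal sketch in Python using the slide's rough percentages (the gap between the last computed stage and the slide's final ~0.3-0.5 MW "computing" figure covers device-internal overheads the slide does not itemize):

```python
# Chain the approximate loss figures quoted on this slide.

coal_mw = 16.0

generated = coal_mw * (1 - 0.60)   # ~60% efficiency loss in generation
usable = generated * (1 - 0.075)   # 5-10% distribution/transformation loss
print(f"usable at the DC:      ~{usable:.1f} MW (slide rounds to ~5 MW)")

it_feed = 5.0 * (1 - 0.65)         # ~65% goes to cooling and power transformation
print(f"reaching IT equipment: ~{it_feed:.2f} MW")

lo, hi = it_feed * (1 - 0.30), it_feed * (1 - 0.15)  # 15-30% PSU/device losses
print(f"after PSU losses:      ~{lo:.1f}-{hi:.1f} MW")
```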


Increasing Heat Load

• 2002: Intel Pentium 3, 1 GHz – max. power consumption 26 W
• 2004: Intel Pentium 4, 3 GHz – max. power consumption 83 W
• 2007: Intel Xeon, 3.6 GHz – max. power consumption 130 W

More speed and higher performance traditionally result in higher energy consumption...


How to fight temperature rises…?

• Decrease the air temperature at the cold aisle to better cool the components
–> higher energy consumption of the CRAC

• Increase the pressure of the cold air inside the raised floor, or speed up the cold airflow
–> higher energy consumption of the CRAC

• Invest in additional CRAC units
–> expensive investment AND additional energy consumption

CRAC = Computer Room Air Conditioner
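The second option is especially costly because of the fan affinity laws: airflow scales linearly with fan speed, but fan power scales with roughly its cube. A minimal sketch (the 30 kW base fan power is an assumed illustrative value, not a figure from the slides):

```python
# Fan affinity laws: airflow ~ n, pressure ~ n^2, power ~ n^3 (n = fan speed).
# A modest airflow increase is therefore disproportionately expensive.

base_fan_kw = 30.0   # assumed CRAC fan power at nominal speed

for n in (1.0, 1.1, 1.2, 1.5):
    print(f"airflow x{n:.1f} -> fan power {base_fan_kw * n**3:5.1f} kW (+{n**3 - 1:.0%})")
```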


The better way to pay the bill...

• Total cost of ownership of a DC: energy is ~40%, i.e. ~3 million € energy costs (source: Uptime Institute)

• Energy costs: cooling is ~40%, i.e. ~1.2 million € cooling energy costs

• Cooling efficiency: only ~40% of the cooling airflow is actually used for cooling; ~60% is bypass airflow – ~720,000 € loss per year!

Increasing the cooling efficiency could easily save several 100,000 € per year.

bypass airflow: "short cuts" between the cold and hot regions of a DC that do not cool any active components
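The pie charts above chain together as plain multiplication; a minimal sketch using the slide's figures:

```python
# Cost cascade from the slide: energy bill -> cooling share -> bypass loss.

energy_cost = 3_000_000   # € per year, ~40% of the DC's total cost of ownership
cooling_share = 0.40      # share of the energy bill spent on cooling
bypass_share = 0.60       # share of the cooling airflow lost as bypass airflow

cooling_cost = energy_cost * cooling_share   # 1,200,000 € per year
bypass_loss = cooling_cost * bypass_share    #   720,000 € per year, wasted

print(f"cooling energy: {cooling_cost:,.0f} € / year")
print(f"bypass loss:    {bypass_loss:,.0f} € / year")
```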


Energy-efficient design

1) Clear and consistent hot/cold aisle setup

• Orientation of cabinets front to front, with no exceptions
• Single-height cabinets and closed rows of cabinets to clearly separate hot and cold aisles
• Keep space for additional cooling options (if necessary later)

Source: Emerson Liebert


Closed rows of cabinets

• Vertical blanking panels at the outside of the 19" frames
• Easy-to-install horizontal blanking panels
• Transformable cabinets that can stay in place if the usage changes (e.g. network, SAN, server, etc.)


Keep space for additional cooling…

• If the heat load exceeds the cooling capacity, react – even if the DC needs to stay under full operation – with the installation of, for example:
– a cold aisle containment system
– a hot aisle containment system
– dedicated water-based heat exchangers


Cold Aisle Containment

• Separates the hot air from the cold aisle and makes sure there is no mixing above the cabinets or at the ends of the rows

• Some recommendations:
– Easy to install on cabinets in operation
– Wide-opening, self-closing doors
– Easy-to-open transparent top panels for access to the cable paths above the cabinets
– No loose parts, and stable enough to handle the air pressure inside the cold aisle


Hot Aisle Containment

• Separates the hot air from the cold aisle by channeling the exhaust air from the components directly back to the CRAC unit.

• No mixing of cold and hot air allows higher cold aisle temperatures, up to about 25°C (depending on the component requirements)

• Better operational efficiency of the CRAC unit due to the higher temperature difference between cold and hot air
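The efficiency gain can be made concrete with the sensible-heat relation Q = m_dot * c_p * dT: for the same heat load, a larger cold/hot temperature difference means the CRAC has to move less air. A minimal sketch (the 10 kW heat load and the dT values are assumed illustrative numbers):

```python
# Sensible heat: Q = m_dot * c_p * dT, so required airflow m_dot = Q / (c_p * dT).

Q = 10_000                   # assumed heat load, W (10 kW)
cp_air, rho_air = 1006, 1.2  # J/(kg*K) and kg/m^3 for air

for dT in (8, 12, 16):       # cold/hot temperature difference, K
    m_dot = Q / (cp_air * dT)        # kg/s of air to move
    v_dot = m_dot / rho_air * 3600   # m^3/h
    print(f"dT = {dT:2d} K -> {m_dot:.2f} kg/s (~{v_dot:,.0f} m^3/h)")
```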


Partial Hot Aisle Containment

• If the room has enough height, simply adding a vertical exhaust channel results in better cooling performance – an ideal quick fix if the heat load is exceeded at one single cabinet, e.g. at the end of a row

How does it work?
The hot air is blown out higher into the room, so the resulting sphere of hot air stays further away from the cold air sphere. This reduces the mixing of cold and hot air.


Additional cooling with a passive heat exchanger

• If "water in the DC" is not an issue for you, prepare water pipes inside the raised floor in areas you plan to use for high-performance components (blade servers etc.)

• Easy "on demand" installation of a passive water-based heat exchanger on the rear side of a cabinet, connected to the low-pressure chilled water circuit of the CRAC unit

• Up to 20 kW additional cooling power at the single cabinet (about 25-30 kW in total)

• No moving parts, which reduces maintenance costs and additional energy consumption

• Should be easy to install while the cabinet is in full operation
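For scale, the same sensible-heat relation shows why water is so effective: carrying 20 kW takes only a small water flow. A minimal sketch (the 6 K supply/return difference is an assumed typical value for a chilled water circuit):

```python
# Water flow needed to carry the slide's 20 kW at an assumed 6 K water delta-T.

Q = 20_000        # heat to remove, W
cp_water = 4186   # J/(kg*K)
dT = 6            # assumed supply/return temperature difference, K

m_dot = Q / (cp_water * dT)   # ~0.8 kg/s
print(f"water flow: {m_dot:.2f} kg/s (~{m_dot * 3.6:.1f} m^3/h)")
```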


Energy-efficient design

2) Optimize the airflow within the DC (for new and existing ones)

• Use solutions to properly route the airflow inside the cabinets and racks
• Eliminate any airflow blockages
• (Re-)calculate the number and the opening ratio of the perforated tiles to the necessary amount
• Close all other openings to keep the air pressure inside the raised floor
• Limit the length of cabinet rows to get enough cold air even to the last cabinet (or, if air comes from 2 sides, to the middle)

Source: Emerson Liebert


From "right -> left" to "front -> back"

• Exhaust ducts route the hot air of right-to-left-blowing switches to the rear of a cabinet

(Images: NET-ACCESS™ cabinet with exhaust duct; cabinet without exhaust duct)


Rear-side mounted ToR switches

• Use cold air inlet ducts to provide ToR switches with enough cold air at their inlets

(Diagram: front closed with blanking panels; without a duct, the hot exhaust air of the servers flows even to the cold air inlets of the top-of-rack switch)

(Charts: switch inlet air temperature in °C vs. fan speed at 60%, 75% and 100%, measured without an inlet duct and with a cold air inlet duct)


Different inlet air duct samples

• Front-to-back airflow at a Cisco Nexus 2k: air duct extension to the front of the cabinet
• Air inlets at the side of a Cisco 4948
• Inlet and exhaust air ducts for Cisco's Nexus 7018


Eliminate airflow blockages

• Airflow blockages occur:
– inside the raised floor, due to pipes and cable mess
– behind servers, due to flexible cable arms or simply cable mess
– at the front of the components, due to many connected patch cords

-> the main reason for airflow blockage is often weak cable management


Design clear paths inside the raised floor

• The raised floor should be high enough to provide the necessary cold air pressure everywhere at the same level (recommended 60-90 cm).

• If cables are routed inside the raised floor, they should run on dedicated pathways that allow airflow above and below.

• Under-floor cable pathway systems should be easy to install during operation (with minimized opening of tiles) to react to upcoming business demands for additional connections


Overhead Cable Routing Systems

• To minimize the risk of airflow blockage inside the raised floor, use the space above the cabinets for the majority of cables – if possible.
• Physically separate the more sensitive fiber patch cords from the copper cabling
• Look for a system that provides continuous bend radius control – for copper too!
• Easy access encourages later use and minimizes "workarounds"


Cable Management vs. Air Exhaust

• Even clean-looking cable management can cause heat problems at server cabinets.

• Flexible cable arms, used to slide a server out of the rack during operation, are becoming more and more useless due to growing server virtualisation:
– Virtual machines can be moved from one physical machine to another
– It is no longer necessary to slide a physical server out during operation to change or repair parts inside the chassis.


Cable Management at RU level

• Vertical cable management, supported by a finger system at the side of the 19" frame, allows cables and overlengths to be routed properly away from the back side of the server –> minimized blocking of the air exhaust

• Side-mounted patch panels also allow the use of equal-length patch cords –> less stock.

• Look for dedicated cable pathways inside the server cabinets to separate cables by function (LAN A / LAN B / OBM / SAN). This minimizes cable mess and makes moves and changes easier.


Cable Management at the front

• Replace horizontal cable managers with vertical cable management.

• Fingers for each RU at the left and right side allow overlength to be handled in the open space beside the 19" frame

• Look for angled panels, which also guide people to route patch cords to both sides instead of across the middle of the panel

-> minimized cable mess, overlength kept at the side
-> easy operation, easy MACs
-> no airflow blocking at the cold air inlets


Energy-efficient design

2) Optimize the airflow within the DC

• Route airflow front to back inside the cabinets
• Eliminate any airflow blockages
• Calculate the number and the opening ratio of the perforated tiles to the necessary amount
• Close all other openings to keep the air pressure inside the raised floor
• Limit the length of cabinet rows to get enough cold air even to the last cabinet (or, if air comes from 2 sides, to the middle)

Source: Emerson Liebert


Perforated tiles

• A typical DC provides the cold air through the raised floor. To bring the cold air to the front of the cabinets and to the inlet openings of the components, perforated tiles are positioned in front of the cabinets.

• The correct number and position should be calculated to compensate only the expected heat load per cabinet/row (see the sketch below).

• Too many openings result in a loss of cold air pressure inside the raised floor. To compensate for this loss you need to increase the pressure at the CRAC units –> increased energy costs.
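A rough way to size the tile count is to convert the per-cabinet heat load into required cold airflow (again via Q = m_dot * c_p * dT) and divide by what one tile delivers. A minimal sketch; the 900 m³/h per-tile airflow, the 6 kW load and the 12 K dT are assumed illustrative values that depend heavily on underfloor pressure and tile type:

```python
import math

# Estimate perforated tiles per cabinet from its expected heat load.

heat_kw = 6.0                 # assumed heat load of one cabinet, kW
dT = 12.0                     # assumed air temperature rise across the servers, K
cp_air, rho_air = 1006, 1.2   # J/(kg*K) and kg/m^3 for air

airflow_m3h = heat_kw * 1000 / (cp_air * dT) / rho_air * 3600  # required cold air
tile_m3h = 900.0              # assumed delivery of one tile at typical pressure

tiles = math.ceil(airflow_m3h / tile_m3h)
print(f"required airflow: ~{airflow_m3h:,.0f} m^3/h -> {tiles} perforated tile(s)")
```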


Close all other openings…

• As mentioned, each open hole in the raised floor reduces the air pressure, and you need to compensate for it with higher energy consumption of the CRAC unit.

• Close all unwanted holes, especially at cable feed-throughs. The more airtight the better!
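How much cold air a single unsealed opening wastes can be estimated with the orifice equation V_dot = C_d * A * sqrt(2 * dp / rho). A minimal sketch; the opening size, discharge coefficient, and underfloor pressure are assumed illustrative values:

```python
from math import sqrt

# Orifice flow through one unsealed cable cutout: V_dot = Cd * A * sqrt(2*dp/rho).

Cd = 0.6          # discharge coefficient of a sharp-edged opening
A = 0.20 * 0.10   # opening area, m^2 (assumed 20 cm x 10 cm cutout)
dp = 20.0         # assumed underfloor air pressure, Pa
rho = 1.2         # air density, kg/m^3

v_dot = Cd * A * sqrt(2 * dp / rho) * 3600   # leakage in m^3/h
print(f"cold air leaking through one open cutout: ~{v_dot:,.0f} m^3/h")
```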


Closed but still easy to operate...?

• Brush systems for closing cable openings are very popular because they are easy to install and allow fast and easy adding or removal of cables. But brush locks still let air flow through the opening and are not able to keep the air pressure.

• An alternative solution is closed air bags, which can also be installed around existing cable bundles; they close the hole AND keep the air pressure! MACs are almost as easy as with brush locks.


The length of a row...

• The length of a row of cabinets depends on the capacity and position of the CRAC units

• The longer the row, the better the airflow control needs to be to provide enough cold air at every RU.


Summary

An optimized airflow and an energy-efficient design create more options and headroom for the specific data center infrastructure.

• Energy efficiency and airflow control help to save energy and to reduce costs.


Thank you very much for your attention!

More detailed information at www.panduit.com

Do you have any questions?