Supporting The Cloud

In the first of a regular column being written for DCS, Carlos “DataCenterCarl” Garcia de la Noceda, Deerns Global Data Center Practice Manager for the USA and the Americas, looks at designing a proper mechanical and electrical infrastructure to support a Cloud-based Data Center.

In the beginning…
There have been many advances in the IT and telecommunications field in recent years that have changed the way we manage our technology infrastructures. Perhaps one of the most important for the data center is the orderly distribution of computing tasks across a common set of computing resources that are not necessarily collocated in a particular room, building, or even geographical location. Better known today as “Cloud Computing”, the name is a reference to the term so widely used for the collective of internet connections and traffic: “The Cloud”.

The idea of distributed computing is of course not new; it is the way computing worked in the early days. Before the advent of the Personal Computer, programs and data were stored on central computers, mostly mainframes, which were accessed via mostly “dumb” terminals such as the VT-100, a moniker still in use today whenever we drop to the “dot prompt”. That was followed by the hosting of our basic IT infrastructure and software on our company’s servers, with additional distributed software engines, programs, databases, and services hosted on third-party servers. Of course, that was just the beginning, and it had yet to develop into the Cloud Computing model of today.

The Cloud…
The idea of Cloud Computing, using the “collective power” of underutilized computing resources and spare CPU capacity to squeeze more out of PCs and servers, is not new. Those who have been in this field for some time will remember the SETI¹ project. To overcome the challenge of getting supercomputer time for their calculations, the SETI team came up with a clever, and now evidently visionary, solution. They created a small program that ran in the background of a volunteer’s PC whenever the machine was not in use. When the user connected to the Internet, mostly through dial-up connections at the time, the program downloaded mathematical routines, pieces of a much larger job, which used the volunteer’s spare CPU capacity (back in the 1990s that was not much!) to perform calculations. Once these were done, and the next time the volunteer connected to the Net, the results were sent back to SETI’s systems.
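The pattern is simple enough to sketch in a few lines. The sketch below is purely illustrative; the function names, work-unit format, and idle check are invented stand-ins, not the actual SETI client:

```python
import time

# Illustrative volunteer-computing loop: every name below is a stand-in,
# not the real SETI client. The shape of the cycle is what matters:
# download a slice of a larger job, crunch it with spare CPU cycles,
# and upload the result the next time the machine is online.

def fetch_work_unit():
    """Stand-in for downloading one piece of a much larger computation."""
    return list(range(1, 1_000_001))

def crunch(work_unit):
    """Stand-in for the background number-crunching."""
    return sum(x * x for x in work_unit)

def pc_is_idle():
    """Real clients only ran when the user was away (e.g. as a screensaver)."""
    return True

def upload_result(result):
    """Stand-in for sending results back over the next connection."""
    print("result uploaded:", result)

work = fetch_work_unit()      # fetched while connected, often over dial-up
while not pc_is_idle():
    time.sleep(60)            # wait for spare CPU time
answer = crunch(work)         # done offline, in the background
upload_result(answer)         # returned on the next connection
```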

Today’s Cloud Computing has taken this to the next level, with ever-increasing computing power, memory resources, and permanent broadband connections. In the Cloud Data Center, servers are no longer dedicated slaves to a single application, database, or group of users. A Cloud server can be almost idle in the morning and overtaxed in the evening. Global companies are adopting a “follow the sun” strategy with their resources to ensure the customer base always has the resources “closest to them”, transparently and with minimum latency.

This strategy also allows the best use of human resources, with full staffing in the daytime and minimal staffing at night. This wide range of server use does pose a challenge, particularly to the Data Center infrastructure that supports it. The need for IT power and cooling varies in proportion to that load, and most Data Centers today are not equipped to support it adequately.

¹ SETI (Search for Extra Terrestrial Intelligence) Institute (www.seti.org)

Traditional Data Centers are spaces housing dedicated-use computer equipment: servers, storage devices, telecommunications equipment, and the like. These are generally assigned to a specific task, application, or user group. They sit in a fixed location, and their power and cooling requirements are mostly static, changing little, if at all, over time. The Data Center mechanical and electrical infrastructure supporting these systems can therefore be dimensioned in a mostly pre-planned way. There is no need to constantly re-evaluate whether there will be enough IT power and cooling to support the equipment’s needs.
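As a rough illustration of that pre-planned sizing exercise, a simple back-of-the-envelope calculation is usually enough; the rack count, per-rack load, and margin below are hypothetical figures, not a recommendation:

```python
# Hypothetical static sizing for a traditional data hall.
RACKS = 100              # assumed number of racks
KW_PER_RACK = 4.0        # assumed average IT load per rack, in kW
DESIGN_MARGIN = 1.2      # assumed 20% headroom for growth

it_load_kw = RACKS * KW_PER_RACK * DESIGN_MARGIN
cooling_tons = it_load_kw / 3.517        # 1 ton of refrigeration ~ 3.517 kW

print(f"Design IT load : {it_load_kw:.0f} kW")
print(f"Cooling plant  : about {cooling_tons:.0f} tons")
```

Once those figures are fixed, they rarely change, and that is exactly the assumption Cloud workloads break.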

Cloud Computing changes all that…
The modern, Cloud-supporting Data Center needs to be as intelligent and as dynamic as the systems it houses.

There is a need for comprehensive monitoring and control of the mechanical and electrical infrastructures to ensure that the dynamic demands for electrical power and equipment cooling are met at the same rate of change as the servers themselves. A static infrastructure will no longer suffice to meet these needs.

When servers increase their CPU use, they begin to require additional power for each of their subsystems, from the CPU itself through to their on-board fans. The additional power and computing “effort” translates into increased heat generation.

Increased heat generation by fully utilized servers will significantly change the heat signature, or hot spots, throughout the hot floor. This means that not only is there an increased need for cooling, but the specific location where that cold air needs to be delivered changes dynamically as well.
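To put rough numbers on that relationship, the standard sensible-heat approximation links a rack's electrical load to the airflow needed to carry the heat away; the 3 kW, 10 kW, and 10 K figures below are purely illustrative:

```python
def required_airflow_m3h(it_load_kw: float, delta_t_k: float) -> float:
    """Airflow needed to remove a sensible heat load.

    Uses Q = P / (rho * cp * dT), with air at roughly 1.2 kg/m3 and a
    specific heat of about 1.005 kJ/(kg*K); returns cubic metres per hour.
    """
    rho_cp = 1.2 * 1.005                      # kJ per cubic metre per kelvin
    return it_load_kw / (rho_cp * delta_t_k) * 3600

# The same rack at partial and at full utilization (illustrative figures).
for load_kw in (3.0, 10.0):
    flow = required_airflow_m3h(load_kw, 10.0)
    print(f"{load_kw:4.1f} kW rack, 10 K air rise -> about {flow:4.0f} m3/h of cold air")
```

Roughly tripling the load triples the cold air that has to arrive at that exact spot, which is why static airflow planning falls short.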

The use of robust monitoring systems is the first step towards effective management of the Cloud Data Center’s mechanical and electrical infrastructures, the very infrastructures which will support the ever changing environment of Cloud Computing. Systems must be in place that can detect, perhaps even predict, significant changes in server activity and its associated draw on electrical power.

It is indeed the first step, but by no means the whole solution, to keeping the Cloud Data Center out of “hot water”. These monitoring systems need to be coupled with control mechanisms that send the right instructions to the electrical and cooling infrastructures so they can adapt dynamically to the new requirements.
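A minimal sketch of that monitoring-plus-control coupling might look like the loop below; the sensor reads, the CRAH fan-speed command, and the gain are hypothetical placeholders, not any particular BMS or DCIM product:

```python
import random
import time

TARGET_INLET_C = 24.0    # assumed rack-inlet temperature setpoint
GAIN = 5.0               # assumed % of fan speed per degree of error

def read_rack_inlet_temps():
    """Stand-in for rack-inlet temperature sensors."""
    return [random.uniform(22.0, 28.0) for _ in range(12)]

def read_it_power_kw():
    """Stand-in for branch-circuit power monitoring."""
    return random.uniform(150.0, 400.0)

def set_crah_fan_speed(percent):
    """Stand-in for a cooling-unit fan-speed command."""
    print(f"  CRAH fan speed -> {percent:.0f}%")

fan_speed = 60.0
for cycle in range(3):                        # a few control cycles, for illustration
    hottest = max(read_rack_inlet_temps())
    load = read_it_power_kw()
    error = hottest - TARGET_INLET_C
    fan_speed = min(100.0, max(30.0, fan_speed + GAIN * error))
    print(f"cycle {cycle}: IT load {load:.0f} kW, hottest inlet {hottest:.1f} C")
    set_crah_fan_speed(fan_speed)
    time.sleep(0.1)                           # a real loop would run continuously
```

A production system would add alarms, interlocks, and coordination across many cooling units, but the principle is the same: the cooling follows the measured load rather than a fixed design assumption.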

Doing it right…
The “secret sauce” of a Cloud Data Center with a sound M&E infrastructure is in the design. A proper design will support these changes in requirements, adapt to them, and do so in a straightforward, robust, yet energy-efficient way. Pretty Poor Planning means Pretty Poor Execution.

A properly planned mechanical and electrical design of the Data Center infrastructures, focused on Cloud Computing operations, will define the best monitoring and control systems, coupled with equipment whose specifications can support the dynamic demands those control systems will place on it.

Power systems, transformers, UPSs, generator sets, HVAC, plate heat exchanger cooling, and fire suppression systems are all commodities; they can be purchased from several international firms with years of experience and successful installations throughout the world. Once the size and features have been defined, the purchase decision is usually made on cost, support at a particular location, reputation, availability, and the like.

The significant difference in the operational success of a Cloud Data Center, or of any Data Center for that matter, lies not in the equipment installed as such, but in the proper planning and execution of its design. The design team must comprise expert mechanical, electrical, plumbing, IT, and other discipline professionals who understand the Data Center for what it is: a highly specialized environment unlike any other, where very expensive and often temperamental equipment is housed for the purpose of handling sensitive customer information, which can often make or break their business.

What is at risk is nothing less than the business of each and every tenant of the Data Center, and of the Data Center company itself.

To conclude…
In a world obsessed with gadgets and technology equipment, proper preparedness and design are often overlooked, or undervalued. In fact, it is in the design phase that the most potentially costly mistakes will be identified and corrected, before they ever happen.

Ensuring correct design and specifications before the data center has been built and the equipment installed, before it is too late and too costly to make changes, is what every data center owner should strive for. A properly designed data center with adequate power, cooling, and space will ensure an effective and profitable operation for a long time.

About the new DCS contributor
Dr. Garcia de la Noceda is an IT and telecommunications professional with over 22 years of in-depth experience managing complex systems, services and solutions in the government and enterprise sectors in Europe and the USA. His experience has been gained with international industry leaders such as Computer Sciences Corporation (CSC), BBN (Verizon), Amdocs, Lucent, Terremark (Verizon) and The Climate Project. Dr. Garcia de la Noceda holds a Doctorate in Political Science and an MBA focused on Information Systems. He is currently the Deerns Global Data Center Practice Manager for the USA and the Americas. Deerns is a Netherlands-based engineering consulting company with over 80 years of experience, focusing on critical infrastructures such as data centers, call centers, operations centers, airports, hospitals, clean rooms, and Level 3 and 4 biohazard labs. Deerns’ approach is particularly focused on energy efficiency, reliability, and growth.